If the web is a communication medium, today it is not a mass medium in the traditional sense. Broadcast mass media communication is standardized (providing the same content for each user) and generalized (addressing everyone). What appears on the screen of our computer or smartphone, or what our personal assistant tells us, instead, is different from what anyone else receives. We are addressed by name and informed about restaurants and happenings in our surroundings, or about sporting events that may interest us; we are notified of our appointments, of traffic conditions on the routes we take, or of birthdays of friends and relatives; we receive music playlists and movie suggestions matching our tastes. We come to know what happens in the world through the tailored format of our news feed, and when we look for information, Google presents us with results especially selected for us—as well as with a multitude of commercial ads that are supposed to specifically meet our wishes.
Whereas mass media communication is anonymous, communications on the web are increasingly personalized. Being personally addressed by machines, however, is different from being personally addressed by actual persons. Algorithms do not know us nor do they understand us, yet profiling techniques make it possible to provide each user (a reader, a viewer) with targeted information related to their interests and needs. In both cases (analog and digital), the outcome is a specific message for a single recipient; but algorithmically constructed profiles have very different compositions from the kinds of personalization used by human communication partners—and very different results. A lively debate is currently investigating the forms that this difference takes in digital communication.1 Alexa calling us by our name, to which we respond by asking her for advice, is not the same as a conversation with a friend or colleague—but in what ways, and with what consequences? Does this form of “de-massification” in media create space or expand it for the self-realization and individualization of users? Could it be doing the opposite?2
The participation of algorithms in communication raises new issues concerning the role of those on the receiving end, and the meaning of personalization in general. Is communication personalized if the receiver actively intervenes in and shapes this process, or shall we speak of personalization as something that directly addresses the individual context or perspective of the receiver? In the first case, the user themself personalizes the message they receive; in the second case, this message is personalized by someone or something else. Are we personalizing or are we personalized? Or perhaps depersonalized?
In traditional mass media communication, the difference between the two options is elusive, since the different dimensions of personalization mentioned are, for these media, overlapping—if not absent entirely. All mass communications are standardized (they cannot be changed by the users, who passively receive them) and generalized (they do not refer to the context or the perspective of any one receiver). On the web, however, algorithms can effect personalization in both directions, addressing different communications to groups of users with different interests, or with consideration of the concrete situation of each receiver. To investigate this hypothesis, I focus in this chapter on two different (and potentially complementary) forms of “de-massification” used in algorithmic profiling: the identification of specific groups of users through behavioral profiling and collaborative filtering, and the addressing of situations of single users through context-oriented systems. The outcome, I argue, is an unprecedented combination of profiling and active intervention by individuals, a state of affairs which is further defined and discussed in the last section of the chapter.
Our media world was transformed in the early 2000s by the arrival of Web 2.0,3 a technological innovation that led quickly and seemingly spontaneously to a cascade of further innovations in communication and practices of identity. The initial change itself was somewhat minor—the infrastructure of the web remained the same as it had been for “Web 1.0,” being based on the TCP/IP communication protocols—but the programming technologies used to create documents altered more radically. Moving beyond the then-standard HTML, which is used to produce static hypertext documents, programmers began to also use tools such as Ajax and Adobe Flex, which allowed for the creation of more dynamic pages, open to the contributions and interventions of their visitors. The result was disruptive, as it soon became clear that Web 2.0 had brought with it unprecedented forms of participation and openness that themselves quickly gave way to previously unthinkable forms of communication, including: the contemporary universe of UGC (user-generated content), which involves blogs, wikis, and more modern content-sharing services like YouTube or Flickr; the proliferation of tags (indices of content through keywords); the multiplication of aggregators like Google News and the Huffington Post; and, of course, the entirety of what we now know as social media.
Many applications that used to run on the user’s computer are now run on web servers that allow for cloud computing, which is the dissolution of the web into a nebula of computers and interconnected archives accessible to everyone through computing devices that are themselves almost devoid of software and data. As was observed more than a decade ago,4 this move transformed the World Wide Web into a World Wide Computer, one that harnesses its processing power and data from each of its interconnected devices in an eternally fluid, continuous process of updates and revisions (i.e., it is “permanently beta”).5
This turbulent universe was given names such as the “participatory web,” underlining the unprecedented involvement of users, and the expectation that this would cancel the distinction between sender and receiver. Emblematic of this approach is the figure of the “prosumer,” who at the same time, and by the same means, uploads and downloads content. This began in the early 2000s with communication protocols for peer-to-peer sharing such as BitTorrent and eMule, in which a “swarm” of hosts can upload to/download from each other simultaneously. Users who download files containing songs or video clips can at the same time offer their files (and by extension the use of their storage capacity) to other users.
The move from participation to individualization came soon after, a shift that led Time magazine to proclaim “You” its person of the year in 2006. It was widely believed that, through user participation, the web would allow everyone a more fully developed, individual experience online—a uniqueness that had hitherto been impossible due to technical and other constraints. Web 2.0, open to all, would be a world of unsurpassed individualization. Wasik speaks of this in terms of a “celebration of the self”: individuals can configure their media world to their liking and according to personal interests, in a manner that best expresses individuality.6
It seemed then that we would soon be rid of the outmoded category of the passive consumer. In the new “architecture of participation,” no one would be just a consumer anymore.7 A more independent and active model of the individual would emerge,8 marking the “end of the couch potato era” that characterized mass culture.9 According to this interpretation, the open and interactive World Wide Computer would overcome the asymmetries of broadcast media, in which the position of its (many) receivers was neatly separated from that of its (few) broadcasters, and “downloads” (onto televisions, radios, etc.) were immensely more numerous than uploads.
This interpretation assumed the active role of participants would transform all familiar forms of communication. Journalism would move from a lecture model to that of a conversation or seminar, which would involve the audience configuring, selecting and often actively producing the news.10 The one-to-one marketing model would establish a learning relationship between producers and consumers, who would get “exactly what they want—when, where, and how they want it.”11 Advertisements, which in their traditional forms had suffered a progressive loss of effectiveness, would move toward targeted ads, including personalized banners on web pages oriented toward users’ individual interests, tastes, and preferences. Indeed, in the most advanced forms of direct marketing, consumers would voluntarily produce their own ads for themselves by interacting with games and virtual worlds made available by companies.12 Static narration and fiction would evolve toward the new generation of interactive stories, steered by choices made by their audiences.13
Is that what happened? After almost two decades of experience, we can see that these predictions have been confirmed and refuted at the same time. There have been transformations, yet their consequences are more complex than expected—and in many cases different altogether.14 Today’s news media are certainly more personalized and decentralized, but also hampered by forms of users’ isolation like filter bubbles and echo chambers, not to mention the unavoidable issue of fake news.15 Online advertising is affected by a growing “banner blindness” in which users, instead of looking at customized ads, try to avoid or ignore them.16
Traditional forms of fiction, instead of disappearing, have multiplied in the new model of on-demand streaming services which, while allowing users to experiment with how they consume media, almost never allow for direct audience intervention as a story progresses. Interactivity in fiction, although technically possible,17 remains rare. The case of the Ukrainian TV series Servant of the People, instead, shows an intertwining of fiction and reality that goes beyond the familiar condition in which observing reality is unconsciously influenced by mirroring in fiction, with real consequences. In the Servant of the People model, the consequences are conscious and deliberate: the members of the audience (who are Ukrainian voters) choose to make the fiction real. The series presents the vicissitudes of a high school history teacher who is indifferent to politics, yet ends up elected president of the republic. After three seasons of the TV series, the actor who plays the protagonist, Volodymyr Zelensky, was elected president of Ukraine in spring 2019, leading a party with the same name as the series. In a sense, Ukrainian voters decided to enter the mirror.
The future often holds surprises, but in hindsight, we can see that predictions from the early years of Web 2.0 were significantly misguided. Prognosticators assumed that active personalization was preferable and would be looked for wherever possible, while standardization (having the same communication for everyone) would only ever occur due to technical constraints of earlier media, and was destined to disappear with digital progress. According to this view, the audience would always want to be proactive in shaping its media world, and had become passive “couch potatoes” only because the medium did not allow for anything else. Technological innovations related to digitization would finally offer the possibility of satisfying the desire of citizens to always be creative and original, as active users wanting personalized communication.
It didn’t happen that way. Today we see that the possibilities of personalization did not eliminate standardized communication. Instead, new combinations of activity and passivity, and of individualization and anonymity, arose in audiences. The presumed contrast between personalization and standardization, however suggestive, proved to be too simple an explanation. Personalization is not always useful, or even desirable, and the medium of standardized communication can still provide creative, autonomous offerings.
Standardized broadcast media, which do not allow for intervention by the individual, also have the power to select the topics that will become a common object of attention. In making the same message available to all members of their audience, they let everyone know what others know.18 The issues discussed in traditional mass media can be taken for granted regardless of the opinions, orientations, and idiosyncrasies of each individual. This minimum reference is the basis for the establishment of a public sphere and a collective reference. As a consequence of this mass mediation, I would argue, people are informed not only about the issues that interest them and that they would actively look for, but also about topics in which they have little to no interest—and this is a remarkable accomplishment.
The standardized communication of mass media, moreover, can offer ample space for personal configuration. The individual reader of a book can decide for themself the rhythm, the speed and the order of reading; they can slow down or accelerate, go back, start from the end, or skip passages, and compare the text with other texts that confirm, contradict, or integrate it. In doing so, each reader produces a specific communication, corresponding to their characteristics, interests, and knowledge, and different from that of any other reader.
Personalized communication can be oppressive, while standardized communications, which are the same for everyone, can allow individualized users to be active and autonomous—something we can clearly see today. Whereas mass media communication does not require that we grasp and develop the variety of approaches between individual autonomy and collective reference, the intervention of algorithms has the effect of unfolding the complexity of possible communicative forms with the diverse combinations of anonymity and personalization that we observe today: filter bubbles, selfies, flash mobs, influencers, social media, targeted shopping, reverse profiling, avatars, and many other unprecedented patterns. To analyze this variety we need a more articulated range of dimensions, expressing on the one hand a reference to the individual context of the receiver (or the lack thereof), and expressing on the other hand the receiver’s active intervention (or the lack thereof).
In our digital society the configuration of communication is changing. Unprecedented forms of communication relying on the active role of algorithms are being tested, and the media landscape of society is transforming. In the following pages I explore these recent developments, with reference to the concept of virtual contingency introduced in chapter 1. The concept indicates the ability of algorithms to exploit the behavior and unpredictability of users to learn and act on communication in complex and appropriate ways. Algorithms, which are not, and must not be, intelligent, use big data to feed on the intelligence of users and to learn to act as smart and engaging communication partners—and also to address individual communications to each of us. In digital communication, I argue, virtual contingency produces an unprecedented interweaving between activities of users and generalized references, yielding innovative configurations.
From the perspective of the user, traditional mass media communication could be personalized only if audience members actively intervened and configured the messages. If the communication one got was to differ from that of others, one had to take steps to personalize it. Instead, algorithms today can take charge of this process. In many web services, each user receives content or messages different from what others are receiving—without “doing” anything in the conventional sense. Personalization of communication does not necessarily require active receivers anymore.19
From the perspective of the sender, traditional mass media communication was either directed toward everyone—being general and noncontextual—or it targeted a specific person at a precise moment in time in a manner that was ill-suited for other recipients. Today’s algorithms, instead, can provide specific references through completely automated, generalized procedures that do not even require personal information such as names or addresses. Awareness of this possibility spread in the general public during a case in which the retailer Target, in identifying a pregnant woman before her parents knew about it, showed that it is possible to reconstruct precise information about a person using only anonymous data available on the web.20 Communication can be addressed to everyone, and yet can also refer to the specific context of each receiver.
Traditional distinctions implode in this process. New forms of digital communication seem to produce a paradoxical form of mass personalization and generalized individualization—specific and local, for everybody, everywhere.21 The paradox, however, is resolved if one considers the new agents participating in communication: algorithms. To describe and explain the resulting forms of communication, we need to account for their active role.
In fact, profiling techniques that rely on algorithmic procedures are developing new ways of dealing with individuals. They can address individuals as tokens of a class (“you and others like you”), or they can refer to them on the basis of their specific activity and context (“where you are and what you do”). The corresponding forms of personalization are very different.
With automated recommendations, for example, systems based on behavioral profiling are distinguished from context-oriented systems.22 The former target a user’s active participation on the web as representative of their interests, while matching these to the interests of other users rated as similar to them. Developing classic statistical segmentation techniques, these systems focus on increasingly restricted groups, ultimately targeting the individual. The availability of huge quantities of data from different sources makes it possible to segment a group more and more, ideally going as far as ending up with a segment of one. Through big data and virtual contingency, algorithms use prior behaviors of users and the behaviors of others to provide information that matches (or is assumed to match) one’s specific interests on the basis of past choices and of the interests of “you and others like you.”
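The “segment of one” logic of behavioral profiling can be illustrated with a minimal, hypothetical sketch of user-based collaborative filtering. The user names, ratings, and similarity measure below are invented for illustration and do not describe any actual recommender system:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two rating vectors (0 = unrated)."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(target, ratings, k=2):
    """Suggest items the target has not yet rated, weighted by the
    ratings of the k most similar users ("you and others like you")."""
    others = [u for u in ratings if u != target]
    neighbors = sorted(
        others,
        key=lambda u: cosine(ratings[target], ratings[u]),
        reverse=True,
    )[:k]
    scores = {}
    for i, own in enumerate(ratings[target]):
        if own == 0:  # only items the target has not yet rated
            scores[i] = sum(ratings[u][i] for u in neighbors)
    return sorted(scores, key=scores.get, reverse=True)

# Rows: users; columns: items (hypothetical ratings, 0 = unrated)
ratings = {
    "ann":  [5, 4, 0, 1],
    "ben":  [5, 5, 4, 0],
    "carl": [1, 0, 0, 5],
}
print(recommend("ann", ratings))  # item 2, because ben (most similar) rated it highly
```

The same mechanism scales from broad market segments down to a neighborhood of one user: the more data available, the narrower the set of “people like you” that the prediction draws on.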
In context-oriented systems, on the other hand, the focus is on the situation and the intent of individual users.23 If you are looking for food in Naples in summer, you get recommendations for pizza and salad.24 Here too the algorithms use huge amounts of data, yet these data are generated within a given context, provided by various sensors (from smartphones, the Internet of Things, etc.) and by other local sources. In this kind of system, “context may include the time of the day, the location of the user, the device used to access information or the companion with whom an activity is undertaken.”25 A user receives recommendations based on what is occurring around them in the moment and on what they are trying to accomplish—that is, based on “your situation” instead of that of “others like you.”26
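The Naples example can be sketched, with invented data, as a context-oriented filter that matches the user’s current situation rather than their history or their peers. The catalog and the context fields (city, season) are hypothetical placeholders for what sensors and local sources would supply:

```python
# Hypothetical catalog of items annotated with contextual attributes.
catalog = [
    {"item": "pizza margherita", "city": "Naples", "season": "summer"},
    {"item": "salad",            "city": "Naples", "season": "summer"},
    {"item": "hot soup",         "city": "Naples", "season": "winter"},
    {"item": "fondue",           "city": "Zurich", "season": "winter"},
]

def contextual_recommend(context, catalog):
    """Return items matching every field of the current context
    (e.g. location from GPS, season from the clock): recommendations
    follow "your situation," not "others like you"."""
    return [entry["item"] for entry in catalog
            if all(entry.get(k) == v for k, v in context.items())]

print(contextual_recommend({"city": "Naples", "season": "summer"}, catalog))
# ['pizza margherita', 'salad']
```

Note that no profile of the user’s past behavior is consulted at all: two strangers standing in the same place at the same time would get the same suggestion.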
Of course, profiling techniques can combine both systems to target their users.27 Nevertheless, the two approaches are conceptually different—and in both cases, receivers can adopt a passive or an active attitude. To understand the forms and social consequences of algorithmic profiling, we must distinguish the corresponding possibilities in a new frame of reference.28
The table below presents my proposal for describing digital communication along two dimensions of profiling: according to the activity of a group of users (behavioral profiling), and according to the specific situation of the single user (contextual profiling).

                                  with contextual profiling        without contextual profiling
   with behavioral profiling      algorithmic individualization    purely behavioral profiling
   without behavioral profiling   purely contextual profiling      reverse personalization
Let’s start with purely behavioral profiling—represented here by the top right corner of the table—which selects communications addressed to the members of a group identified through collaborative filtering (for “people like you”). Each user shares these communications with other people in different situations, as was the case in broadcast mass media.29 Communications are generalized, although in this digital form, not to everyone. When the single user gets a news feed, for example, the generalized components of algorithmic communication no longer refer to the public as a whole, but only to one segment—those people connected with that user by profiling techniques. The generalized reference is thus not the general public.
This issue is widely discussed in the debate on filter bubbles. The expression, introduced by Eli Pariser in The Filter Bubble: What the Internet Is Hiding from You, is based on observations of the participatory web, and in particular on innovations introduced by Google in 2009. Since at least 2009, Google has not been delivering the same search results to everyone, but provides information specifically referring to the perspective of those people the algorithm connects a user with. As a result of the filters operating on the web at all levels (with Google, and also with Facebook, Twitter, and all kinds of digital aggregators), individual audience members are isolated in a sort of cultural bubble preventing them from accessing information that does not agree with their perspective. They no longer have to pay (with money or attention) for information that does not hold personal interest: no more overviews of markets in which they lack investments, results for sports that they do not follow, gossip and culture news for which they do not care, and so on. As Herrman observes, in these services, filter bubbles are not an unintended consequence.30 On the contrary, they are the point, corresponding to the idealized end of massified media promised by services such as PointCast in the late 1990s: the narrowing of broadcast communication down to a single user.
These kinds of personalized news feeds and aggregators are rising in use, yet generalized media seems destined to remain. In fact, Freewheel’s 2018 Video Marketplace Report shows that 58 percent of video consumers in the US and Europe still get their content on TV screens (digital or otherwise), and that premium video services are increasing in popularity and importance compared to user-generated content.31 Traditional news media, such as broadsheet papers and magazines, also continue to exist. Indeed, some newspapers such as the New York Times and the Washington Post have been increasing readerships—though often through digital versions with new features and services.32
It would appear that the generalization function of the traditional mass media remains fundamental and has not been supplanted by individualized news feeds. We are still interested in knowing what others know, getting information that might not interest us personally. In fact, in many cases the most aware and informed citizens find it attractive to go beyond individualized content. Internet companies that offer personalized news services, such as Facebook and Buzzfeed, have recently been moving toward the model of traditional journalism, including having editorial offices with dedicated staff.33 The result is, of course, not a move back to the broadcast model, but one toward new combinations of the activity of algorithms and the passivity of users. Indeed, in some cases, specific “anti-isolation” services are proposed that introduce into personalized newsfeeds content from political perspectives deemed contrary to one’s own ideology (such as left-wing or right-wing), with the explicit purpose of mitigating political polarization.34 Filters themselves are filtered against bubbles.
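A minimal sketch can make the “anti-isolation” idea concrete. The feed, the perspective tags, and the injection interval below are all invented for illustration; actual services differ in how they identify and schedule contrary content:

```python
def diversify(feed, pool, every=3):
    """Sketch of an "anti-isolation" filter: after every `every`
    profile-matched posts, inject one post tagged with a contrary
    political perspective. Tags and cadence are hypothetical."""
    contrary = [p for p in pool if p["perspective"] == "contrary"]
    out = []
    for i, post in enumerate(feed, 1):
        out.append(post["title"])
        if i % every == 0 and contrary:
            out.append(contrary.pop(0)["title"])
    return out

# Hypothetical personalized feed plus a pool of contrary-perspective posts
feed = [{"title": t} for t in ("A", "B", "C")]
pool = [{"title": "X", "perspective": "contrary"}]
print(diversify(feed, pool))  # ['A', 'B', 'C', 'X']
```

The filter does not abolish personalization; it layers a second, corrective selection on top of the first, which is precisely the “filters filtered against bubbles” condition described above.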
Returning to the table above, at the bottom left corner, we find the inverse to purely behavioral profiling in purely contextual profiling, in which the individual user receives messages tailored to their specific situation in space and time. Actively exploiting context-orientation, users can configure their communication and experiment with innovative ways of observation and self-observation.35
The ubiquitous phenomenon of selfies, for example, demonstrates one way in which the presentation of self in public can be transformed using digital techniques.36 A selfie is not simply a photograph of oneself, like one might create using a timer on an analog camera. The automatic timer records the image from the perspective of someone else observing us: we see how an “other” sees us. In most cases, instead, the selfie is produced by way of a specific function offered by the smartphone that uses photo software to invert the image so that it looks like what one would normally see in a mirror.37 The selfie then records the self-image that each of us sees in the mirror, rather than an external image, and this image is immediately posted on the web and shared with others.
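The mirroring operation itself is trivial to state precisely. As a toy sketch, with a character grid standing in for pixel rows, the front-camera “mirror” view is just a horizontal flip of each row:

```python
def mirror(image):
    """Horizontally flip a tiny image given as rows of pixels,
    as a front camera does when it shows a mirror-like preview."""
    return [row[::-1] for row in image]

# Toy "photo": characters stand in for pixels
photo = [
    ["L", ".", "R"],
    [".", "X", "."],
]
print(mirror(photo))  # [['R', '.', 'L'], ['.', 'X', '.']]
```

The point of the sketch is that the selfie preserves the self-image one knows from the mirror, whereas the timer-shot photograph shows the laterally reversed view that others see.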
Selfies are a typical example of social photographs—“everyday images taken to be shared”38—and are used to create a digital equivalent of the presentation of self that occurs in real face-to-face interactions.39 We build our identity by seeing ourselves through the eyes of others, yet now what others see of us is the image we choose to present, one often processed with software tools: I “see me showing you me.”40 The user of these digital technologies actively configures a self-presentation, which becomes the basis of external observations (likes, tags, followers and other forms of digital feedback) from which the user learns who they are.
How does this condition affect the constitution of personality? Strands of research are already exploring this question in sectors relying most heavily on digital communication. A study by Formilan and Stark, for example, addresses the interesting phenomenon that electronic artists will often have many aliases—up to a dozen or more.41 These aliases, with which an artist makes themself known to their public, are different from traditional pseudonyms, stage names, or the masks that, according to Erving Goffman, we wear to present the different aspects of our individuality. Like everyone else, electronic artists possess an individuality, even if it involves multiple representations, and are aware of it. Through their aliases, however, they experiment with alternative digital identity constructions that do not fully belong to them since their audience contributes in constructing them.
Aliases are “projected identities,” “trial balloons” launched into the digital world in order to produce feedback that artists can acknowledge and elaborate upon. Through their aliases, artists learn who they are from their interactions with audiences—a process of continuous curation that leads digital identities to change, consolidate, or even disappear. There is nothing authentic either at the beginning or at the end of this process of mirroring and differentiation, insofar as, in more than one case, the artists decide to take their given name as their alias or one of their aliases.42 Jesse Abayomi (real name), known in the electronic music scene as Zone 3 and Iroko, finally chose Abayomi as an additional alias,43 reached through an identification path involving his audience. It is as authentic as any of his other aliases44—or as any of the so-called white labels under which electronic artists release tracks with anonymous identities. Digital audiences can also take advantage of the intervention of algorithms in communication to actively experiment with innovative forms of belonging and detachment, recognition and rejection.
The two types of profiling discussed can be combined into forms of algorithmic individualization—top left in the table—yielding communications that are both contextualized (according to the situation of the receiver), and personalized (referring to their individual behavior and the behavior of similar people). Particularly since the adoption of sophisticated machine-learning techniques, the intervention of algorithms makes it possible to offer to each user a specific message, one that matches their interests and is tailored to their specific context. Anyone registered on Facebook automatically receives personally contextualized content when accessing their personal web page, alongside the posts of digital friends. The same happens in online music, e-commerce recommendations, e-learning, news, and tourism systems, including advertising and various forms of targeted offers.45 Two users doing the same search on the same site get different individualized answers on their screens, referring to their interests, their behavior, their location and their moment in time—without any active intervention involved.
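In the simplest terms, such a combination can be sketched as a weighted blend of a “people like you” score and a “your situation” score. The item names, score values, and the weight of 0.6 are illustrative assumptions, not documented parameters of any real platform:

```python
def hybrid_score(behavioral, contextual, w=0.6):
    """Blend a behavioral ("people like you") score with a contextual
    ("your situation") score. The weight w = 0.6 is an illustrative
    assumption; real systems learn such weights from data."""
    items = set(behavioral) | set(contextual)
    return {item: w * behavioral.get(item, 0) + (1 - w) * contextual.get(item, 0)
            for item in items}

# Hypothetical scores produced by the two profiling routes
behavioral = {"concert": 0.9, "museum": 0.4}   # from "others like you"
contextual = {"museum": 1.0, "concert": 0.2}   # from current time and place

scores = hybrid_score(behavioral, contextual)
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # ['museum', 'concert']
```

Here the contextual signal overrides the behavioral one: although peers favor the concert, the user’s present situation tips the ranking toward the museum, without the user doing anything at all.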
This “real-time individualization” of a site to suit a visitor’s unique needs relies on the use of contextual data and on segmentation of the universe of users based on increasingly detailed information produced with behavioral profiling.46 It is a kind of individualization in which the receivers are no more active than the couch potatoes addressed by generalist media, yet they get a personalized communication tailored to their situation, tastes, and inclinations. Users do not personalize, they are personalized.
We are dealing with a form of web communication that combines context-oriented and behavioral profiling, a form that does not depend on user intervention, yet is contextualized and different for everyone. Several researchers have been investigating this, using labels like “new algorithmic identity,” “data subjects,” and “algorithmic individualization.”47 Nothing is personal in these forms of personalization.48 Our identifications do not rely on our essential features or on the inherent characteristics by which we recognize ourselves. The focus shifts to our history of interactions with the web, and to identifications that are rather “made for us” through statistical models based on sensors and on web use.49 Even if these digital identities start from the active behavior of users on the web, the role of their subjects ends up in a form of “interpassivity” in which individuals are “enacted” as “data doubles” they do not control.50 The resulting form of individuality is deeply different from the modern one in which everyone actively observes, tests, and recognizes his or her specificity: “on personalized platforms there are in fact no individuals, but only ways of seeing people as individuals.”51
Accomplished algorithmic individualization could be seen as the full realization of the fantasy of the participatory web of the 2000s, which promised to acknowledge the uniqueness of each user. Now that we inhabit a properly individualized web, we have come to understand that, in addition to the advantages it provides in everyday life, this technology also has many dark sides.52 As Pariser argues, having access to information often no longer means having access to a shared world, and instead involves an increasingly sophisticated exploration of a more or less extended individualized world.53 Without a common point of reference, we would not know what others know or do not know—nor indeed would we be able to judge our own ignorance on the matter. The problem is not so much the management of knowledge but the management of “un-knowledge.”54
In the personalized web each user accesses their own specific content: a user sees things that many others do not see, while often not seeing the things that others do.55 Individualization not only affects the way the world is presented to the observer, it also modifies the world itself. Realizing this effect can trigger feelings of rejection, transforming what was otherwise a sense of empowerment into one of passivity and impotence. Users tend to think, “This is creepy” instead of “This is helpful.”56 In these cases, individualized communication does not make you feel unique and productive, but isolated and “massified.”
People often take action against the excessive interventions of algorithms, yet they frequently do so by resorting to other algorithms. In the last few years, the use of ad-blocking software—specific forms of algorithms that protect users from web page advertisements—has been spreading rapidly.57 This creates a paradoxical condition in which the individualization of users tends to block the very conditions that make it possible.58 Ad blockers, in fact, operate by blocking cookies, pop-ups, embedded video and audio, and especially the tracking devices that collect data about the individual user. The individualized user of the participatory web thus blocks the very production of big data that feeds the virtual contingency of the algorithms, and thereby blocks the individualization of communication.
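The filter-list principle these tools rely on can be sketched in a few lines. The patterns below are illustrative stand-ins for real filter lists such as EasyList: each outgoing request URL is matched against a list of tracker and ad patterns, and matching requests are never sent, so the data points they would have produced never come into being.

```python
import re

# Hypothetical filter list in the spirit of EasyList: patterns
# that identify tracker and ad URLs.
FILTERS = [r"tracker\.", r"/ads/", r"analytics", r"doubleclick\.net"]
BLOCKLIST = [re.compile(p) for p in FILTERS]

def should_block(url):
    """Block the request if any filter pattern matches the URL.
    No request, no data point: the 'data double' stops being fed."""
    return any(p.search(url) for p in BLOCKLIST)

requests = [
    "https://news.example.com/article.html",
    "https://tracker.example.net/pixel.gif?uid=42",
    "https://cdn.example.com/ads/banner.js",
]
allowed = [u for u in requests if not should_block(u)]
print(allowed)  # only the article itself survives
```

The paradox described above is visible even here: the user delegates the defense of their individuality to yet another algorithm, one that works by suppressing exactly the traces on which individualized communication depends.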
In opposition to algorithmic individualization, where particular combinations of different profiling techniques produce a situation of passive user customization, web communication may still enable users to individuate themselves on their own terms.59 Owing to the decentralized and open nature of much of the web, many of the profiling tools used by algorithms are observable by users, who can exploit them to create a sort of reverse personalization—represented here in the bottom right corner of the table—in which they actively configure their communication.
One example of this comes from influencers on the web, who address an audience that is itself expert and watchful.60 The audience of the participatory web consists of active users who dig into the web and discover the rules behind the behavior of its participants (senders and receivers)—and therefore also of themselves as users of the medium: as Wasik puts it, “the participants become their own show.”61 In many cases users deal with the web knowing that the relationship is shaped according to the data of “people like them,” and stage this circularity. “Tribes” on the web experiment with the ways in which they observe themselves. Typical digitally triggered phenomena such as flash mobs—as originally described by their inventor, and as actually carried out—lacked content, and participants were aware of this.62 The point of the show was that there was no show at all: “pure scenes,” where the participants observed themselves observing the event.
In the same way, prosumers who upload content on the web are largely not amateurs and do not naively transfer their personal data such as holiday memories onto YouTube without observing how they will be observed. The declared goal of the vastly successful social networking service TikTok is to stimulate and support users’ creativity, freeing them from technological difficulties and offering a place where everyone can become active participants.63 The basic challenge of these services is how to get people to engage with them.64 These users are mostly people acutely aware of being observed, who act on the basis of a meta-understanding of digital communication and its mechanisms. The result is a mass communication in which “the consumer himself is the Big Brother,” using refined tools to observe himself, others, and their interventions in communication.65
These innovative developments are highly revealing about the meaning and forms of users’ active interventions in communication—and also about the reasons for the failure of certain connected projects that had raised high expectations. Interactive fiction, for example, in which the reader/viewer was expected to help determine the course of a story, had little success after the initial curiosity wore off. Audiences do not seem interested in deciding the plot of novels or movies, even if (and precisely because) these readers/viewers can be deeply affected when the story does not go as they wished. Since the modern period, in fact, the value of fiction has lain essentially in observing the observation of others, entrusting an invisible author with the creation of a narrated world, its events, and its characters. As such, it is an invented world, and we know it.
Precisely because it is not real, fiction allows us to do something that in “real reality” is impossible: to observe others as if we could read their mind.66 Audience members want to observe how others observe, thereby experimenting with perspectives different from their own, and potentially learning to observe themselves and their own perspective.67 For this purpose, the separation of the fictional world from the real world must be maintained, and with it also the impossibility of intervening directly in the plot. You cannot enter the mirror if you want to be reflected in it.
This kind of fiction still has a fundamental function, although now a new combination is emerging that takes advantage of the intervention of algorithms and that reshapes the distinction between the narrated world and the lived world. Video games, one of the most influential forms of digital communication, use algorithms to offer the users the possibility of active intervention in the game world, developing a highly innovative “grammar of fun.”68 Through virtual contingency, video games go beyond the modern model of storytelling and reading, yielding an active experience for the gamer while still enabling their entry into the mirror of fiction.69
Like novels, video games can be designed from the perspective of a character involved in the depicted events (first-person point of view, or POV) or from an external perspective (third-person POV).70 But the player of a first-person POV video game does not only observe the world through someone else’s eyes. Contrary to the basic rule of fiction and the centrality of its perspective, the player also acts in the (virtual) world and lives a particularly immersive game experience71—shooting, hiding, running away from enemies. However, they cannot see themselves in the game; typically, the only part of an avatar’s body that the player can see is the hands.72 In a third-person POV, by contrast, the player can see the whole body of their character from a perspective above and behind the avatar. In a game shifting back and forth between first- and third-person POV, a player who identifies with and acts through an avatar can also observe their virtual self through the eyes of another. For the first time, the video game offers a space in which the observer sees with the eyes of another not only the world, but also themselves and their own behavior. In the form of the avatar, according to Waggoner, the player experiences a “virtual identity” that allows them to be “both self and not-self,” “other and not other at the same time.”73
Communication mediated by algorithms that learn from the behavior of users is modifying our established forms of standardization and personalization from within. In addition to the modern distinction between individual and collective (or private and public) references, a new equivalent of the public sphere is taking shape:74 one that follows users’ choices, then processes and multiplies them, and then re-presents them in a form that requires new choices. The result is an unprecedented configuration of activity and passivity in relations between senders and recipients, one that can be exploited by both parties.