
Introduction


Algorithms that work with deep learning and big data are getting better and better at doing more and more things: They quickly and accurately produce information, and are learning to drive cars more safely and reliably than humans. They can answer our questions, make conversation, compose music, and read books. And they can even write interesting, appropriate, and—if required—funny texts.

Yet when it comes to observing this progress, we are seldom completely at ease—not only because of our worries about bias, errors, threats to privacy, or malicious uses by corporations and governments. Actually, the better the algorithms become, the more our discomfort increases. A recent article in the New Yorker describes one journalist’s experience with Smart Compose,1 a feature of Gmail that suggests endings to your sentences as you type them. The algorithm completed the journalist’s emails so appropriately, pertinently, and in line with his style that he found himself learning from the machine not only what he would have written, but also what he should have written (and had not thought to), or might have wanted to write. And he didn’t like it at all.

This experience, extremely common in our interactions with supposedly intelligent machines, has been labeled the “uncanny valley”:2 an eerie feeling of discomfort that appears in cases where a machine seems too similar to a human being—or to the observer themself. We want the machine to support our thoughts and behaviors, but when we find what appear to be thoughts and behaviors in the machine, we do not feel comfortable. Today, each of us customarily communicates with automated programs (bots) with little attention given to their nature—when we buy plane tickets online, when we ask for assistance on the web, when we play video games, and on many other occasions.3 Nevertheless, when we reflect on or debate the subject of algorithms, we still find ourselves discussing topics such as the possibilities of a machine passing the Turing test,4 the arrival of a technological “singularity,” or the creation of a superintelligence far beyond human abilities.5 We compare ourselves to machines, and we don’t like it if they seem to be winning. In our endeavors to build intelligent machines, we do not just wonder whether we have succeeded, but whether the machines are becoming too smart.

But is this really what we have to worry about? While we may get an eerie feeling around machines that resemble us a little too closely, should we say that the fundamental risk of algorithms is that they might rival or compete with human intelligence? This book starts from the hypothesis that analogies between the performance of algorithms and human intelligence are not only unnecessary, but misleading—even if the reasoning behind them appears plausible. Today, after all, many algorithms seem to be able to “think” and communicate. In communication as we know it, our partners have always been human beings, and human beings are endowed with intelligence. If our interlocutor is an algorithm, we impulsively attribute to “him” or “her” the characteristics of a human being. If the machine can communicate autonomously, one thinks, “it must also be intelligent,” although perhaps in a different way than humans. On the basis of this analogy, research has focused on the parallels and differences between human intelligence and machine performance, observing their limits and making comparisons.6 But is it really advisable to continue following this analogy?

That we can communicate with machines, I argue, does not imply that they have their own intelligence that needs to be explained (an explanation that may also require explaining the mysteries of “natural” intelligence), but rather that, first and foremost, communication itself is changing. The object of study in this book is not intelligence, which is and remains a mystery, but communication, which we can observe and about which we already know a great deal. For example, we know how communication has changed over centuries and with the evolution of human society. We know that communication has moved from simple interactions between parties sharing physical space to more flexible and inclusive forms, which have also allowed communication with previously inaccessible partners distant in space and time, in increasingly anonymous and impersonal settings.

Within the evolution of communication, the role of human beings has changed profoundly. Today there is no need for partners to be present; there is no need to know who they are and why they communicate, nor to know what they mean and to take it into account. We can read and understand the instruction booklet of a dishwasher without knowing who wrote it and without identifying ourselves with the writer’s point of view; we interpret a work of art without being bound to the perspective and intention of the artist.7 There is no need for most information to be stored in someone’s mind (nobody knows the civil code by heart), and in the case of fiction, we identify with the characters of novels and films knowing that they never existed and that they are not the authors of the communication they convey. The idea of successful communication as a precise sharing of identical content between the minds of participants has been unrealistic for many centuries, in practice if not in theory. In most cases, senders and receivers do not know each other, do not know each other’s perspectives, contexts, or constraints—and do not need to do so. On the contrary, this lack of transparency allows for otherwise unthinkable degrees of freedom and abstraction.

That communication changes its forms is not new and is not an enigma. Rather, the issue is identifying and understanding the differences and continuities between forms old and new. Today, the autonomy of communication from the cognitive processes of its participants has gone a step further. We need a concept of communication that can take into account the possibility that a communication partner may not be a human being, but instead is an algorithm. The result, already observed today, is a condition in which we have information whose development or genesis we often cannot reconstruct, yet which is nevertheless not arbitrary. The information generated autonomously by algorithms is not random at all and is completely controlled—but not by the processes of the human mind.8

How can we control this control, which may itself be incomprehensible to us? This is, in my opinion, the real challenge that machine-learning techniques and the use of big data pose to us today.


The chapters of this book elaborate on this perspective while investigating the use of algorithms in different areas of social life. What do we see, not see, or see differently, if we consider the workings of algorithms as communication, rather than intelligence?

The book opens with a discussion of the adequacy of the classic metaphor of artificial intelligence, as well as derivatives such as neural networks, for analyzing recent developments in digital technologies and the web. The latest generation of algorithms, which in various forms have given rise to the use of big data and related projects, does not try to artificially reproduce the processes of human intelligence. This, I argue, is neither a renunciation nor a weakness, but the basis of their incomparable efficiency in information processing and of their ability to interact with users. For the first time, machines are able to produce information never before considered by a human mind and to act as interesting and competent communication partners—not because they have become intelligent, but because they no longer try to be. The processes that drive algorithms are completely different from the processes of the human mind, and in fact no human mind, nor any combination of human minds, could reproduce them, much less understand the resulting decision-making processes. Yet human intelligence remains indispensable. Self-learning algorithms can calculate, combine, and process differences with amazing efficiency, but they cannot produce those differences themselves; they find them on the web. Through big data, algorithms “feed” on the differences generated (consciously or unconsciously) by individuals and their behavior to produce new, surprising, and potentially instructive information. Algorithmic processes start from the intelligence and unpredictability (from the contingency) of users and rework them in order to operate intelligently as communication partners, with no need to be intelligent themselves.
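To make this mechanism concrete with a deliberately artificial example (a minimal sketch of my own, not a description of any system discussed in the book; the function names and the miniature corpus are invented for illustration), a few lines of Python suffice to build a program that recombines the regularities found in human-written text without any representation of their meaning:

    # Minimal sketch (illustrative only): a bigram "writer" that recombines
    # regularities found in human-written text. It records which word has been
    # observed to follow which, and "predicts" by sampling those observations;
    # nothing in it represents the meaning of what it produces.
    import random
    from collections import defaultdict

    def train(tokens):
        following = defaultdict(list)
        for prev, nxt in zip(tokens, tokens[1:]):
            following[prev].append(nxt)
        return following

    def continue_text(following, start, length=8):
        word, output = start, [start]
        for _ in range(length):
            options = following.get(word)
            if not options:
                break
            word = random.choice(options)  # predict from observed usage alone
            output.append(word)
        return " ".join(output)

    # The "data" is whatever users happen to have written (a toy corpus here).
    tokens = "we write and the machine reads what we write and predicts".split()
    print(continue_text(train(tokens), "we"))

The sketch is of course nothing like a modern deep learning system, but it displays the structure of the argument: every difference the program processes comes from its human users, and its competence as a producer of text owes nothing to an intelligence of its own.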

The subsequent chapters explore the consequences of this condition in practical work with algorithms. In chapter 2, I trace the proliferation of lists in digitized societies to a fact about lists known since antiquity: they make it possible to manage information one does not understand—possibly producing new information as a result. In chapter 3, I analyze the use of visualization in the digital humanities as a technique for making the results of the incomprehensible procedures of algorithmic text processing meaningful. Chapter 4 deals with digital profiling and algorithmic individualization, which implement paradoxical forms of standardized personalization and generalized contextualization, thereby redefining the meaning of “context reference” and “active public.” The enigmas inherent in the attempt to realize a technique of forgetting through algorithms (“remembering to forget”) are the focus of chapter 5, which discusses the possibility of using algorithms for this purpose precisely because of their peculiar inability to forget. Finally, chapter 6 examines the consequences of digitization for the use of photographs, which today seem to be produced to escape the pressure of the present rather than to preserve experiences as memories.

The book closes with an analysis of algorithmic prediction in chapter 7, which wraps up my exploration by returning to intelligence and its digital forms. As increasingly efficient algorithms become less and less transparent, the idea is emerging that machines are incomprehensible primarily because there is nothing to understand—and there is nothing to understand because machines do not understand. Algorithms seem intelligent not because they can understand, but because they can predict. As Ilya Sutskever, chief scientist at OpenAI, explicitly states in reference to software for automated writing: “If a machine . . . could have enough data and computing power to perfectly predict . . . that would be the equivalent of understanding.”9

Prediction is the new horizon of research on artificial forms of intelligence, in a context that radically changes the terms of the question: when you work with algorithms, the issue is not explaining but predicting, not identifying causal relationships but finding correlations, not managing the uncertainty of the future but discovering its structures (patterns). Yet the world remains uncertain, the future remains open, and the use of algorithms must still be explained. It is here, in my opinion, that the issue of control and the challenge of algorithms arise today: how to manage the impact of their meaning-independent procedures in a global society in which meaning, contingency, and uncertainty are still precious resources.
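The contrast can be illustrated in the simplest possible terms (again a minimal sketch with invented data, not an account of any particular system): a least-squares fit extracts a pattern linking two observed series and extrapolates it, and at no point does the procedure ask why the series co-vary.

    # Minimal sketch (illustrative only): prediction from correlation alone.
    # A least-squares line is fitted to two co-varying series (invented data)
    # and used to extrapolate; the procedure never asks why they co-vary.
    import statistics

    xs = [1.0, 2.0, 3.0, 4.0, 5.0]  # some observed behavior
    ys = [2.1, 3.9, 6.2, 8.1, 9.8]  # a second series that happens to track it

    mean_x, mean_y = statistics.fmean(xs), statistics.fmean(ys)
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x

    # A structure (pattern) of the future, not an explanation of it.
    print(f"predicted y at x = 6: {intercept + slope * 6:.2f}")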

Bologna, February 2021
