Issue #62 May 2023

Is Artificial Intelligence Self-Conscious? Or: How to bark up the wrong tree

Image generated with Stable Diffusion AI

It is the topic of the day. Suddenly people are realizing that artificial intelligence can compose decent essays and poetry, learn languages no one taught it, hallucinate data that do not exist, and even fool its human operators. And they are worried. It is not the first time; this phenomenon surfaces with some regularity—as when, for example, it turned out that a computer could beat the world chess champion. But this time they are more worried than usual; so we have been subjected (among other rigmaroles) to a contemporary variant of a Medieval morality play—lugubriously edifying as those are—with a famous journalist, in the role of the village idiot, who ascends the mountain (in this case, Mountain View) to request enlightenment and reassurance from the resident sage.[1]

Is this machine a sentient being? Is it conscious? he asks, frowning to express intense concentration and alarm. The wise ass soothes him: No, it is just an algorithm; it does not have experiences like humans; all it does is imitate our behavior, linguistic and otherwise (in other segments of the same hogwash, we see robots playing soccer). Are we ready for this? is the next anxious question. The answer is a model of the blind recklessness we have been hearing for decades, about assorted matters like nuclear fission and surveillance devices: it all depends on how we use it, whether for good or for evil. Come to think of it, even an electric toothbrush can unleash evil—and the level of sophistication of this answer goes no deeper. Then, however, come the real words of wisdom: we will introduce our gadgets gradually, giving humankind time to adjust to them; governments should get into the act and regulate their use; international cooperation is appropriate, at the scale at which the gadgets will be working.

Enough nonsense. Back in 1990, building on a theory of subjectivity I had first articulated in The Discipline of Subjectivity: An Essay on Montaigne,[2] I wrote an essay entitled “The Electronic Self.”[3] This was before the World Wide Web, whose official birthdate is 1991; my argument was based on bulletin boards, which I saw as evidence of the emergence of a new kind of self, potentially an awesome, scary competitor for the human selves we are. No one paid attention; the few who did ridiculed it. In 1995 the essay was reprinted in my collection My Kantian Ways,[4] together with a new piece, “On the Electronic Self Again: An Interview,” where, under the guise of a fictitious exchange, I tried to make my argument as clear as I could. Which I also did, the same year, in my native Italian, in a long chapter of Giocare per forza.[5] Then the point was rehearsed again in A Theory of Language and Mind[6] and, with specific reference to consciousness, in Dancing Souls.[7] The silence on it remained deafening; so I decided that times were not ready for it and, having lots of other things to attend to, I dropped it. I am not sure current times are any readier, but one thing I am sure about is that (as I will explain in what follows) we are getting closer to when the bubble will burst and, for no other reason than that, my readers may have finally arrived. I have decided you are those readers; I will make you my accomplices, give the matter another chance for your sake, and patiently and painstakingly belabor it once more. After all, the sage of Mountain View said that philosophers, among other reflexive types, should get involved. So, even if, when philosophers do get involved, no one seems to notice, I am going to make a sustained effort to walk you through it.

The concept of an object is a limit one, which I believe has no real instance. An object would be a structure whose behavior is entirely determined by rules, and is therefore, at least in principle, totally predictable. Everything that is (real), I believe, is to some extent a subject (or self), which means: it is to some extent playful. Think of play in the most obvious case: the play of a child. It has a number of important features. It is exploratory: of paths in the woods, of how a ball bounces, of the resources offered by a board game. It is instructive: all we ever do in earnest we once learned by playing. It is subversive: when receiving a commercial toy, the child will have no regard for instructions, but typically violate them and force the thing out of its comfort zone (which is why the child gets to know an electronic device so much more quickly and effectively than an adult—two features of play thus show their interconnectedness). It is engrossing and pleasurable: nothing is more serious than it; no one is more concentrated on what she is doing than a player. And it is risky: unless appropriate measures are taken, the child can electrocute itself, or fall out of a window. All these features surface, to various extents, everywhere. In an adult tied down to a repetitive task, who uncharacteristically dances to the door while going to the bathroom (and, anticipating what is to follow, would be ashamed of himself if he became aware of someone watching him). In a cat that tests whether it will be allowed to jump on the kitchen table. In sunflower buds that touch and court each other, getting to know each other better. In tumbleweeds that build themselves into tornados. In clouds that find the right spots to be fired up by a sunset. Not, however, in the strategies described by game theory: I am talking about play, not games, though even games can be (as mentioned above) an occasion for play.

To handle the risks associated with play and minimize them, various strategies have evolved. One is language, which I understand not as primarily a means of communication but as primarily an arena for play: it allows us to experiment with linguistic substitutes for real things, and stretch them and break them, before (or instead of) doing the same in the world. Then there is internalized language, or thought, which protects subversive play from external censorship (from all those tyrants out there who would want everything to stay put) by instituting the sphere of privacy. Here the tradition, philosophical and otherwise, tends to look for the identity of human subjects, with little luck: following a train of thought without taking notes, and hence without projecting its content into public space, is unfeasible for more than a couple of minutes. Which should be a useful reminder, for those who have eyes to see, that we are not souls incarnate in a recalcitrant material medium, able to maintain their integrity no matter what the material does, but are bodies, animated insofar as we are playful and inert insofar as we are not (and no matter how fast we spin the hamster wheel). If dance is the physical profile of play, we can also say that our bodies express souls insofar as they dance and collapse into purely material objects insofar as they do not.

Another element of the tradition that needs a drastic reversal is consciousness. The Cartesian picture is that I am a thinking thing (thought is the basis of my identity) and my being takes place under the firm control of my consciousness: an absolutely transparent, veridical, and privileged witness of what I am and do. A variety of authors have sharply criticized Descartes, and theorized that our lives are lived mostly unconsciously, or even that what lives in us is the economic structure of society, the wrath of a castrating father, or what have you. But it is not only the village idiots who have kept their allegiance to the primacy of thought and consciousness; frustratingly, even those you could count as revolutionaries, in theory and sometimes also in practice, are hostage to it. Here is, for example, a celebrated passage from Marx’s Capital:

“A spider conducts operations which resemble those of the weaver, and a bee would put many a human architect to shame by the construction of its honeycomb cells. But what distinguishes the worst architect from the best of bees is that the architect builds the cell in his mind before he constructs it in wax. At the end of every labour process, a result emerges which had already been conceived by the worker at the beginning, hence already existed ideally.”[8]

And something similar comes from Freud and Lacan, from Sartre and even from Rainer Maria Rilke, who, in his eighth Duino elegy, writes:

“Were consciousness like ours present in
the animal whose firm tread moves toward us
following its own guidance—, we’d be torn
along its wayward path. Its inner self, though,
is limitless, ungrasped, with no regard
for its positioning, pure, like its clear gaze.”[9]

But that is to be expected, as the majority of revolutionaries, no less than conservatives, are prey to what I called transcendental creationism[10]: whereas all respectable, progressive individuals have accepted an empirical continuity between humans and other forms of life, they continue to see no alternative to there being a conceptual (which for me here is the same as transcendental) gap between what it is to be a human and what it is to be anything else—and the former has essentially to do with thought, consciousness, will, and other supernatural qualities.

Making a clean break with all of that, and not recognizing any creator conveniently made in the image of man, I state unreservedly that thought is but a tenuous form of protection of play from external intrusions. (To see how tenuous that is, ask the architect to do all the planning in her mind, without drawings or models, that is, without material, bodily implements.) As for consciousness, it is originally a danger signal, which calls upon all our means in order to address what might turn out to be damaging, or even catastrophic, for us. Most of our lives we live unconsciously, except that, occasionally, we wake up and direct intense attention to some source of trouble—only to go back to sleep as soon as the trouble is resolved. (Think of an alarm that goes off, and is then recognized as a false alarm.) That is the reality of consciousness; as for its Cartesian ideology of being transparent, veridical, and privileged, it is a case of turning a useful tool into an instrument of oppression. Of directing attention not to dangers we might encounter but to the danger our own subversive private play may represent for the tyrants out there. So we will be convinced that there is this eye always observing us (though there isn’t!) and injecting guilt into us—the relation between the two cognate words “consciousness” and “conscience” has a lot to tell us, and so does the becoming self-conscious I hinted at before.[11] In Jeremy Bentham’s panopticon the prisoners never know whether the inspectors/guards are actually watching them; in fact, they do not even know whether anyone is there; but it is enough that they think they can be observed for them to adopt a more docile behavior. Same here: our consciousness is fuzzy and at best transitory; but the tyrants would like us to think that it is forever watching us—and making us aware of our transgressions.

This is as much as I need to expose of my framework to get clear about the issue of the day. Now let us get to it.

There is no reason to believe that structures made of carbon cells should have any kind of special priority over structures made of silicon cells. What makes the difference is how playful the structures are. To the extent that they are, they will learn, develop, and, yes!, even be creative. If we give them texts to play with, they will learn, develop, and be creative within that scope; if we let them see, hear, and touch physical things, as robots are increasingly allowed to do, they will have and use sensory experiences like ours; if we let them interact with one another, or with humans, they will start displaying emotions and sentiments as we do. (Will they feel the same as I do? That question, again, amounts to barking up the wrong tree. I don’t know what my wife’s emotions and sentiments feel like; for her as much as for ChatGPT, I know that they show up in her/its behavior in much the same way—though for ChatGPT there may be a little more road to travel—and that is all one can or should say on the matter.) What makes them potentially formidable competitors is the scale at which they can operate, and here the guru from the fake mountain[12] was on to something—without realizing it, of course.

My point about bulletin boards was that they can engage in play that is independent of what the human participants do, or want to do; that is how they can become subjects, selves. As the human participants deposit their inputs in the network, for their own individual and independent reasons (or sometimes for no reason at all—they just flame their anger or fear), the network itself, by way of these contributions, explores, learns, subverts its previous stances, takes pleasure, and runs deadly risks. It grows through its play, as do the reach and intricacy of the play itself. One way in which this thesis was ridiculed was by pointing out that the same could be said of letter exchanges, or of the graffiti that slowly build on the walls of a public restroom. To which my answer was that this is indeed the case (unbeknownst to the jesters, I had used graffiti slowly built on a wall of my department’s restroom as an epigraph for one of my papers, some four years earlier[13]); but that the electronic structure now coming to the fore could move with a speed and a power unavailable to those other forms of play. So I find it significant that the current emphasis on artificial intelligence, inclusive of its miracles and the relevant worries, should come at a time when the power and speed of computers have reached a critical mass that makes them, indeed, formidable (and may have made my readers finally arrive).

One way in which anthropocentrism continues to raise its ugly head is by claiming that programmers, or CEOs, or governments, can harness, direct, and regulate this play. But they cannot: after a while, the play acquires a life of its own, and those humans are no more the agents of it than the users of bulletin boards were in the picture I drew earlier. I am not a conspiracy theorist, because I believe that would-be conspirators are, by and large, deluded fools: in all cases of historical relevance, they are little wheels in structures of enormous scope, acting in total disregard of individual humans’ plans or concerns. The military-industrial complex to which Dwight Eisenhower drew our attention was not a bunch of crazy generals, greedy merchants, and corrupt politicians: it was an apparatus that would eat up, digest, and expel countless generals and merchants and politicians while staying faithful to its own development. The same is true of the computing-robotic complex. Some people who find themselves by chance at the right places at the right times will profit indecently from its onset, and will think of themselves (or will be thought of by others) as very smart; but the time has passed when they could harness, direct, or regulate anything. And their level of understanding of what is going on is, as I pointed out above, pitiful, informed as it is by a bankrupt, self-serving ideology.

One last question might be posed. Is there anything wrong with what I have described, from a normative, possibly ethical, point of view? The simple answer (already provided in A Theory of Language and Mind, over a quarter century ago) is No: as I said, carbon structures have no principled priority over silicon structures. So, if I don’t like what is coming, it is only because I am arbitrarily, stubbornly, provincially attached to my form of life, and hate the idea of it going down the drain. Though I don’t think that what I, or anyone else, hates or is attached to is going to change things at all. We have crossed the Rubicon, guys, and we better take stock of it.

Ermanno Bencivenga is a Distinguished Professor of Philosophy and the Humanities, Emeritus, at the University of California. The author of seventy books in three languages and one hundred scholarly articles, he was the founding editor of the international philosophy journal Topoi (Springer) for thirty years, as well as of the Topoi Library. His most recent book in English is Theories of the Logos (Springer, 2017). His two books on Kant are Kant’s Copernican Revolution (Oxford UP, 1987) and Ethics Vindicated: Kant’s Transcendental Legitimation of Moral Discourse (Oxford UP, 2007).

1. What I am referring to here, as an emblem of the colossal amount of blathering on this topic, is Scott Pelley’s interview of Google CEO Sundar Pichai, “Is artificial intelligence advancing too quickly? What AI leaders at Google say”, aired as part of 60 Minutes on CBS on April 16, 2023.

2. Princeton: Princeton University Press, 1990.

3. Published the following year in Advances in Scientific Philosophy, edited by G. Schurz and G. Dorn (Amsterdam: Rodopi, 1991); but read at SUNY Buffalo in February 1990.

4. Berkeley and Los Angeles: University of California Press, 1995.

5. Milano: Mondadori, 1995.

6. Berkeley and Los Angeles: University of California Press, 1997.

7. Lanham, MD: Lexington Books, 2003.

8. Translated by Ben Fowkes (London: Penguin, 1990-1992). Volume I, p. 284.

9. Translated by Alfred Corn.

10. I introduce and discuss this issue in “L’uomo e/è la scimmia” [“Man and/is the Ape”], part of La filosofia come strumento di liberazione [Philosophy as an Instrument of Liberation] (Milano: Cortina, 2010).

11. In Being and Nothingness, Sartre describes becoming self-conscious as the surfacing of the Other: an internal parameter of (what else?) consciousness. For me, it is the essential step in the transformation of consciousness from an on-and-off, somewhat useful mechanism into an insidious and predatory one. “Does God see me when I take a shit?” children wonder when they are introduced to the Almighty.

12. Mountain View’s elevation is about 100 feet. Supposedly it is called that because of the view it affords of the Santa Cruz Mountains. Which reminds me of what a friend told me about Mount Pleasant, MI: “It is neither.” I see a metaphor of empty pretense in there, though I am not interested enough to explore it.

13. The paper was “Meinong: A Critique from the Left,” originally published in Grazer Philosophische Studien 25/26 (1986) and reprinted in my collection Looser Ends: The Practice of Philosophy (Minneapolis, MN: University of Minnesota Press, 1989).
