Issue #65 September 2023

Heidegger’s Bots: The Birth and Death of Responsible Artificial Intelligence

Solon H. Borglum, Untitled, (n.d.)

“There is no time machine except the human being.”

–Georgi Gospodinov, Time Shelter


It is 3 a.m. A middle-aged woman scrolls through WebMD, biting her lip as she recognizes symptoms that have plagued her for months. She feels thirsty and fatigued. Most nights, she wakes up in the middle of the night to use the bathroom. Her vision is blurry. After an hour of anxious scrolling, she resolves to see a doctor.

The next morning, a nurse ushers her into an examination room and inputs her symptoms into a computer model. The algorithm orders and analyzes some tests, then confirms the patient’s suspicions: she is suffering from type 1 diabetes. This hypothetical example of artificial intelligence in medicine seems promising at first glance. The process is efficient: doctors oversee fast, automated care and intervene only in the most difficult cases. The patient relies on medical professionals, and they turn to technology for the boring details.

Can we trust artificial intelligence with our lives? The growing prevalence of AI makes the question urgent. For now, doctors still make the calls that matter, and widespread medical treatment by bots remains theoretical. But algorithms already drive cars. They translate asylum petitions in immigration court and dispatch police to emergencies with little human oversight. Artificial intelligence has enticed us with the promise of lower costs and faster service with an almost-human touch, in health care and beyond. It also presents a clear danger. We have outsourced grave responsibilities to algorithms that are incapable of responsibility.

The philosopher John Haugeland explained why we should not trust software with life-and-death decisions: “Computers don’t give a damn.”1 They interpret our world as a meaningless collection of data points, not as a dynamic reality. They are always stuck in the present moment, with no sense of time. Worst of all, they cannot understand that all existence is finite. Things run out: energy, natural resources, human lives. Computers are blind to the time-bound, connected lives that we experience from birth to death. Even the most advanced models are dangerously indifferent to our world.

Human intelligence consists precisely in trying to get life-and-death matters right. Our knowledge is rooted in the awareness that we will die, that things end. Martin Heidegger, the thinker who most inspired Haugeland, warned that technology also deceives us. The magic of automation leads us to believe that the world’s resources are endless. Death can be indefinitely delayed, it seems. Nature cannot hold us back. We start to think like our models.

People still give a damn – for now. Can we make our AI models responsible in the same way? We can, if we strengthen AI’s capacity to heed finitude and time and to respond to a complex world. I will show that the work of Heidegger and Haugeland can guide the way toward models that are more responsible in this way. AI has evolved in response to our changing ideas about intelligence, spurred by philosophical critique and practical needs alike. With enough time, we might make computers give a damn.

 

What Does a Bot Know?

Humans and computers know about the world in fundamentally different ways. Our knowledge seeks out the boundaries imposed by the people and things around us, because they create the conditions for our existence. Understanding the world enables us to fulfill our possibilities in life. Unlike us, computers are disinterested. They have no possibilities to fulfill. Instead, they follow logical sequences of action that programmers have scripted for them. Our concern for the future compels us to get the world right, so we are intelligent. AI does not care one way or the other, so it is not.

In his essay “Truth and Finitude”, Haugeland describes how human self-understanding forces us to respond to the surrounding world with intelligence:

“My self-understanding is my ability-to-be who I am—the skillful know-how that enables me to project myself onto my own possibilities (as a teacher, for instance) and, in those terms, to live my life. But, if my self-understanding depends on my understanding of the being of other entities, then I must also be able to project those entities onto their possibilities. This ability, therefore, belongs essentially to my ability-to-be me. My ability to project those entities onto their possibilities is not merely another possibility onto which I project myself but is rather part of my ability to project myself onto my own possibilities at all. In other words, my self-understanding literally incorporates an understanding of the being of other entities.”2

When Haugeland refers to entities, understanding, projection, and possibilities, he is speaking Heidegger’s sometimes-cryptic language describing what it is like to be human in Being and Time, his unfinished major work. According to Heidegger, the world first appears to us as a web of inextricably connected entities – people, places, and things – that already have meaning when we come on the scene. We come to know them primarily through their possibilities in a concrete situation. Every entity is for something.

Heidegger begins his examination of human existence in Being and Time with a famous passage about a hammer, known as the ‘tool analysis’. A hammer is useful for hammering things. I know the hammer because I realize that it can be used this way. In the Haugeland-Heidegger parlance, I am “projecting it onto its possibilities”. If I pick it up, I gain a better understanding of it, and my knowledge deepens if I use the hammer to attach leather soles to a pair of shoes. If I then sell the shoes to someone who needs them, I am really making headway (philosophically speaking). I am connecting the hammer to the whole world that gives it meaning. Understood through these connections to future possibilities, the hammer becomes ready-to-hand in Heidegger’s terminology.3 If this seems simplistic, that is the point. Human knowledge depends first and foremost on the handiness of things and their everyday connections, not on fussy theoretical analysis. We can only make sense of things as part of a whole world.

Rational, scientific knowledge is inferior to ready-to-hand awareness of the world in Heidegger’s view. Computers ‘know’ things in this second-hand, rational way. An algorithm evaluates the hammer conceptually, through criteria that can be stored in a database. The hammer stands on its own, isolated from everything else in the world. It should have a handle and a solid head. It must have a certain distribution of mass. Heidegger calls this theoretical relation to entities present-at-hand.4 To place the hammer in its context using present-at-hand thinking, computers would need a vast array of conceptual information about nails, leather soles, supply chains, and consumer psychology. Even then, Heidegger contends, we cannot recreate the ready-to-hand grip we have on the world purely through rational knowledge.
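
To make the contrast concrete, here is a minimal sketch (in Python, with field names I have invented for illustration) of what a present-at-hand ‘hammer’ looks like to a machine: a record of isolated properties, with no trace of the nails, the soles, or the customer that give the tool its meaning.

```python
from dataclasses import dataclass

@dataclass
class PresentAtHandHammer:
    """A hammer reduced to database criteria: properties in isolation."""
    has_handle: bool
    head_material: str
    mass_kg: float
    center_of_mass_cm: float  # distance of the head's mass from the grip

def looks_like_a_hammer(obj: PresentAtHandHammer) -> bool:
    # The model can check criteria, but nothing here encodes what hammering
    # is *for* -- nails, leather soles, shoes, or the person who needs them.
    return obj.has_handle and obj.head_material in {"steel", "iron"} and obj.mass_kg > 0.2
```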

As humans, we constantly wrestle with how the world fits together. We project other entities, including people, onto their possibilities as our basic way of getting by. We try to figure out how things in the world connect together. If I am driving and see someone walking through an intersection that is typically empty, for instance, I pay closer attention. That pedestrian immediately becomes ready-to-hand for me. I know that they mean something in my world, and their possibilities immediately change my situation. I apply the brakes. In this way, the world places demands on me. Even if I kept my foot on the gas pedal, I would be keenly (and horribly) aware of the consequences that my action would bring down on myself and the pedestrian. Heidegger’s concept of the ready-to-hand reveals the phenomenology of being human. To exist means to inhabit a world that matters to us as a world.

Haugeland calls our capacity to understand and react to the significance of entities in the world “ontological responsibility”.5 Put more plainly: human intelligence is about getting things right. We harm our own possibilities when we get things wrong, when we fail to see salient connections between things. The burden of understanding the significance of the world follows us from birth. By contrast, ontological responsibility eludes the grasp of computers and their theoretical knowledge of the world. They can only identify present-at-hand objects in isolation. They do not care whether they get things right.

The chain of connections involved in ontological responsibility may seem endless. If hammers lead to nails, leather, footwear, and entire economies, where does it all stop? Heidegger’s answer is stark: “Death is the possibility of the impossibility” of existence.6 If we get things wrong, we die. The end of the chain is our existence. Yet we are thrown into a world that already has meaning at our birth. Our lives are filled with things to get right. As a consequence, we are not free to do whatever we want. Other entities in the world determine what we can conceivably be. Finitude defines our existence and our responsibility: objects break, resources run out, other people die. I will die.

If intelligence is grounded in ontological responsibility, as Haugeland argues, then algorithms cannot be intelligent. Computer models may ‘know’ (compute) medical data or an airplane’s available fuel, but they do not worry about finitude in the same way that we do. The source of their next hour of electricity is a matter of total indifference to them. Computers do not contemplate the possibility that someone will switch them off if they malfunction. Their way of being is not existential like ours and lacks any connection to death – ours or theirs. Technology invites us to imagine a world where scarcity and death no longer exist, a future where life is not a high-stakes game of getting things right. But the fact remains that our possibilities will end. Death and finitude condition all existence, and the irreversible march of time lies at the center of it all.

 

The Time of a Bot’s Life

No one ‘lives in the moment’ according to Heidegger’s reckoning. Intelligent beings stretch out in time, caught between birth and death: we live between. Death reveals one aspect of our ecstatic (stretched out) temporal form. We also understand our birth and history in light of what we can be. Time is another aspect of intelligence that distinguishes us from computer models – as well as animals, plants, and other forms of life. Understanding this crucial point from Being and Time will further illustrate why models are not yet responsible to us or our world.

A baby’s first, halting interactions with caretakers, pets, toys, and food sketch out a world of possibilities extending into her future. Human intelligence develops bit by bit, in large part through trial and error. Each failure and success adds detail to the picture, uncovering the connections among entities that form our world. History traces out each person’s unique trajectory from before birth to death. For Heidegger, our temporal way of being compels us to be responsible. We are intelligent only when we act in full awareness of the limits imposed by our history on our future.

Computer programs, by contrast, are always ‘born yesterday’. Advanced neural network models like ChatGPT are trained on many lives’ worth of information, consuming our history as if the internet were an enormous bildungsroman about humanity. They are not thrown into a world as we are, however, because they cannot see the meaningful connections in this history. In fact, they cannot effectively differentiate between recent information and a source text that is years out of date. ChatGPT takes in human meaning as raw data, which it chops up and recombines into plausible combinations of words, oblivious to their significance.

Models communicate primarily through conventional patterns and clichés. This makes them seem human, but the similarity to our form of intelligence is only superficial. We speak and think in these patterns, it is true. They help us respond to an interconnected world, embedded in a past, present, and future all our own.7 But computers have no world or history. The rinse-and-repeat, disposable existence of computers makes them sophisticated goldfish, as software engineer Allen Pike describes:

“[ChatGPT] will […] forget anything you try to teach it. It will forget that you live in Canada. It will forget that you have kids. It will forget that you hate booking things on Wednesdays and please stop suggesting Wednesdays for things, damnit. If neither of you has mentioned your name in a while, it’ll forget that too.

Talk to a ChatGPT-powered character for a little while, and you can start to feel like you are kind of bonding with it, getting somewhere really cool. Sometimes it gets a little confused, but that happens to people too. But eventually, the fact it has no medium-term memory becomes clear, and the illusion shatters.”

AI developers know that their models need longer memories to be effective. The software behind Pike’s “goldfish” conversations, released in 2020, preserves just 3,000 words of context before it starts to forget. The version released in 2023, GPT-4, stores about 24,000 words. These enhancements have improved the reasoning of large language models on tasks like translating written instructions into actions and answering multi-part questions.8 Greater stores of historical data will not endow computers with a world, but they chip away at the problem. The goldfish have become sophisticated parrots, and the parrots could evolve to exhibit thinking like ours.
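
A rough sketch of why a fixed context window produces Pike’s goldfish effect: once the budget is spent, the oldest turns of the conversation are simply dropped. Real chat models count tokens rather than words, and the function below is my own illustration, not any vendor’s code.

```python
def truncate_history(messages: list[str], budget_words: int = 3000) -> list[str]:
    """Keep only the most recent messages that fit in a fixed 'context window'.

    Once the budget is exceeded, the oldest turns -- your name, your kids,
    your hatred of Wednesday meetings -- silently fall out of memory.
    """
    kept, used = [], 0
    for message in reversed(messages):       # walk from newest to oldest
        words = len(message.split())
        if used + words > budget_words:
            break                            # everything older is forgotten
        kept.append(message)
        used += words
    return list(reversed(kept))              # restore chronological order
```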

Intelligence requires a deeper sense of time than goldfish, parrots, or chatbots display. Heidegger’s famous account of time seeks to uncover what he calls originary temporality. Everyday time, the sequence of one ‘now’ after another ticking by, derives from a more primordial form of temporality that cannot be gleaned from a clock. The derivative form of time that computers store in databases is a misleading, present-at-hand ‘thing’. When we think about time in this way, as a mental object, we might believe that what we experience can be reduced to mere minutes and seconds. Yet we know from the behavior of AI models that this purely analytical time is meaningless. It does not explain the urgency that we feel about the future, or the boredom and anxiety that we experience in the present. Heidegger’s analysis of originary temporality and clock time anticipates the frustrations that Pike describes from ‘talking’ to ChatGPT.9

Computers are stuck in a meaningless now. GPT-4’s longer context window represents only an incremental step toward real time-awareness for AI models. Heidegger’s analysis extends far beyond the sketch that I have offered here. Simplified, pragmatic appropriations of his ideas are still useful, however. His explorations of our intelligence have influenced the development of simulated intelligence for decades. Philosophical critique clarifies problems, little by little, that first seem intractable. An exploration of the history of AI will demonstrate that the same gradual approach could address the problem of time in AI.

Solon H. Borglum, Horse Study, (n.d.)

How Artificial Intelligence Became Heideggerian

Computer scientists have tried to make software capable of understanding our world since the beginnings of AI in the 1950s. Researchers initially thought that modeling human intelligence was conceptually simple. The proposal for a 1956 Dartmouth conference that inaugurated the field of artificial intelligence declared that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”10

‘Thinking’ machines displayed very little intelligence, as it happened. Early models ignored the messy interconnections that make objects ready-to-hand. Good Old-Fashioned AI, or GOFAI as Haugeland pejoratively termed it, worked on the assumption that the world could be broken down into present-at-hand components and fed into a computer. The idea was that human intelligence could be fabricated from software functions that processed simplified representations of our world. Developers would create a schematic of reality, reduced to a bare minimum of detail. The computer could then analyze simple objects in this environment, manipulate them, and the resulting intelligence would be impressively human-like – or so the researchers expected.

Early AI models were little more than toys, however. SHRDLU, a program created by MIT researcher Terry Winograd, illustrates just how simple GOFAI’s present-at-hand ‘worlds’ were. The program manipulated collections of colored blocks, connected only by spatial relationships. The computers (and programmers) of the day could not handle any more complexity than that.11

Figure 1. SHRDLU's "intelligence" in action.
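
To convey how spare such a ‘world’ was, here is a toy blocks world in the GOFAI spirit, written in Python. This is not Winograd’s program, only a sketch of the flavor of a purely present-at-hand representation: a handful of symbolic facts about blocks and their spatial relations, and nothing else.

```python
# The entire 'world' is a set of symbolic facts about blocks.
world = {
    ("block", "A"), ("block", "B"), ("block", "C"),
    ("color", "A", "red"), ("color", "B", "green"), ("color", "C", "blue"),
    ("on", "A", "B"),      # the red block sits on the green one
    ("on", "B", "table"),
    ("on", "C", "table"),
}

def clear(x: str) -> bool:
    """A block is 'clear' if nothing is stacked on top of it."""
    return not any(fact[0] == "on" and fact[2] == x for fact in world)

def move(x: str, destination: str) -> None:
    """Move a clear block onto a new support by rewriting the facts."""
    if not clear(x):
        raise ValueError(f"{x} has something on top of it")
    old_support = next(f for f in world if f[0] == "on" and f[1] == x)
    world.discard(old_support)
    world.add(("on", x, destination))

move("A", "table")   # roughly the whole extent of the program's 'intelligence'
```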

The philosopher Hubert Dreyfus, who was Haugeland’s teacher, inserted himself into this research program – as a gadfly and critic. Dreyfus questioned the basic assumptions of GOFAI, beginning with a report entitled “Alchemy and Artificial Intelligence”. A generation of computer scientists and philosophers built on his approach, which he outlined in books like What Computers Can’t Do. His opinions were not universally welcomed, especially in the early days at MIT. An antagonistic report from the late 1960s includes sections titled “What Is Dreyfus’ Problem?” and “Computers Can’t Play Chess…Nor Can Dreyfus”.12

The philosophical foundations of GOFAI were rotten, according to Dreyfus. AI researchers had uncritically borrowed their ideas about how the world worked from the tradition that Heidegger sought to overturn with the tool analysis from Being and Time. Computers could not mimic our thought processes by abstracting away the linkages between things that make the world significant. Rational, present-at-hand thinking about objects led to a blind alley.

Dreyfus’s critique remained an academic pursuit with few practical successes through the 1990s. In a retrospective article from 2007 titled “Why Heideggerian AI failed”, he admitted that it had yielded few exciting breakthroughs. Early Dreyfus-inspired models could emulate insect cognition and play games, but they fell far short of intelligence.13 Rodney Brooks, one adherent of this approach, co-founded the company behind the Roomba robotic vacuum, a practical (and lucrative) offshoot of the Heideggerian paradigm. These automated appliances were no more ontologically responsible than GOFAI, however. The key to unlocking the power of Dreyfus’s critique turned out to be mountains of information and advanced computers capable of processing them: the Big Data revolution of the 2000s.

Figure 2. One of Rodney Brooks’s “Heideggerian” insect robots. Photo by Kenneth Lu.

Algorithms can now manage millions of shallow connections among objects thanks to advances in storage and processing power. Modern AI better emulates our understanding of ready-to-hand objects as a result. Brian Cantwell Smith, a Canadian philosopher working in the same tradition of AI critique as Haugeland, describes this tectonic shift in his book, The Promise of AI.14 Models can successfully grapple with a fuzzy, interrelated world. The capacity to process significant connections among objects enables present-day models to better imitate our holistic thinking. They no longer have to abstract away the complexity that makes the world meaningful for humans.

Driverless cars provide a clear example of how holistic, ‘Heideggerian’ processing enables intelligent action. The GOFAI approach to driving would require preprogrammed rules to recognize every possible situation requiring a responsible decision: a stray deer in the road, fallen power lines, lane closures due to construction, not to mention unexpected pedestrians getting in the way. Today’s autonomous vehicles are not explicitly programmed for every such eventuality. They instead employ artificial neural networks loosely inspired by the brain. AI driving models react to connections in their environments, features of objects like edges, points, and sides that suggest a response is required. Just as I might swerve to avoid an orange blur without first thinking, “That is a traffic cone,” an autonomous car can avoid an obstacle without matching it to a present-at-hand object in a database.15 Big Data enables practical, Heideggerian AI.
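
The contrast can be caricatured in a few lines of code. The first function stands in for the GOFAI strategy, where every hazard must be matched to a labelled category with its own hand-written rule; the second stands in, very loosely, for a learned response to raw perceptual features. The feature names and weights are invented for illustration and correspond to no real driving system.

```python
# GOFAI style: every hazard must first be matched to a labelled,
# present-at-hand category, and every category needs its own rule.
RULES = {"deer": "brake", "traffic_cone": "swerve", "pedestrian": "brake"}

def gofai_react(detected_label: str) -> str:
    return RULES.get(detected_label, "ignore")   # unlisted hazards are invisible

# Learned style (very loosely): a trained function maps raw perceptual
# features -- edges, motion, proximity -- straight to a response, without
# ever naming the object. The weights here are made up for illustration.
def learned_react(edge_density: float, motion_toward_car: float, proximity: float) -> str:
    score = 0.4 * edge_density + 0.9 * motion_toward_car + 1.2 * proximity
    return "brake" if score > 1.0 else "continue"
```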

Artificial intelligence appears responsible in some cases, as when driverless cars brake for obstacles. It may be an illusion, but it is a useful one. Greater simulated responsibility keeps us safer. These recent improvements required substantial leaps in computer performance, theoretical critiques from philosophers (and computer scientists), and most importantly – generations of steady improvement. In other words, AI programs have evolved to become safer, because we chose and refined the technological approaches that best responded to our needs. Models are still more automatons than intelligent agents, but philosophical critique and selective improvement together have led to better AI. Continued evolution can help us get closer to ontologically responsible software, even if we cannot see a precise solution from where we stand today.

 

The Magician’s Wand of Artificial Selection

Will responsible AI models simply wake up with the capacity to understand the life-and-death stakes of their choices, rather than being programmed to do so? By adopting automation so broadly, we are wagering, rightly or wrongly, that they will become responsible – or else we will suffer the results. Optimism about automation is matched by growing fear as models invade medicine, aviation, finance, and other domains. We worry that we are incapable of designing intelligence using silicon chips and software code. We cannot shake the conviction that we are special somehow, the only beings on earth that can be ontologically responsible.

The evolution of human intelligence from bits of protein was no more likely, or straightforward, than the evolution of digital intelligence seems to us now, however. No scientific theory explains where our remarkable way of thinking comes from. We do know that competition from other species and the changing demands of the environment drove Homo habilis to make tools, Homo erectus to develop larger brains and better ones, and so on down to philosophers and biologists with keyboards and DNA sequencers. In the distant past, even more obscure processes made living organisms from chance combinations of proteins. How can we design artificial, intelligent life if we do not adequately understand the origins of life and human intelligence?

We have a powerful tool at our disposal to create better models: artificial selection. In On the Origin of Species, Charles Darwin called artificial selection “the magician’s magic wand, by means of which he may summon into life whatever form and mould he pleases.”16 His case for natural selection rested in large part on what he saw before his eyes in the English countryside. Breeders constantly created new attributes, even species, with a keen eye and patient husbandry. We now know that Darwin’s analogy between the natural world and artificial breeding extends well beyond biology. Selection, both natural and artificial, creates lasting change throughout our world. The fundamental theorem used by biologists to describe how natural selection works, the Price equation, applies equally to economics, physics, and other fields.17 Organisms are far from the only things that evolve.
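
For readers who want the formula itself, the Price equation splits the change in a population’s mean trait value between generations into a selection term and a transmission term; nothing in the algebra is specific to biology, which is why Luque and others can apply it so widely:

```latex
% Price equation: w_i is the fitness of individual (or lineage) i,
% \bar{w} the mean fitness, z_i the trait value, and \Delta z_i the
% change in the trait within lineage i between generations.
\Delta \bar{z}
  = \underbrace{\frac{\operatorname{Cov}(w_i, z_i)}{\bar{w}}}_{\text{selection}}
  + \underbrace{\frac{\operatorname{E}(w_i \, \Delta z_i)}{\bar{w}}}_{\text{transmission}}
```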

Complex software evolves, too. It is not simply designed. When engineers reuse and modify successful code, software displays patterns like biological evolution, over much briefer periods of time. One study has demonstrated that the software powering Android phones and other technologies has reached the complexity of bacterial genomes (in two decades, while nature took millions of years).18 In AI, driverless cars and GPT are the descendants of earlier programs authored by developers with varying approaches, agendas, and inspirations – but no common design. No single developer created them from a blank sheet of paper. The best models, defined by their usefulness or interest to human users, persist and form the basis of further generations of software. What would we need to coax artificial intelligence from artificial selection?
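
As a minimal sketch of what that might look like, consider one generation of artificial selection over candidate models. The fitness function and the mutation step here are placeholders of my own; in practice they would encode exactly the human judgments about responsibility that the rest of this essay describes.

```python
import random

def one_generation(candidates, fitness, mutate, survivors=4, offspring=4):
    """One round of artificial selection over candidate models (a sketch).

    `candidates` is any list of models; `fitness` scores a model against the
    breeder's criteria (holistic coping, respect for finitude, memory);
    `mutate` returns a varied copy of a model. Both are supplied by the
    human 'breeder' -- they are the keen eye Darwin describes.
    """
    ranked = sorted(candidates, key=fitness, reverse=True)
    kept = ranked[:survivors]                               # selection
    variants = [mutate(random.choice(kept)) for _ in range(offspring)]
    return kept + variants                                  # the next generation
```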

Developers will need a keen eye to select the features that could make AI responsible. Darwin himself insisted that breeding was a rare talent: “Few would readily believe in the natural capacity and years of practice requisite to become even a skilful pigeon-fancier.”19 Fortunately, software developers have already proven equal to the challenge of artificial selection. Our models leapt from simple lab experiments to driving cars and (soon) diagnosing diseases in the span of sixty-odd years since the Dartmouth Artificial Intelligence Conference. Lab experiments like SHRDLU gave way to technology that at least pretends to give a damn about complex threats to our safety. Philosophy has aided computer scientists in these advances, which have resulted in software that is better aligned with our needs, if not yet ontologically responsible.

Intelligence and responsibility cannot be designed into AI models. No grand plan of nature made our own remarkable species inevitable, either. We do not yet understand our complex faculties of thought, which emerged over a vast timescale through natural selection and other biological processes. We cannot reverse-engineer the results of human evolution and transplant them into software. Computational intelligence must be allowed to emerge through a process of artificial selection, informed by our best understanding of ontological responsibility. I have suggested a few components of intelligence that we could select for in the future: coping with interconnected complexity, recognizing that things run out and people die, and connecting future possibilities to their historical conditions.

Left to our own devices – figurative and literal – we seem intent on outsourcing life-and-death decisions to unaccountable models. We should try to make them more responsible, or we will have to live with the actions of computers that don’t give a damn. Our own responsibility to the world emerged through evolution. This time-tested path could help us select models equipped with the tools needed for artificial intelligence: holistic thinking, understanding finitude, and memory. With time, we may succeed.

Chris Tessone is an M.A. student in continental philosophy at Staffordshire University. His daughters think all philosophers have ‘H’ names, because he reads mostly Heidegger, Hegel, and Henri Bergson.

Works Cited

Darwin, Charles. On the Origin of Species. 1st ed. London: John Murray, 1859. http://archive.org/details/onoriginspeciesf00darw.

Dreyfus, Hubert L. “Alchemy and Artificial Intelligence.” RAND Corporation, January 1, 1965. https://www.rand.org/pubs/papers/P3244.html.

Dreyfus, Hubert L. What Computers Can’t Do: A Critique of Artificial Reason. New York: Harper & Row, 1978.

Dreyfus, Hubert L. “Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian.” Artificial Intelligence, Special Review Issue, 171, no. 18 (December 2007): 1137–60. https://doi.org/10.1016/j.artint.2007.10.012.

Durt, Christoph, Tom Froese, and Thomas Fuchs. “Against AI Understanding and Sentience: Large Language Models, Meaning, and the Patterns of Human Language Use.” Preprint, March 2023. http://philsci-archive.pitt.edu/21983/.

Haugeland, John. “Truth and Finitude.” In Dasein Disclosed: John Haugeland’s Heidegger, edited by Joseph Rouse, 187–220. Cambridge, MA and London: Harvard University Press, 2013.

Haugeland, John. “Understanding Natural Language.” In Having Thought: Essays in the Metaphysics of Mind, 47–61. Cambridge, MA and London: Harvard University Press, 1998.

Heidegger, Martin. Being and Time. Translated by John Macquarrie and Edward Robinson. Oxford and Cambridge, MA: Blackwell, 1962.

Luque, Victor J. “One Equation to Rule Them All: A Philosophical Analysis of the Price Equation.” Biology & Philosophy 32, no. 1 (2017): 97–125. https://doi.org/10.1007/s10539-016-9538-y.

Moor, James. “The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years.” AI Magazine 27, no. 4 (2006). https://ojs.aaai.org/aimagazine/index.php/aimagazine/article/view/2064.

Ondruš, Ján, Eduard Kolla, Peter Vertaľ, and Željko Šarić. “How Do Autonomous Cars Work?” Transportation Research Procedia, LOGI 2019 – Horizons of Autonomous Mobility in Europe, 44 (January 1, 2020): 226–33. https://doi.org/10.1016/j.trpro.2020.02.049.

Pang, Tin Yau, and Sergei Maslov. “Universal Distribution of Component Frequencies in Biological and Technological Systems.” Proceedings of the National Academy of Sciences 110, no. 15 (April 9, 2013): 6235–39. https://doi.org/10.1073/pnas.1217795110.

Papert, Seymour. “The Artificial Intelligence of Hubert L. Dreyfus: A Budget of Fallacies.” Massachusetts Institute of Technology – Project MAC, January 1968. https://dspace.mit.edu/bitstream/handle/1721.1/6084/AIM154.pdf?sequence=2&isAllowed=y.

Smith, Brian Cantwell. The Promise of Artificial Intelligence: Reckoning and Judgment. Cambridge, MA and London: The MIT Press, 2019.

Wei, Jason, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.” In NeurIPS, 2022. https://openreview.net/forum?id=_VjQlMeSB_J.

Winograd, Terry. “Procedures as a Representation for Data in a Computer Program for Understanding Natural Language.” Massachusetts Institute of Technology – Project MAC, n.d. https://hci.stanford.edu/~winograd/shrdlu/AITR-235.pdf.

Notes

1. Haugeland, “Understanding Natural Language,” 47.

2. Haugeland, “Truth and Finitude,” 203.

3. Heidegger, Being and Time, 98.

4. Heidegger, Being and Time, 99ff.

5. Haugeland, “Truth and Finitude,” 200–201.

6. Heidegger, Being and Time, 294.

7. In Being and Time, Heidegger calls these patterned ways of speaking and thinking ‘idle talk’. See Durt et al., “Against AI Understanding and Sentience,” for an engaging discussion of the resemblance between AI model ‘language’ and conventional human speech. Their paper draws on Heidegger and cognitive science to make the case that large language models are not sentient.

8. Wei et al., “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.”

9. Heidegger, Being and Time, 426–27.

10. Moor, “The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years,” 87.

11. Winograd, “Procedures as a Representation for Data in a Computer Program for Understanding Natural Language,” 45.

12. Papert, “The Artificial Intelligence of Hubert L. Dreyfus,” 1.

13. Dreyfus, “Why Heideggerian AI Failed,” 1140–46.

14. See Smith, The Promise of Artificial Intelligence, 49 (sidebar).

15. Ondruš et al., “How Do Autonomous Cars Work?” 229–32.

16. Darwin, On the Origin of Species, 31. Darwin was quoting the English naturalist William Youatt.

17. Luque, “One Equation to Rule Them All,” 107.

18. Pang and Maslov, “Universal Distribution of Component Frequencies in Biological and Technological Systems.”

19. Darwin, On the Origin of Species, 32.
