Issue #76 October 2024

Is it Morally Permissible to Create AI Androids Merely to Serve us?

Barbara Kasten, Architectural Site 17, 1988

The television show Humans portrays the lives of humans who create, own, and use AI-enhanced androids called ‘synths’ as items of property merely to serve human needs, such as housekeeping, shopping, transportation, and healthcare. Some synths lack phenomenal consciousness. Like the philosopher’s zombie, these humanoids have no experiential features of awareness. They are surprisingly humanlike on the outside, but inside, nobody is home. Yet other synths are conscious and self-aware, demonstrating properties we associate with personhood, such as rationality, autonomy, moral agency, and moral patienthood.1 Apparently, these androids are strong AI machines nearly indistinguishable from humans.2 They don’t exhibit merely functional consciousness but possess qualia (i.e., subjective, qualitative experiences). Let us call these droids synth-persons to distinguish them from the zombie-like synths.3

I grant up front that I doubt synth-persons are metaphysically possible: it seems unlikely, if not inconceivable, that AI systems could have qualia. Let us postpone this concern and assume arguendo that synth-persons are metaphysically possible4 and that humans can create synth-persons, thereby introducing them into the actual world. Suppose, moreover, that synth-persons soon become actual. A crucial moral question arises: Would it be morally permissible to create or otherwise use synth-persons merely to serve us?

 

Argument

I will argue that creating or using synth-persons merely to serve human needs is morally impermissible. To start, consider the following passage from Baier, substituting ‘person’ for ‘human being’ and ‘man’:

“To attribute to a human being a purpose in that sense is not neutral, let alone complimentary: it is offensive. It is degrading for a man to be regarded as merely serving a purpose … Such questions reduce him to the level of a gadget … I imply that we allot to him the tasks, the goals, the aims which he is to pursue; that his wishes and desires and aspirations and purposes are to count for little or nothing. We are treating him, in Kant’s phrase, merely as a means to our ends, not as an end in himself.”5

Baier articulated this objection against the thesis that God created human beings for a purpose. I doubt this objection succeeds as he intended, yet his point is insightful and applicable to our purposes, given that we are thinking about the godlike power of creating synth-persons.6 Consider Kant’s humanity formulation of the Categorical Imperative (CI), modified to include non-human persons. Every rational being (i.e., person) is an objective end in itself and not merely a means; this fact indicates the following imperative: act in such a way that you always treat humanity (more broadly, persons), whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.7 We can use this formulation to construct an argument against the moral permissibility of creating/using synth-persons as mere means to serve our purposes.

The argument goes as follows. If synth-persons are persons, they have the categorical (i.e., absolute) moral right not to be used as mere means. But given the assumption of this essay, synth-persons are persons; they have self-awareness, rationality, autonomy, and moral agency. Hence, synth-persons have the categorical right not to be used as mere means. However, creating synth-persons only to serve our ends would be a matter of using them as mere means. Thus, creating synth-persons only to serve our ends would violate their categorical rights as persons. But it is absolutely wrong to violate such a right. Therefore, creating synth-persons as mere means to our ends would be absolutely wrong. We should not create a class of robotic persons merely to serve us.

Is there support for this argument from another moral perspective? Consider an argument from the tradition of virtue ethics that reinforces the Kantian case. This argument hinges not on rights and duties but on virtues. A virtuous person possesses and regularly exercises the moral and intellectual virtues called for in a given situation. A virtuous person does not exploit anyone but recognizes the inherent value of personhood and treats persons with commensurate respect. Yet creating or using a synth-person as a mere means exploits that person. We ought to be virtuous persons. Thus, we should not exploit any person. So, we should not create or use a synth-person as a mere means, since doing so would be a matter of exploitation.

 

Objection and Replies

How might other moral perspectives apply? Note an objection from utilitarianism. In some cases, it might be morally acceptable to create synth-persons to serve humans if doing so generates the greatest utility or benefit for the greatest number of people.8 Since, according to utilitarianism, any action is morally acceptable so long as it generates such a result, creating synth-persons would be justified, and perhaps even obligatory, assuming we have the means to do so.

For instance, suppose that we have the capacity to produce one thousand synth-persons as servants for our uses, as mere means to our ends, and that doing so would generate great benefits for billions of human persons. The results of this creative project would free us from mindless chores, demeaning labor, etc., and enable us to engage in higher forms of human living, that is, to escape the culture of total work and flourish in a state of leisure, as Pieper (2009) used those terms. We would thus be liberated to practice science, mathematics, philosophy, and the arts, develop new and useful technologies, seek to improve our characters and moral, legal, religious, and educational institutions, and, after five millennia of trying, finally realize the best that civilized human life can offer. Starting with agriculture and cuneiform, we would end with a computer-enabled commonwealth of human flourishing. Utilitarian reasoning suggests that, in such a case, we ought to create the one thousand androids and exploit them as mere means to our ends. In short, we should create the one thousand (droids) to fulfill the five thousand (years).

Wouldn’t such a project violate the rights of the synth-persons? Here, the utilitarian can respond in two ways. First, following Bentham, the utilitarian can deny that moral rights exist.9 Second, the utilitarian can say that although pro tanto moral rights exist, they are not categorical. Thus, it is morally acceptable to override such rights in morally relevant situations, such as the case described in the previous paragraph.

The Kantian might reply by citing common objections to utilitarianism, including the predictability problem (PP) and the harm problem (HP). According to the PP, we cannot reliably predict every short-term and long-term consequence of our actions, and hence we cannot reliably choose to act in ways that optimize utility. Utilitarianism demands more than we can accomplish. Ought implies can (i.e., if we ought to do x, x must be doable for us). Since we cannot know and cannot adequately predict the consequences of our actions as utilitarianism requires, we cannot do what utilitarianism demands. Therefore, it is not the case that we ought to do so. According to the HP, suppose that harming (e.g., violating the rights of, doing injustice to) one person or a minority of persons is necessary to bring about benefits for the majority. In such cases, utilitarianism requires that we do so. However, our deeply held moral intuitions often conflict with this utilitarian injunction. One such intuition is this: we ought not intentionally violate a person’s rights, regardless of how many people would benefit from the violation. So, the objection goes, there must be a problem with utilitarianism.

The HP raises a Kantian rejoinder to utilitarianism. According to the utilitarian thesis (UT), any action can be made right by its consequences if they maximize utility. In normative terms, on UT, we ought to do whatever maximizes utility, including using individuals as mere instruments for that end. For the Kantian, however, such use is exploitation: treating persons as mere means to generate results that maximize utility. Hence, the Kantian might contend that although utilitarianism aims at human welfare in the aggregate, it permits degrading human beings as individuals. In short, utilitarianism permits the exploitation of persons, which violates the CI.

To elaborate, the Kantian might ask us to imagine that, at some point, the number of synth-persons is greater than that of humans, and the synth-persons decide for utilitarian reasons to use the humans as mere means to synth-person ends. Would the utilitarian support this turning of the tables? Would there not be widespread human support for something like the humanity formulation of the CI?

 

Summary So Far

I have provided a Kantian argument that it would be morally wrong to create synth-persons merely to serve human purposes. Yet a familiar and longstanding type of conflict has emerged between Kantian/non-consequentialist and utilitarian/consequentialist intuitions. One might hope this clash is settled before we develop the capacity to create synth-persons, assuming such development is possible.10 For now, the problem is worth thinking about. Perhaps we are morally obligated to do so. And perhaps the exigency of the moral requirement to consider this problem is commensurate with the speed, intensity, and available resources with which tech companies seek to build AGI and ASI systems. Given AI’s rapid development and widespread uses, we seem to be at a technological flash point with serious moral implications, akin to the first wave of the Industrial Revolution. In any case, history suggests that when it comes to using persons as mere servants, our considered judgments eventually come down on the side of Kantian rather than consequentialist intuitions. Time will tell if the same judgment applies to the question of synth-persons.

Barbara Kasten, "Progression 14", (2019)

Are synth-persons possible?

I referred supra to the question of whether synth-persons are metaphysically possible. That debate remains open. There are arguments for holding that they are impossible. Searle’s Chinese Room argument suggests that strong AI is impossible: no AI system, regardless of its sophistication, can be conscious in a literal sense.11 Arguments about absent qualia and inverted qualia suggest the same conclusion.12 And yet, for all we know, perhaps synth-persons are possible. One reason to support this claim is that conceivability is defeasible evidence for possibility, and it seems we can conceive of synth-persons. The idea of a synth-person does not seem obviously contradictory. After all, we tell stories about them, such as Humans and Blade Runner. However, conceivability is not demonstrative proof. Our conceptions can go wrong. Perhaps synth-persons are strictly impossible and inconceivable. When we tell stories about them, we don’t conceive of synth-persons qua persons but imagine something sufficiently similar to enable good storytelling. As Descartes noted long ago, there is a difference between imagining and conceiving; the capacity to do one does not entail the capacity to do the other. I leave this debate for another time.

Instead, I will argue abductively that the AI-enhanced androids on the horizon are not persons, even if such a thing is metaphysically possible. Let us adopt Boethius’ ontological conception of personhood: a person is an individual substance of a rational nature.

Abductive reasoning involves using relevant and available information to argue to the best explanation for some phenomenon that calls for explanation. One considers the contenders for explanation of the phenomenon at hand and ranks them based on several factors. Here is a rudimentary example. Suppose you wake up one morning and head to the kitchen. You find spilled milk on the floor. The situation needs explaining. How did it happen? The floor was clean before you went to sleep the night before. Milk doesn’t spill inexplicably. Perhaps we shouldn’t cry over it, but we may still seek its cause. You were asleep, and you’re not prone to somnambulation. You weren’t the spiller. Was it your eldest son? Perhaps your youngest son, or your daughter? Was it a raccoon or some other critter rummaging through your refrigerator? Was there an earthquake that shook the carton out of the refrigerator and onto the floor, removing the top and releasing the liquid? Your eldest son is off at college one hundred miles away. It is unlikely to have been him. Your daughter doesn’t consume dairy products. She probably didn’t do it. It’s improbable that a wild animal invaded your kitchen, opened the refrigerator door, and spilled the milk. You felt no earthquake, nor was one reported in the news; the earthquake explanation is implausible. Your youngest son made a midnight visit to the kitchen; he likes milk and cookies; sometimes his snacking leaves a mess behind. He probably did the spilling. That is the best explanation among the ones proffered.

Now, in the case under examination, the phenomenon to be explained is androids enhanced by artificial intelligence behaving as (Boethian) persons do, seeming to demonstrate traits such as consciousness, rational agency, and the like. Our question is this: what is the best explanation for this datum?

Let us take two explanatory options: (a) the android is a person; (b) the android is not a person but merely a complex machine that functions as if it were a person. We will evaluate these options based on factors such as simplicity, explanatory scope or comprehensiveness, probability, plausibility, and the absence of ad hoc thinking. A simple explanation posits fewer unnecessary features than its competitors; simpler explanations work more efficiently, avoiding, as far as possible, aspects not required for the job. Regarding scope, the better option accounts for a greater number of relevant observations; that is, it is more comprehensive in what it explains. For probability, the phenomenon to be explained is more likely given the selected option than it would be on the competing options. For plausibility, the option is supported by a larger number of pertinent reasonable beliefs than its competitors. Concerning ad hoc reasoning, the option must involve fewer novel presumptions (i.e., claims constructed and assumed merely for the sake of buttressing the option and thus not supported by independent reasons, or only luckily supported) than its competitors.

Consider the question: what kind of entity is an android enhanced by artificial intelligence that appears to act as persons do? Our competing explanations are (a) and (b). How do they fare if put to the abductive test?

With respect to simplicity, to hold (a) that the android is a person, one arguably must also hold that the android is conscious, self-conscious, a possessor of subjective experiences, and a rational agent with moral understanding, responsibility, and the capacity for choice. However, an android is a machine composed of silicon, plastic, metal, and the like. To assert that the android is a person raises complicated questions about how an aggregate of silicon, etc. can possess the features of personhood noted above. Option (a) is not the simpler explanation, since (b) (the android is not a person but merely a complex machine) functions without such assumptions and thus does not raise those complicated questions.

For example, suppose an AI-enhanced android moves, say, by stepping out of the way of an oncoming car. One might claim that the best explanation for this event is that the android believes (i.e., has a mental state of belief) that (i) if the car were to strike it, it would be damaged, (ii) it ought to avoid such damage, and (iii) choosing to step out of the way is a feasible method of avoidance. One might then say that the android acts on that triad of beliefs by stepping out of the way.

A simpler explanation is that the human engineer who designed the android believes (i)-(iii) and, based on those beliefs, designed the robot to move in such situations. This explanation is simpler because it posits only the uncontroversial proposition that the engineer has beliefs and acts on them, in this case by designing the android appropriately. It avoids the controversial position that the android, in addition to the engineer, also believes the triad (in both the dispositional and occurrent senses), acts on the basis of those beliefs, and thus has mental states upon which it can act as a rational agent.

Concerning scope, (a) explains why the android appears to do what persons do, such as communicate in language, solve problems, perform work, and act based on reasonable decisions. After all, persons do such things; so, on the assumption that androids are persons, these factors are aptly explicated. Hence, (a) performs well under the scope factor. However, (b) in principle can explain the same factors, as long as we have the relevant understanding of computer science, robotics, and the like.13 Thus, arguably, (a) and (b) are on roughly equal footing here.

Regarding probability, the phenomenon to be explained is likely if we assume that the android is a person. Nevertheless, it seems the phenomenon is (approximately) equally likely if we assume (b). And for plausibility, arguably, (a) is supported by or consistent with fewer reasonable beliefs than (b). For example, it is reasonable to believe that persons are self-aware, morally responsible, rationally competent beings with the ability to make free choices, construct life plans, and act on practical, moral, and axiological principles that they understand. It is also reasonable to believe that aggregates of silicon and plastic do not have such properties, and that computer engineers can make humanoid robots out of such parts. These reasonable beliefs support (b) more effectively than (a).

Lastly, consider ad hoc thinking. If (a) is the account, one would need to explain how a computerized humanoid has the features of personhood. This is a complicated explicative task. One might accomplish it by introducing arguably ad hoc factors into the explanation, such as that personhood does not require subjective experience, or that to be a person is just to perform the external actions that persons perform, or that personhood is a supervenient property which emerges if specific materials are present and arranged in appropriate though mysterious ways. Option (b) does not require such assumptions. On (b), we can explain the computerized humanoid by appealing to the purposes and designs of the engineers who created the robots.

Given our abductive examination, it seems that option (b) is best. Of our five criteria – simplicity, scope, probability, plausibility, and the avoidance of ad hoc thinking – (b) wins on simplicity, plausibility, and the avoidance of ad hoc thinking. On scope and probability, (a) and (b) are approximately equivalent. The honor goes to (b).

 

Legal Ramifications

AI persons might not be possible, but AI androids are on the scene. What sort of laws should be enacted to manage the ramifications of AI androids? For example, should we enact laws prohibiting their development? We lack certainty about whether AI androids are persons, and we lack certainty about which moral theory correctly explains our obligations. Given the moral significance of mistreating persons, perhaps we should either refrain from creating such androids unless we can obtain practical assurance that they are not persons, or, if we create them under conditions of epistemic uncertainty, grant them legal personhood, which would give them legal rights and responsibilities, protecting them from exploitation and holding them legally responsible for wrongdoing.

Suppose we create the androids without granting them legal rights, but they are in fact persons possessing moral rights; in this case, we risk mistreating them, which would be a serious wrong. If we create them without granting them legal rights, and they lack moral rights, then we would not mistreat them. But how would we know that they lack moral rights? Now, if we create the robots and grant them legal rights, and yet as non-persons they lack corresponding moral rights, we risk treating them incorrectly, thinking of them as if they were persons, perhaps at the expense of humans, whom we might then treat in morally impermissible ways. What if we create the droids and grant them legal rights, and they have moral rights? In this case, we would treat them rightly as persons, even if we are uncertain that they are persons. No option escapes problems. Yet it seems that, if we were to create the androids, a virtuous caution would require that we grant them legal rights in a way that does not mistreat humans.

 

Conceptual Disruption

AI technology is new, and new technology can disrupt our use of concepts, thus introducing confusion into culture and its standards of communication. According to Marchiori and Scharp (2024, Introduction), in the newly emerging literature on conceptual disruption, a conceptual disruption is a challenge or obstacle to the normal course of ideational activity. The literature indicates three kinds of such disruption: gaps, overlaps, and misalignments. A conceptual gap occurs when no existing concept can be applied to a novel phenomenon. A conceptual overlap occurs when multiple incompatible concepts apply to the same item. A conceptual misalignment occurs when our conceptual activity is not aligned with our norms and values.

The creation of AI androids likely would cause conceptual disruption, thereby changing our culture and language in various ways, some of which might be undesirable. Consider conceptual gaps and overlaps. The creation of AI androids arguably involves the generation of a new kind of entity in human history. No clear and precise concept, with corresponding terms, exists to refer to such entities; this produces a conceptual gap and motivates what I will call a reverse conceptual overlap, which occurs when the same concept is applied to multiple distinct kinds of entities. In this case, since there is no pre-existing concept and term for AI androids, people might be inclined to think of such entities as persons and refer to them as ‘persons,’ hence generating a reverse conceptual overlap insofar as the same concept and term, namely ‘person’, is used both for persons (traditionally understood) and for non-personal androids. Such a situation could lead to confusions in the realms of culture, language, thought, morality, and law. This is a rich topic in the philosophy of technology that is open for exploration, particularly concerning the areas of conceptual analysis and conceptual engineering.

 

Conclusion

Is it morally permissible to create AI androids merely to serve human purposes? The short answer is that it depends on whether such androids are persons. If they are, Kantian considerations advise against their creation, though utilitarian intuitions might support it. I have provided reasons for suspecting that AI persons are not metaphysically possible. Yet even if they are possible, there is an abductive argument against the claim that the AI androids built in the future would be persons. Given the uncertainties of this topic, wisdom counsels us to proceed carefully, both in our technological production and our philosophical investigation. To that end, I welcome critical responses to this essay, since my contribution only scratches the surface of what might be said.

Elliott R. Crozat holds a full-time faculty position in Philosophy and the Humanities at Purdue University Global. His areas of emphasis are ethics, epistemology, and philosophy of religion.

Works Cited

Kurt Baier. 1981. “The Meaning of Life.” In E.D. Klemke (ed.), The Meaning of Life. New York: Oxford University Press.

Jeremy Bentham. “Anarchical Fallacies.” In The Works of Jeremy Bentham. Volume 2. Available at: https://oll.libertyfund.org/titles/bowring-the-works-of-jeremy-bentham-vol-2

Larry Hauser. Chinese Room Argument. Internet Encyclopedia of Philosophy. Available at: https://iep.utm.edu/chinese-room-argument

Immanuel Kant. 1964. Groundwork for the Metaphysics of Morals. H.J. Paton (tr.). New York: Harper and Row.

Amy Kind. Qualia. Internet Encyclopedia of Philosophy. Available at: https://iep.utm.edu/qualia/

Samuela Marchiori and Kevin Scharp. 2024. “What is conceptual disruption?” Ethics and Information Technology. Volume 26, article number 18.

Thaddeus Metz. 2013. “Could God’s Purpose Be the Source of Life’s Meaning?” In Seachris (ed.), Exploring the Meaning of Life: An Anthology and Guide. Oxford: Wiley-Blackwell.

Derek Parfit. 2011. On What Matters. Volume One. Oxford: Oxford University Press.

Josef Pieper. 2009. Leisure: The Basis of Culture. San Francisco: Ignatius Press.

Notes

1

We can distinguish between metaphysical and moral conceptions of ‘person.’ The early medieval Roman philosopher Boethius provided a metaphysical definition: a person is an individual substance of a rational nature, i.e., roughly, a concrete particular entity that possesses rationality as an essential property. We might think of the moral conception in terms of a being that has moral duties and is owed them, or at least a being that is owed such duties.

2

Philosophers distinguish between two theses regarding whether an AI machine can think, understand, and have self-awareness and other mental states: weak AI and strong AI. Roughly, the former holds that AI machines can function as if they were intelligent, understanding, aware, etc., but are not actually such; the latter holds that AI machines can be actually intelligent, etc., even if they are not yet such.

3

An AI-enhanced computer or vehicle with such features could also be a synth-person even though it lacks a humanlike mechanical body.

4

To use the modal language of possible worlds, there is some possible world in which a synth-person exists.

5

See Metz (2013), p. 205. Metz quotes Baier, p. 104.

6

Metz (2013, p. 206) offers a response to Baier that, I think, successfully neutralizes Baier’s objection. In short, a divine “purpose” for human life might be understood as a divine request for humans freely to choose to be morally good persons rather than a divine assignment or order forced upon us as if we were mere tools in a divine workshop.

7

See Kant (1964), pp. 95-96.

8

Utility/benefit might be understood in terms of (i) maximizing the combination of pleasure and the absence of pain for as many as possible or (ii) maximizing desire-satisfaction for as many as possible. The former conception might be called hedonistic utilitarianism, and the latter non-hedonistic utilitarianism. Both emphasize the generation of optimific consequences. Each defines ‘optimific’ differently.

9

See the well-known passage in Anarchical Fallacies in which Bentham denies the existence of natural rights.

10

Parfit (2011) has attempted this task. He argues that Kantian and utilitarian ethics are consistent if we modify the universalizability formulation of the CI: we ought to follow moral principles that are universally acceptable in the sense that everyone could rationally will them. This Kantian axiom indicates that we should follow principles that, if followed, would be optimific in terms of consequences. But we should do so for Kantian motives (i.e., for the sake of duty; because doing so is right as such) and not because, as the consequentialist might say, the end justifies the means. With this modification of the CI, acts are morally wrong if they violate a moral principle that is optimific, universally willable, and not reasonably rejectable (Vol. 1, p. 413).

11

See Hauser, Chinese Room Argument, Internet Encyclopedia of Philosophy (https://iep.utm.edu/chinese-room-argument).

12

See “Qualia and Functionalism” in Kind, Qualia (https://iep.utm.edu/qualia/).

13

We seem to lack some understanding in these disciplines. For example, the black box problem indicates that even computer scientists and engineers don’t quite understand what happens between information inputs and outputs in AI systems. But if the black box problem were solved, this lack of understanding would no longer be an obstacle to explaining why AI androids do what they do.
