This is a revised version of something I posted several years ago. I was inspired to return to this topic after dipping into Erik J. Larson’s The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (Belknap Press of Harvard University Press, 2021).
I do not think it is true or fair to say that “artificial intelligence” (hereafter, AI) replicates human cognitive abilities, although it may “replicate,” in a very attenuated or simply analogical sense, one aspect or feature of one particular cognitive ability. Even then, inasmuch as human cognitive abilities do not function in isolation, working as they do more or less in tandem and within a larger cognitive (and affective, etc.) human context, the replication that takes place in this case is not in any way emulative of human intelligence as such. AI is not about emulating or instantiating (peculiarly) human intelligence, but rather about the technological replication of aspects of formal logic that can be mathematized (such as algorithms). It is thus a mistake to describe the process here as one of “automated reasoning” (i.e., AI machines don’t ‘reason,’ they compute and/or process), if only because our best philosophical and psychological conceptions of rationality and reasoning cast a net—as the later Hilary Putnam often reminded us—far wider than anything that can, in fact or in principle, be scientized, logicized, or mathematized (i.e., formalized).
Moreover, AI only occurs as the result of the (creative, collaborative, managerial …) work of human designers, programmers, and so on, meaning that it does not in any real sense add up to the construction of “autonomous” machines or robots (as that predicate is used in moral philosophy and moral psychology), although perhaps we could view this as a stipulative description or metaphor, albeit one still parasitic on, and to that extent misleading about, conceptions of human autonomy. And insofar as “replication” refers to a copy, the copy in this case is not an exact replica of the original. We should therefore be critically attentive to the metaphors and analogies (but especially the former) used in discussions of AI: “learning,” “representation,” “intelligence” (the adjective ‘artificial’ thus deserving more descriptive and normative semantic power), “autonomous,” “mind,” “brain,” and so forth. These often serve, even if unintentionally, to blur distinctions and boundaries, muddle our thinking, create conceptual confusion, obscure reality, and evade the truth. Extravagant claims are often made (by writers in popular science, scientists themselves, mass media ‘experts’ and pundits, philosophers, corporate spokespersons or executives, venture capitalists and investors generally, and so forth) to the effect that AI computers or machines possess powers uncannily “like us,” that is, that they function in ways that, heretofore at least, were demonstrably distinctive of human (and sometimes simply animal) powers and capacities: the prerogative, as it were, and for better and worse, of human beings, of persons, of personal normative agency.
Recent attempts to articulate something called “algorithmic accountability,” a concern motivated by the recognition that data selection and (so to speak) algorithmic processing often encode “politics,” psychological biases or prejudices, or stereotypical judgments, are important. But attributions of accountability and responsibility, be they individual or shared (in this case, almost invariably the latter), can only be human, not “algorithmic,” in the first instance. Hence the notion lacks any meaningful moral or legal sense unless it is a shorthand reference to the human beings responsible for producing or programming the algorithms in the first place.
A fair amount of the philosophical, legal, and scientific (including computer science) literature on AI—both its questions and its propositions—including the literature on robots, “autonomous artificial agents,” and “smart machines,” is replete with implausible presuppositions and assumptions, as well as question-begging premises, such that the arguments can be characterized by terms like implausibility, incoherence, and even “nonsense” (or failure to ‘make sense’). In short, the conceptual claims upon which its statements and premises depend, that which is at the very heart of its arguments, often make no sense; that is to say, they “fail to express something meaningful.”
Sometimes even the very title of a book will alert us to such nonsense, as in the case of Wendell Wallach and Colin Allen’s Moral Machines: Teaching Robots Right from Wrong (Oxford University Press, 2009). The title begs the question, first with the predicate “moral,” and correlatively with the phrase “teaching robots right from wrong,” which depends upon concepts heretofore never applied outside human animals or persons (we can speak of ‘teaching’ at least some kinds of animals, but it makes no sense to speak of teaching them ‘right from wrong’). It is thus eminently arguable whether we can truly “teach” robots anything, let alone a basic moral orientation, in the way, say, that we teach our children, our students, or each other, whether in informal or formal settings. The novelty of the claim, as such, is not what is at issue, although the radical and unprecedented manner in which it employs concepts and words should provoke presumptive doubts as to whether our authors have a clear and plausible picture of what it means for “us” to “be moral,” what it typically means for us to teach someone “right from wrong,” or how someone learns this fundamental moral distinction. We might employ such words in a stipulative or even theoretical sense, for specific and thus very limited purposes that are parasitic on conventional or lexical meaning(s), or perhaps simply analogical or metaphorical at bottom. But even in those cases, one risks misleading others by implying or evoking eminently questionable or arguable presuppositions or assumptions that make, in the end, for more or less conceptual confusion if not implausibility.
According to our authors, respectively a consultant and writer affiliated with Yale’s Interdisciplinary Center for Bioethics and a Professor of History and Philosophy of Science and of Cognitive Science, “today’s [computer] systems are approaching a level of complexity … that requires the systems to make moral decisions—to be programmed with ‘ethical subroutines’ to borrow a phrase from Star Trek” (the blurring of the lines between contemporary science and science fiction, or the belief that much that was once science fiction on this score is no longer fiction but the very marrow of science itself, is commonplace). This argument depends, I would argue, on a rather implausible model of what it means for us to make “moral decisions,” as well as on an incoherent or question-begging application of the predicate “ethical.” Wallach and Allen open the Introduction with the breathless statement that scientists at the Affective Computing Laboratory at the Massachusetts Institute of Technology (MIT) “are designing computers that can read human emotions,” as if this were a foregone conclusion awaiting technical development or completion. Human beings themselves are not always adept at reading emotions, insofar as we can hide them or simply “fake it,” as it were. In any case, as I argue below, a machine cannot understand what constitutes a human emotion. The expression of emotions in persons is in fact an incredibly complex experience, involving both outward and inner dimensions (some of which are cognitive), biographical history, relational contexts, and so forth, all of which are part, in principle or theory, of an organic whole, that is, the person. In the words of P.M.S. Hacker,
“Emotions and moods are the pulse of the human spirit. They are both determinants and expressions of our temperament and character. They are tokens of our engagement with the world and with our fellow human beings. [….] [T]he emotions are also perspicuously connected with what is, or is thought to be, good and bad. Our emotional pronenesses and liabilities are partly constitutive of our temperament and personality. Our ability to control our emotions, to keep their manifestations and their motivating force within the bounds of reason, is constitutive of our character as moral agents. So the investigation of the emotions is a fruitful prolegomenon to the philosophical study of morality. It provides a point of access to the elucidation of right and wrong, good and evil, virtue and vice, that skirts the morass of deontological and consequentialist approaches to ethics without neglecting the roles of duties and obligations, or the role of the consequences of our actions in our practical reasoning.”
Let’s consider, in a rough and preliminary manner, the role of emotions from the vantage point of human nature or philosophical anthropology. This should help us appreciate the extravagance or implausibility of the claim that AI machines soon will, in fact, or might, in principle, “read human emotions.” Here I rely upon P.M.S. Hacker’s third volume in his tetralogy on human nature as seen from the vantage point of philosophical anthropology, The Passions: A Study of Human Nature (2018).
In particular, I’d like to briefly address the claim that AI systems can—or soon will—“read human emotions.” By way of tilling the ground for our discussion, it is not an insignificant fact that, in the words of Hacker, “[t]he deepest students of the role of emotions in human life are the novelists, dramatists, and poets of our culture” (Hacker confines his examination of ‘the passions’ from the vantage point of philosophical anthropology to Western civilization). A virtually identical point has been made by Jon Elster in his book, Alchemies of the Mind: Rationality and the Emotions (1999):
“… [W]ith respect to an important subset of the emotions we can learn more from moralists, novelists, and playwrights than from the cumulative findings of scientific psychology. These emotions include regret, relief, hope, disappointment, shame, guilt, pridefulness, pride, hybris, envy, jealousy, malice, pity, indignation, wrath, hatred, contempt, joy, grief, and romantic love. By contrast, the scientific study of the emotions can teach us a great deal about anger, fear, disgust, parental love, and sexual desire (if we count the last two as emotions). [….] I believe…that prescientific insights into the emotions are not simply superseded by modern psychology [here Elster means largely what we would call ‘scientific psychology’] in the way that natural philosophy has been superseded by physics. Some men and women in the past have been superb students of human nature, with more wide-ranging personal experience, better powers of observation, and deeper intuitions than almost any psychologist I can think of. This is only what we should expect: There is no reason why one century out of twenty-five should have a privilege in wisdom and understanding. In the case of physics, this argument does not apply.”
I would amend Elster’s account by recognizing that insofar as psychoanalytic psychology is a “subjective science,” it fares far better with regard to instructing us about the role of our emotions in daily life than the “scientific psychology” that dominates the contemporary academic world. Finally, I would further qualify Elster’s remarks with the following from Hacker:
“The constitutive complexity of human emotions, their diverse relation to time, to knowledge and belief of a neurologically uncircumscribable scope, to reasons and the evaluation of reasons, to somatic and expressive perturbations, to motivation and decision, guarantee that there can be no simple correlation [let alone causation!] between genetic, physiological, or neural facts and an emotion [this comment is made with regard to the efforts of developmental and evolutionary psychologists as well as cognitive neuroscientists to identify a class of absolutely basic (‘natural kinds’ if you will) human emotions].”
In short, we can conclude that science does not and will not provide us with our best or most accurate knowledge and understanding of human emotions. One fundamental reason is that emotions often exhibit what Hacker describes as “both compositional complexity and contextual or narrative complexity”:
“Compositional complexity is patent in the manner in which emotions may involve cognitive and cogitative strands (perception, knowledge, belief, judgment, imagination, evaluation, and thought); sensations and perturbations; forms of facial, tonal, and behavioral manifestation, or emotionally charged utterances that express one’s feelings; reasons and motives for action; and intentionality and causality. The contextual complexity is manifest in the manner in which emotions, in all their temporal diversity, are woven into the tapestry of life. An emotional episode is rendered intelligible by reference to a past history—to previous relationships and commitments, to past deeds and encounters, and to antecedent emotional states. The loss of temper over a triviality may be made comprehensible by reference to long-standing, but suppressed, jealousy; one’s Schadenfreude (delight at the misfortune of another) by reference to one’s standing resentment at an insult. The intensity of one’s grief may be explained by reference to the passion with which one loved. [….] For the most part, understanding the emotions, as opposed to explaining their cortical and physiological roots, is idiographic rather than nomothetic, and historical rather than static.”
This suggests that the notion of AI systems or robots “reading emotions” is quite implausible if not impossible (I happen to think the latter), given the manner in which emotions are “woven into the tapestry of life.” Some might respond by asserting, more plausibly (and after the work of the psychologist Paul Ekman on facial expressions, which informs so-called emotion-recognition technology), that what is being “read” here are simply facial expressions and perhaps bodily comportment. But even assuming that is true, the claim remains doubtful, if only because even episodic or “temporary” emotions “have characteristic multiple associations, manifestations, and forms of expression” both within and across cultures (and these are not static), together with the fact that we can conceal our emotions by, say, pretending to feel an emotion we do not feel, thus mimicking emotions and emotional expressions. Moreover, and perhaps more importantly, the “facial manifestations of emotions occur in a context that gives them meaning.” Hence facial recognition software alone will not suffice to “read” our emotions, if only because, as Hacker writes,
“[o]ur emotions are made evident not only by our countenance and voice, but also by our posture and mien [all of which can be mimed and mimicked by a decent actor, an adept criminal, a dishonest person or a ‘drama queen’], the way we walk or sit, our gestures and gesticulations. So called body language, sermo corporis, as Cicero dubbed it, is rich and variegated, with natural behavioural roots and cultural modifications, constraints, refinements, and inventions. [….] Throughout recorded history, posture and deportment were refined and constrained in order to differentiate the aristocracy from the demos or plebs, imperial rulers from the ruled, and men from women. Natural gestures and gesticulations of anger, defiance, triumph, submission, grief, awe, and wonder were, from one period to another, subject to various forms of social modification and restraint to mark out the superior from the inferior, the cultivated from the uncouth.”
Thus,
“[f]acial expression, inarticulate vocal expression, gesture, and mien constitute collectively an orchestra of possible behavioural manifestations and expressions of agitations, of the perturbations of temporary emotions, of enduring emotions, of moods, and of emotional attitudes. In addition there are wholly conventional behavioural signals by means of which we express our feelings. These include nodding or shaking one’s head, thumbs up or down, pointing with index finger or—rudely—with thumb, winking, beckoning, waving, and rude and obscene gestures of rejections, mockery, and insult. Couple them with the articulate verbal expressions of agitation, emotion, mood and attitude; the tone and speed of utterance; and the volume of voice in which one speaks … and we have a veritable symphony for the manifestation and expression of affections in general and of emotions in particular. The orchestra is normally conducted in honest concord. The various forms of discord are often marks of insincerity, which, for the unaccustomed, is difficult to make. [….] One can wear a veil but, when one doesn’t, one’s features are revealed. That one can sometimes conceal one’s feelings does not imply that, when one does not, it is not the very feelings themselves that are manifest—even though anger is not shaking one’s fist and crying is not sadness.”
Hacker explains how the “behaviour of others, in all its diversity and complexity, in a context that renders it intelligible, constitutes the logical criteria for ascribing emotions to them.” These multifarious logical criteria are not available to an AI machine or robot. Furthermore, our emotions are not simply inferred from the behavioural criteria we observe: behaviour provides the (non-formal) logical and non-inductive ground for the ascription of an emotion, and such criteria are defeasible, in part because “there is a degree of opacity and sometimes even a form of constitutional indeterminacy about the emotions and their manifestation.” This “interpersonal opacity” is more frequent and pronounced in cross-cultural encounters. In any case, the opacity and indeterminacy of the emotions (or, put differently, their idiographic character), whatever their depth and authenticity or the motives they give rise to, can make for mutual misunderstanding between two people who love each other, or between two close friends who know each other well:
“There need be no disagreement between them over the facts of their relationship—but one interprets the manifold nuances of behavior and attitude one way, and the other another way. There may be no additional data to resolve the misunderstanding—all the facts are given. One person makes a pattern of their emotional life one way, the other person another way. There need be no further ‘fact of the matter.’”
This vividly illustrates, I think, the wild implausibility if not nonsense ensconced in the belief that AI machines or robots can now, or will in the near future, “read emotions.” The uniqueness of human nature and the role of emotions as part and parcel of the human condition, singled out here in terms of “the penumbra of opacity and indeterminacy surrounding the application of concepts of the emotions,” is an urgent reminder that
“there is such a thing as better and worse judgment about the emotions of others. Greater sensitivity to fine shades of behaviour is conducive to more refined insight into their hearts. Wide knowledge of mankind and openness to what people tell of themselves make for better judgment. If one knows a person well, one is more likely to be able to render his responses and reactions intelligible than if one were a mere acquaintance [or an AI machine!]. One may learn to look, and come to see what others pass over. One may become sensitive to imponderable evidence, to subtleties of glance, facial expression, gesture, and tone of voice. One will then not have a better ‘theory of the emotions’ than others: one will have become a connoisseur of the emotions.”
The capacity for and power of judgment is distinctively human and thus forever beyond the reach of AI. Only a (human) person, not a robot, has the potential to one day become “a connoisseur of the emotions.”
Philosophy of Mind & Consciousness—Selected Readings
- Bennett, M.R. and P.M.S. Hacker. Philosophical Foundations of Neuroscience. Malden, MA: Blackwell, 2003.
- Bennett, Maxwell, Daniel Dennett, Peter Hacker, John Searle, and Daniel Robinson. Neuroscience and Philosophy: Brain, Mind and Language. New York: Columbia University Press, 2007. (I find the arguments of Bennett, Hacker, and Robinson more sound and persuasive than those of Dennett and Searle.)
- Bilgrami, Akeel. Self-Knowledge and Resentment. Cambridge, MA: Harvard University Press, 2012.
- Descombes, Vincent (Stephen Adam Schwartz, trans.) The Mind’s Provisions: A Critique of Cognitivism. Princeton, NJ: Princeton University Press, 2001.
- Dreyfus, Hubert L. What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press, revised ed., 1992 (1979).
- Dreyfus, Hubert L. and Stuart E. Dreyfus. Mind over Machine. New York: Free Press, 1986.
- Finkelstein, David H. Expression and the Inner. Cambridge, MA: Harvard University Press, 2003.
- Ganeri, Jonardon. The Self: Naturalism, Consciousness, and the First-Person Stance. Oxford, UK: Oxford University Press, 2012.
- Gillett, Grant. Subjectivity and Being Somebody: Human Identity and Neuroethics. Exeter, UK: Imprint Academic, 2008.
- Gillett, Grant. The Mind and Its Discontents. New York: Oxford University Press, 2nd ed., 2009.
- Hacker, P.M.S. Human Nature: The Categorial Framework. Malden, MA: Blackwell, 2007.
- Hacker, P.M.S. The Intellectual Powers: A Study of Human Nature. Malden, MA: Wiley-Blackwell, 2013.
- Hacker, P.M.S. The Passions: A Study of Human Nature. Hoboken, NJ: John Wiley & Sons, 2018.
- Hagberg, Garry L. Describing Ourselves: Wittgenstein and Autobiographical Consciousness. New York: Oxford University Press, 2008.
- Hodgson, David. The Mind Matters: Consciousness and Choice in a Quantum World. New York: Oxford University Press, 1991.
- Horst, Steven. Beyond Reduction: Philosophy of Mind and Post-Reductionist Philosophy of Science. Oxford, UK: Oxford University Press, 2007.
- Hutto, Daniel D. The Presence of Mind. Amsterdam: John Benjamins, 1999.
- Hutto, Daniel D. Beyond Physicalism. Amsterdam: John Benjamins, 2000.
- Hutto, Daniel D. Folk Psychological Narratives: The Sociocultural Basis of Understanding. Cambridge, MA: MIT Press, 2008.
- Pardo, Michael S. and Dennis Patterson. Minds, Brains, and Law: The Conceptual Foundations of Law and Neuroscience. New York: Oxford University Press, 2013.
- Patterson, Dennis and Michael S. Pardo, eds. Philosophical Foundations of Law and Neuroscience. New York: Oxford University Press, 2016.
- Radoilska, Lubomira, ed. Autonomy and Mental Disorder. New York: Oxford University Press, 2012.
- Rorty, Amélie Oksenberg. Mind in Action: Essays in the Philosophy of Mind. Boston, MA: Beacon Press, 1988.
- Smith, Christian. What Is a Person? Chicago, IL: University of Chicago Press, 2010.
- Sprigge, T.L.S. (Leemon B. McHenry, ed.) The Importance of Subjectivity: Selected Essays on Metaphysics and Ethics. New York: Oxford University Press, 2011.
- Tallis, Raymond. The Explicit Animal: A Defence of Human Consciousness. New York: St. Martin’s Press, 1999 ed.
- Tallis, Raymond. I Am: An Inquiry into First-Person Being. Edinburgh: Edinburgh University Press, 2004.
- Tallis, Raymond. The Knowing Animal: A Philosophical Inquiry into Knowledge and Truth. Edinburgh: Edinburgh University Press, 2004.
- Tallis, Raymond. Aping Mankind: Neuromania, Darwinitis and the Misrepresentation of Humanity. Durham, England: Acumen, 2011.
- Wollheim, Richard. The Mind and Its Depths. Cambridge, MA: Harvard University Press, 1993.
Some relevant bibliographies:
- The Emotions
- Ethical Perspectives on the Sciences and Technology
- Human Nature and Personal Identity
- Psychoanalytic Psychology and Therapy
- Sullied (Natural & Social) Sciences
I also recommend familiarizing oneself with the literature on “autonomy” in philosophy, moral psychology, and political philosophy, as well as the notion of “human dignity” in philosophy and jurisprudence. I have made this post available for viewing or download at my Academia page.