Introduction
Out of respect for Turing’s intellectual brilliance, as conveyed in the introductory excerpts that follow, I address a few of his arguments.
“Alan (Mathison) Turing (23 June 1912 – 7 June 1954) was an English mathematician, computer scientist, logician, cryptanalyst, philosopher, and theoretical biologist. Turing was highly influential in the development of theoretical computer science, providing a formalisation of the concepts of algorithm and computation with the Turing machine, which can be considered a model of a general-purpose computer. He is widely considered to be the father of theoretical computer science and artificial intelligence.
* * *
“Alan Turing did not fit easily with any of the intellectual movements of his time, aesthetic, technocratic or marxist. In the 1950s, commentators struggled to find discreet words to categorise him: as ‘a scientific Shelley,’ as possessing great ‘moral integrity.’ Until the 1970s, the reality of his life was unmentionable. He is still hard to place within twentieth-century thought. He exalted the science that according to existentialists had robbed life of meaning. The most original figure, the most insistent on personal freedom, he held originality and will to be susceptible to mechanisation. The mind of Alan Turing continues to be an enigma.
But it is an enigma to which the twenty-first century seems increasingly drawn. The year of his centenary, 2012, witnessed numerous conferences, publications, and cultural events in his honor. Some reasons for this explosion of interest are obvious. One is that the question of the power and limitations of computation now arises in virtually every sphere of human activity. Another is that issues of sexual orientation have taken on a new importance in modern democracies. More subtly, the interdisciplinary breadth of Turing's work is now better appreciated. A landmark of the centenary period was the publication of Alan Turing, his work and impact (eds. Cooper and van Leeuwen, 2013), which made available almost all aspects of Turing's scientific oeuvre, with a wealth of modern commentary. In this new climate, fresh attention has been paid to Turing's lesser-known work, and new light shed upon his achievements. He has emerged from obscurity to become one of the most intensely studied figures in modern science.”
* * *
“Throughout his life, Alan Turing’s fearless approach to daunting problems helped him break new conceptual ground. From his time at Cambridge, when he published papers now recognised as the foundation of computer science, through his vital work at Bletchley Park cracking German codes – shortening the Second World War by years – to his exploration of the notion of artificial intelligence and his fascination with the application of mathematics to the biological world. At The [Alan Turing] Institute we aim to adopt a similarly ground-breaking, multi-faceted approach to our research. Despite being a singular genius, Turing was also a great collaborator, both with the hundreds of women and men at Bletchley Park, and throughout his career working with other mathematicians, engineers and scientists.
The biography Alan Turing: The Enigma by Andrew Hodges includes the following quote from Turing, which sums up the spirit in which the Institute operates: ‘The isolated man does not develop any intellectual power. It is necessary for him to be immersed in an environment of other[s]…. The search for new techniques must be regarded as carried out by the human community as a whole, rather than by individuals.’
Turing’s life was tragically affected by the societal norms of his time: despite his pivotal part in ensuring the safety of the nation and saving countless lives, his homosexuality resulted in him being defined as a security risk, and he was harassed by police surveillance up until his untimely death in 1954. Though we now live in a more progressive and open society, at the Institute we recognise the importance of actively ensuring anyone in ‘the human community’ can contribute effectively to changing the world through data science. We do this through our commitment to equality, diversity and inclusion, demonstrated by events such as ‘Gamechangers for diversity in STEM.’
On Turing’s influence on the modern world of data science Vinton Cerf, Chief Internet Evangelist for Google, says: ‘His practical realisations of computing engines shed bright light on the feasibility of purposeful computing and lit the way towards the computing rich environment we find in the 21st Century.’ Our programme in Data Science at Scale continues this legacy, identifying the ways in which computers and algorithms can be better designed to fulfil a huge range of purposes and tasks. And our Research Engineering team, which likes to think of itself as an echo of the Bletchley Park ‘Hut 8’ group led by Turing, helps the Institute develop practical data science tools.
The mathematical foundations strand of our Data-centric Engineering programme also recognises that delivering reliable and robust data science solutions requires rigorous theoretical research and practices. It’s a notion which aligns well with the ‘from first principles’ approach Turing often adopted in his work.
Turing’s revolutionary ideas in cryptography were developed in service of public safety and security, and the Institute’s programme in Defence and Security is continuing this purpose. For example, we have multiple projects looking at ways to store sensitive data, such as health records, in the cloud, in a way that not only allows the data to remain encrypted, but also makes them accessible to publicly beneficial research, without compromising anyone’s privacy.”
* * *
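A brief technical aside before turning to the arguments: the Institute’s description of research on data that stays encrypted yet remains usable for analysis alludes to what cryptographers call homomorphic encryption. The following is a minimal, deliberately insecure sketch of that general idea, using the Paillier scheme with toy parameters of my own choosing; it illustrates the technique only and says nothing about the Institute’s actual projects.

```python
# Toy Paillier cryptosystem: adding numbers that stay encrypted.
# NOT secure -- the primes are tiny and chosen only for illustration;
# real systems use vetted libraries and 2048-bit (or larger) moduli.
# Requires Python 3.9+ (math.lcm, pow(x, -1, n)).
import math
import random

p, q = 293, 433                # toy primes (assumption: illustration only)
n = p * q
n2 = n * n
g = n + 1                      # standard choice of generator
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse of L(g^lam mod n^2)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = encrypt(17), encrypt(25)
total = (a * b) % n2          # multiply ciphertexts...
assert decrypt(total) == 42   # ...and the result decrypts to the sum
```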
I do not think it is true or fair to say that “artificial intelligence” (hereafter, AI) replicates human cognitive abilities. At most it “replicates,” in a very attenuated or merely analogical sense, one aspect of one particular cognitive ability, namely formal logical reasoning. Even then, because human cognitive abilities do not function in isolation but work more or less in tandem within a larger cognitive (and affective) human context, the replication that takes place here is not in any way emulative of human intelligence as such. AI is not about emulating or instantiating (peculiarly) human intelligence; it is a technological replication of those aspects of formal logic that can be mathematized (such as algorithms). It is therefore a mistake to describe the process as “automated reasoning” (AI machines do not ‘reason,’ they compute and process), if only because our best philosophical and psychological conceptions of rationality and reasoning cast a net, as the later Hilary Putnam often reminded us, far wider than anything that can, in fact or in principle, be scientized, logicized, or mathematized (i.e., formalized).
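To make concrete what “computing rather than reasoning” looks like, here is a minimal sketch (all names invented for illustration) of forward-chaining inference over propositional rules. The program “derives” a conclusion by blind symbol manipulation; nothing in it understands, judges, or reasons in the sense Putnam had in mind.

```python
# A minimal forward-chaining inference engine for propositional rules.
# It "derives" conclusions by mechanically applying modus ponens:
# there is no understanding here, only set-membership tests in a loop.

def forward_chain(facts, rules):
    """facts: set of atoms; rules: list of (premises, conclusion) pairs."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)   # fire the rule
                changed = True
    return derived

facts = {"socrates_is_a_man"}
rules = [({"socrates_is_a_man"}, "socrates_is_mortal")]
print(forward_chain(facts, rules))
# {'socrates_is_a_man', 'socrates_is_mortal'} -- the tokens could be
# replaced by arbitrary strings and the machine would behave identically.
```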
Moreover, AI comes about only as the result of the (creative, collaborative, managerial) work of human designers, programmers, and others, which means it does not in any real sense amount to the construction of “autonomous” machines or robots (as that predicate is used in moral philosophy and moral psychology), although perhaps we could treat the term as a stipulative description or metaphor, albeit one still parasitic on, and to that extent misleading about, conceptions of human autonomy. And insofar as replication means making a copy, the copy in this case is not an exact replica of the original. We should therefore be critically attentive to the metaphors and analogies (but especially the former) used in discussions of AI: “learning,” “representation,” “intelligence” (the adjective ‘artificial’ thus deserving more descriptive and normative semantic power), “autonomous,” “mind,” “brain,” and so forth. These often serve, even if unintentionally, to blur distinctions and boundaries, muddle our thinking, create conceptual confusion, obscure reality, and evade the truth. Extravagant claims are often made (by writers in popular science, scientists themselves, mass-media ‘experts’ and pundits, philosophers, corporate spokespersons and executives, venture capitalists and investors generally) to the effect that AI computers or machines possess powers uncannily “like us,” that is, that they function in ways which, heretofore at least, were demonstrably distinctive of human (and sometimes simply animal) powers and capacities, the prerogative, as it were, and for better and worse, of human beings, of persons, of personal normative agency.
Recent attempts to articulate something called “algorithmic accountability” are important; they are motivated by the recognition that data selection and (so to speak) algorithmic processing often encode “politics,” psychological biases or prejudices, or stereotypical judgments. But attributions of accountability and responsibility, whether individual or shared (in this case, almost invariably the latter), can only be human, not “algorithmic,” in the first instance. The notion therefore lacks any meaningful moral or legal sense unless it is shorthand for the human beings responsible for producing or programming the algorithms in the first place.
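A toy sketch may help fix the point about where accountability actually lies. The data and names below are wholly hypothetical; the sketch shows only that what gets called an “algorithmic” verdict is fixed by human choices about which data to collect and train on.

```python
# Toy illustration: a classifier's "bias" is inherited from human choices.
# The training records below are invented; the point is that the majority
# label in the sample -- a human selection decision -- fixes the output.
from collections import Counter

def train_majority_classifier(labelled_examples):
    """Return a rule that always predicts the most common training label."""
    majority = Counter(lbl for _, lbl in labelled_examples).most_common(1)[0][0]
    return lambda applicant: majority

# Suppose the historical hiring records we chose to train on were skewed:
history = [("profile_a", "reject"), ("profile_b", "reject"),
           ("profile_c", "reject"), ("profile_d", "accept")]
model = train_majority_classifier(history)
print(model("any_new_applicant"))   # "reject" -- the skew in the data,
# chosen and curated by people, is laundered as an "algorithmic" verdict.
```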
A fair amount of the philosophical, legal, and scientific (including computer science) literature on AI, including robots, “autonomous artificial agents,” and “smart machines,” is replete, in both its questions and its propositions, with implausible presuppositions and assumptions, as well as question-begging premises, such that the arguments can fairly be characterized in terms of implausibility, incoherence, and even “nonsense” (or failure to ‘make sense’). In short, the conceptual claims on which its statements and premises depend, that which is at the very heart of its arguments, often make no sense; that is to say, they “fail to express something meaningful.”
Sometimes even the title of a book will alert us to such nonsense, as in the case of a volume edited by Wendell Wallach and Colin Allen, Moral Machines: Teaching Robots Right from Wrong (Oxford University Press, 2009). The title begs the question, first with the predicate “moral,” and correlatively with the phrase “teaching robots right from wrong,” which depends on concepts heretofore never applied outside human animals or persons (we can speak of ‘teaching’ at least some kinds of animals, but it makes no sense to speak of teaching them ‘right from wrong’). It is thus eminently arguable whether we can truly “teach” robots anything, let alone a basic moral orientation, in the way, say, that we teach our children, our students, or each other, whether in informal or formal settings. The novelty of the claim, as such, is not what is at issue, although the radical and unprecedented manner in which it employs concepts and words should provoke presumptive doubts as to whether our authors have a clear and plausible picture of what it means for “us” to “be moral,” what it typically means for us to teach someone “right from wrong,” or how someone learns this fundamental moral distinction. We might employ such words in a stipulative or even theoretical sense, for specific and thus very limited purposes that are parasitic on conventional or lexical meanings, or perhaps simply analogical or metaphorical at bottom; but even then, one risks misleading others by implying or evoking eminently questionable presuppositions or assumptions that make, in the end, for conceptual confusion if not implausibility.
According to our editors, respectively a consultant and writer affiliated with Yale’s Interdisciplinary Center for Bioethics and a Professor of History and Philosophy of Science and of Cognitive Science, “today’s [computer] systems are approaching a level of complexity … that requires the systems to make moral decisions—to be programmed with ‘ethical subroutines’ to borrow a phrase from Star Trek.” (The blurring of the lines between contemporary science and science fiction, or the belief that much that was once science fiction on this score is no longer fiction but the very marrow of science itself, is commonplace.) This argument depends, I would argue, on a rather implausible model of what it means for us to make “moral decisions,” as well as on an incoherent or question-begging application of the predicate “ethical.” Wallach and Allen open their Introduction with the breathless statement that scientists at the Affective Computing Laboratory at the Massachusetts Institute of Technology (MIT) “are designing computers that can read human emotions,” as if this were a foregone conclusion awaiting only technical development or completion. Human beings themselves are not always adept at reading emotions, insofar as we can hide them or simply “fake it,” as it were. In any case, as I will later argue, a machine cannot understand what constitutes a human emotion. For now an assertion will have to suffice: the expression of emotion in persons is an incredibly complex experience, involving outward and inner dimensions (some of them cognitive), biographical history, relational contexts, and so forth, all of which are, in principle, part of an organic whole, that is, the person. In the words of P.M.S. Hacker,
“Emotions and moods are the pulse of the human spirit. They are both determinants and expressions of our temperament and character. They are tokens of our engagement with the world and with our fellow human beings. [….] [T]he emotions are also perspicuously connected with what is, or is thought to be, good and bad. Our emotional pronenesses and liabilities are partly constitutive of our temperament and personality. Our ability to control our emotions, to keep their manifestations and their motivating force within the bounds of reason, is constitutive of our character as moral agents. So the investigation of the emotions is a fruitful prolegomenon to the philosophical study of morality. It provides a point of access to the elucidation of right and wrong, good and evil, virtue and vice, that skirts the morass of deontological and consequentialist approaches to ethics without neglecting the roles of duties and obligations, or the role of the consequences of our actions in our practical reasoning.”
Last year I argued these points about the putative possibility of computers reading emotions in further detail, but as this post is already long enough, I will leave it at that and move on to other things.
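Returning for a moment to the “ethical subroutines” phrase quoted above: in practice, such a subroutine cashes out as conditionals written in advance by a programmer. The sketch below is my own invention, not Wallach and Allen’s proposal; the action names and rules are hypothetical. It is rule lookup, not moral judgment.

```python
# What an "ethical subroutine" amounts to in practice: a conditional
# written in advance by a human. The names and rules are invented for
# illustration; nothing here decides, weighs, or understands anything.

def ethical_subroutine(action, context):
    """Veto actions a programmer listed as impermissible in this context."""
    forbidden = {"administer_overdose", "disclose_patient_record"}
    if action in forbidden:
        return "blocked"            # table lookup, not moral judgment
    if context.get("patient_consent") is False and action == "share_data":
        return "blocked"
    return "permitted"

print(ethical_subroutine("share_data", {"patient_consent": False}))  # blocked
print(ethical_subroutine("take_vitals", {"patient_consent": True}))  # permitted
```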
Consider these respective definitions of “learning” and “pedagogy”:
(i) “Learning is the process [the human experience] of acquiring new understanding, knowledge, behaviors, skills, values, attitudes, and preferences.”
(ii) “Pedagogy, most commonly understood as the approach to teaching, is the theory and practice of learning, and how this process influences, and is influenced by, the social, political and psychological development of learners.”
Assuming the above definitions and characterizations are roughly, or in the main, correct and true, what does it mean to say that machines are capable of “learning”? How do machines learn? In what ways does machine learning resemble, mimic, or amount to the same thing as human learning (I leave nonhuman animals aside for now)?
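By way of contrast, here is a minimal sketch of what “learning” standardly denotes in the field of machine learning: the iterative adjustment of numerical parameters so as to reduce a measured error. The data points are invented for illustration.

```python
# What "learning" standardly denotes in machine learning: iteratively
# adjusting numbers to shrink a measured error. A minimal sketch --
# fitting y = w * x by gradient descent on made-up (x, y) data.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]   # invented data points
w = 0.0                                        # the single "learned" parameter

for step in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad                           # move w against the gradient

print(round(w, 2))  # ~2.04: an updated number, not acquired understanding
```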
When Alan Turing said, “The whole thinking process is still rather mysterious to us, but I believe that the attempt to make a thinking machine will help us greatly in finding out how we think ourselves,” it is not clear precisely what he meant. But one thing we can say at this point is that the development of “artificial intelligence” (AI) has quickened our appreciation of how radically different thinking, intelligence, and learning among human beings are from what we observe or have achieved with AI in computers and robots. Indeed, whatever “learning” takes place in computers is wholly dependent in the first instance on computer programmers, and the so-called learning programs they develop are not at all similar to the way we learn (which, at bottom, is based on experience, on consciousness, and on a mind, none of which are properties of a computer). Consider, if you will, this story from an article by the philosopher Sebastian Sunday Grève in Aeon, “AI’s First Philosopher,” which motivated me to address a few of Turing’s more philosophical ideas and arguments:
“Due to his true scientific interests in the development of computing technology, Turing had quickly become frustrated by the ongoing engineering work at the National Physical Laboratory, which was not only slow due to poor organisation but also vastly less ambitious in terms of speed and storage capacity than he wanted it to be. In mid-1947, he requested a 12-month leave of absence. The laboratory’s director, Charles Darwin (grandson of the Charles Darwin), supported this, and the request was granted. In a letter from July that year, Darwin described Turing’s reasons as follows:
‘He wants to extend his work on the machine still further towards the biological side. I can best describe it by saying that hitherto the machine has been planned for work equivalent to that of the lower parts of the brain, and he wants to see how much a machine can do for the higher ones; for example, could a machine be made that could learn by experience?’
While provocative, the question trades on conceptual confusion or ignorance about the nature of human experience: simply put, machines cannot and never will have experiences. That is, if you will, a metaphysical or ontological fact (and Turing deliberately if unsuccessfully avoided explicitly addressing such topics).
In a 1948 paper, which received widespread attention only much later, Turing proclaims that “analogy with the human brain is used as a guiding principle.” This prescription was taken to heart within computer science and the field of AI: analogies, direct and indirect, derived from connectionist approaches (neural networks and the like) have been combined with earlier, and now more elaborate or sophisticated, deductive-logical and mathematical algorithms in the field. One ironic consequence is that while a model of the human brain (in both an analogical and a metaphorical sense) was to be a “guiding principle” for AI research, what became commonplace in such fields as psychology, neuroscience, and philosophy was talk, and theoretical models, that speak of the mind or the brain as like, or even as some sort of, a computer, effectively reversing the relevant similarities and correspondences! In a later lecture Turing recognizes and effectively endorses this new and radically reductionist picture or model:
“If any machine can appropriately be described as a brain, then any digital computer can be so described … If it is accepted that real brains … are a sort of machine it will follow that our digital computer, suitably programmed, will behave like a brain.”
Be careful: that “if” is doing a lot of work here. Are “real brains” (let alone minds, which are not brains) really, in his words, “a sort of machine”? Yet Turing had a tendency later to qualify or deflate his more extravagant hopes and dreams, as evidenced in the following:
“‘The fact is,’ he went on to explain, ‘that we know very little about it [how to program a machine to behave like a brain], and very little research has yet been done.’ He adds: ‘I will only say this: that I believe the process should bear a close relation [to] that of teaching.’”
And now we come back to the parts played by learning and pedagogy among human beings; the field of AI is not engaged in practices that clearly bear a “relation to teaching,” let alone a “close relation.”
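It is worth pausing over what the “analogy with the human brain” amounts to in working code. The connectionist building block is an artificial “neuron”: a weighted sum pushed through a threshold. The sketch below (a classic perceptron learning the logical AND function; the data and learning rate are merely illustrative) shows how thin the analogy is.

```python
# The building block behind the "brain analogy": an artificial "neuron"
# is a weighted sum pushed through a threshold. A perceptron learning
# logical AND -- the whole analogy, in roughly fifteen lines.

def step(x):
    return 1 if x >= 0 else 0

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND gate
w = [0.0, 0.0]
bias = 0.0

for epoch in range(20):                      # perceptron learning rule
    for (x1, x2), target in samples:
        out = step(w[0] * x1 + w[1] * x2 + bias)
        err = target - out
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        bias += 0.1 * err

print([step(w[0] * a + w[1] * b + bias) for (a, b), _ in samples])
# [0, 0, 0, 1] -- arithmetic that matches a truth table, nothing more
```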
Here is where Grève’s article is revealing:
“… [A] fresh look at the 1950 paper shows that Turing’s aim clearly went beyond merely defining thinking (or intelligence) – contrary to the way in which philosophers such as Searle have tended to read him – or merely operationalising the concept, as computer scientists have often understood him. In particular, contra Searle and his ilk, Turing was clearly aware that a machine’s doing well in the imitation game is neither a necessary nor a sufficient criterion for thinking or intelligence. This is how he explains the similar test that he presents in the radio discussion:
‘You might call it a test to see whether the machine thinks, but it would be better to avoid begging the question, and say that the machines that pass are (let’s say) ‘Grade A’ machines … My suggestion is just that this is the question we should discuss. It’s not the same as ‘Do machines think,’ but it seems near enough for our present purpose, and raises much the same difficulties.’
This passage, along with Turing’s other writings and public speeches on the philosophy of AI (including all those described above), has received little attention. However, taken together, these writings provide a clear picture of what his primary goal was in formulating the imitation game. For instance, they show that, from 1947 onwards (and perhaps earlier), in pursuit of the same general goal, Turing in fact proposed not one but many tests for comparing humans and machines. These tests concerned learning, thinking and intelligence, and could be applied to various smaller and bigger tasks, including simple problem-solving, games such as chess and Go, as well as general conversation. But his primary goal was never merely to define or operationalise any of these things. Rather, it was always more fundamental and progressive in nature: namely, to prepare the conceptual ground, carefully and rigorously in the manner of the mathematical philosopher that he was, on which future computing technology could be successfully conceived, first by scientists and engineers, and later by policymakers and society at large.
It is widely overlooked that perhaps the most important forerunner of the imitation game is found in the short final section of Turing’s long-unpublished AI research paper of 1948, under the heading ‘Intelligence as an Emotional Concept.’ This section makes it quite obvious that the central purpose of introducing a test such as the imitation game is to clear away misunderstandings that our ordinary concepts and the ordinary use we make of them are otherwise likely to produce. As Turing explains:
‘The extent to which we regard something as behaving in an intelligent manner is determined as much by our own state of mind and training as by the properties of the object under consideration. If we are able to explain and predict its behaviour or if there seems to be little underlying plan, we have little temptation to imagine intelligence.’
We want our scientific judgment as to whether something is intelligent or not to be objective, at least to the extent that our judgment will not depend on our own state of mind; for instance, on whether we are able to explain the relevant behaviour or whether we perhaps fear the possibility of intelligence in a given case. For this reason – as he also explained in each of the three radio broadcasts and in his 1950 paper – Turing proposed ways of eliminating the emotional components of our ordinary concepts. In the 1948 paper, he wrote:
‘It is possible to do a little experiment on these lines, even at the present stage of knowledge. It is not difficult to devise a paper machine [i.e., a program written on paper] which will play a not very bad game of chess. Now get three men as subjects for the experiment A, B, C. A and C are to be rather poor chess players, B is the operator who works the paper machine. (In order that he should be able to work it fairly fast it is advisable that he be both mathematician and chess player.) Two rooms are used with some arrangement for communicating moves, and a game is played between C and either A or the paper machine. C may find it quite difficult to tell which he is playing.’
It is true that, in addition to his conceptual work, Turing advanced numerous philosophical arguments to defend the possibility of machine intelligence, anticipating – and, arguably, refuting – all of the most influential objections (from the Lucas-Penrose argument to Hubert Dreyfus to consciousness). But that is markedly different from providing metaphysical arguments in favour of the existence of machine intelligence, which Turing emphatically refused to do.”
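For concreteness, here is a skeletal rendering of the kind of comparison test Turing describes in the chess passage above. Everything in it is a stand-in of my own devising: the two respondents are canned functions and the judge guesses at random, since the point is the shape of the protocol, not the capacities of the players.

```python
# A skeletal harness for the comparison test: an interrogator sees the
# answers of two hidden parties and must say which is the machine.
# Both respondents are trivial stand-ins (assumptions of this sketch).
import random

def machine_respondent(question):
    scripted = {"Are you a machine?": "Of course not.",
                "What is 2 + 2?": "4"}
    return scripted.get(question, "I would rather talk about chess.")

def human_respondent(question):
    return "Honestly, it depends on what you mean."   # stand-in for a person

def run_round(questions, judge):
    players = [machine_respondent, human_respondent]
    random.shuffle(players)
    labels = dict(zip("AB", players))         # hide who is who behind labels
    transcript = {lab: [f(q) for q in questions] for lab, f in labels.items()}
    guess = judge(transcript)                 # judge sees answers only
    actual = next(lab for lab, f in labels.items() if f is machine_respondent)
    return guess == actual

naive_judge = lambda transcript: random.choice(list(transcript))
wins = sum(run_round(["What is 2 + 2?", "Are you a machine?"], naive_judge)
           for _ in range(1000))
print(wins)   # ~500 of 1000: a judge at chance cannot tell them apart
```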
So, while it is undeniable that “Turing advanced numerous philosophical arguments to defend the possibility of machine intelligence,” those arguments must be assessed against plausible, if not sound and persuasive, accounts of what constitutes human intelligence, accounts which are found in the literature. I am thus confident it has been decisively demonstrated that whatever “machine intelligence” is, it is in many relevant respects quite different from human intelligence, even if the latter often, in our world, draws upon the former, recalling that the former is, in the first and last instance, wholly dependent on the latter.
Finally, Grève writes that “Turing advanced numerous philosophical arguments to defend the possibility of machine intelligence, anticipating – and, arguably, refuting – all of the most influential objections.” I doubt this is even “arguable” (which does not rule out fresh arguments); indeed, I do not think it is true that the most influential objections have been refuted, for the purported refutations rely on conceptions of intelligence that differ radically from those employed by Dreyfus, Descombes, Hacker, Bennett, Tallis, et al. From my vantage point, Turing at times was writing science fiction or speculative philosophy, which of course he was free to do. But too many intellectual fields related to AI (e.g., cognitive science and psychology, neuroscience, mathematics, and philosophy) have exploited if not manipulated these facets of his work, treating them as utopian blueprints for research programs that often detract from, if not avoid, more modest uses of AI properly cabined by fundamental ethical and moral principles (found largely outside computer science and the sciences generally) as well as by the sundry constraints (political, socio-economic, environmental, etc.) that arise from our deep and abiding concern with human welfare, well-being, and flourishing within the parameters framed by conceptions of human dignity and human rights.
Here is a list of titles (far from exhaustive and perhaps a bit idiosyncratic) that I think can help us properly assess some of the statements and arguments (including presuppositions and assumptions) made by Turing about AI, as well as, perhaps more importantly or urgently, updated versions of the same made by contemporary enthusiasts of AI and robotics, the latter prone to capitalist technophilia and unabashed indulgence in scientific phantasies in the name of Promethean promises of civilizational progress that render modest the optimism of the European Enlightenment:
- Bennett, M.R. and P.M.S. Hacker. Philosophical Foundations of Neuroscience (Blackwell, 2003).
- Bennett, Maxwell, Daniel Dennett, Peter Hacker, John Searle, and Daniel Robinson. Neuroscience and Philosophy: Brain, Mind and Language (Columbia University Press, 2007). (I favor the arguments of Bennett, Hacker, and Robinson over those of Dennett and Searle.)
- Brakel, Linda A.W. Philosophy, Psychoanalysis and the A-Rational Mind (Oxford University Press, 2009).
- Brakel, Linda A.W. Unconscious Knowing and Other Essays in Psycho-Philosophical Analysis (Oxford University Press, 2010).
- Cassam, Quassim, ed. Self-Knowledge (Oxford University Press, 1994).
- Descombes, Vincent (Stephen Adam Schwartz, tr.). The Mind’s Provisions: A Critique of Cognitivism (Princeton University Press, 2001).
- Dilman, Ilham. Freud and Human Nature (Basil Blackwell, 1983).
- Dilman, Ilham. Freud and the Mind (Basil Blackwell, 1984).
- Dilman, Ilham. Freud: Insight and Change (Basil Blackwell, 1988).
- Dilman, Ilham. Raskolnikov’s Rebirth: Psychology and the Understanding of Good and Evil (Open Court, 2000).
- Dreyfus, Hubert L. What Computers Still Can’t Do: A Critique of Artificial Reason (Cambridge, MA: MIT Press, revised ed., 1992 [1979]).
- Dreyfus, Hubert L. and Stuart E. Dreyfus. Mind over Machine (Free Press, 1986).
- Elster, Jon, ed., Multiple Selves (Cambridge University Press, 1986).
- Finkelstein, David H. Expression and the Inner. (Harvard University Press, 2003).
- Ganeri, Jonardon. The Self: Naturalism, Consciousness, and the First-Person Stance (Oxford University Press, 2012).
- Gillett, Grant. Subjectivity and Being Somebody: Human Identity and Neuroethics (Imprint Academic, 2008).
- Gillett, Grant. The Mind and Its Discontents (Oxford University Press, 2nd ed., 2009).
- Hacker, P.M.S. Human Nature: The Categorial Framework (Blackwell, 2007).
- Hacker, P.M.S. The Intellectual Powers: A Study of Human Nature (Wiley-Blackwell, 2013).
- Hacker, P.M.S. The Passions: A Study of Human Nature (John Wiley & Sons, 2018).
- Hacker, P.M.S. The Moral Powers: A Study of Human Nature (John Wiley & Sons, 2021).
- Hodgson, David. The Mind Matters: Consciousness and Choice in a Quantum World (Oxford University Press, 1991).
- Horst, Steven. Beyond Reduction: Philosophy of Mind and Post-Reductionist Philosophy of Science (Oxford University Press, 2007).
- Hutto, Daniel D. The Presence of Mind (John Benjamins, 1999).
- Hutto, Daniel D. Beyond Physicalism (John Benjamins, 2000).
- Hutto, Daniel D. Folk Psychological Narratives: The Sociocultural Basis of Understanding (MIT Press, 2008).
- Koch, Christof. The Feeling of Life Itself: Why Consciousness Is Widespread but Can’t Be Computed (MIT Press, 2019).
- Laden, Anthony Simon. Reasoning: A Social Picture (Oxford University Press, 2014).
- Larson, Erik J. The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (Belknap Press of Harvard University Press, 2021).
- Lear, Jonathan. Love and Its Place in Nature: A Philosophical Interpretation of Freudian Psychoanalysis (Farrar, Straus & Giroux, 1990).
- Lear, Jonathan. Open Minded: Working Out the Logic of the Soul (Harvard University Press, 1998).
- Lear, Jonathan. Wisdom Won from Illness: Essays in Philosophy and Psychoanalysis (Harvard University Press, 2017).
- Midgley, Mary. Beast and Man: The Roots of Human Nature (Routledge, revised ed., 1995).
- Pardo, Michael S. and Dennis Patterson. Minds, Brains, and Law: The Conceptual Foundations of Law and Neuroscience (Oxford University Press, 2013).
- Parfit, Derek. Reasons and Persons (Oxford University Press, with corrections, 1987).
- Patterson, Dennis and Michael S. Pardo, eds. Philosophical Foundations of Law and Neuroscience (Oxford University Press, 2016).
- Putnam, Hilary. Reason, Truth, and History (Cambridge University Press, 1981).
- Putnam, Hilary. The Collapse of the Fact/Value Dichotomy and Other Essays (Harvard University Press, 2002).
- Rorty, Amélie Oksenberg. Mind in Action: Essays in the Philosophy of Mind (Beacon Press, 1988).
- Smith, Christian. What Is a Person? (University of Chicago Press, 2010).
- Tallis, Raymond. The Explicit Animal: A Defence of Human Consciousness (St. Martin’s Press, 1999 ed.).
- Tallis, Raymond. I Am: An Inquiry into First-Person Being (Edinburgh University Press, 2004).
- Tallis, Raymond. The Knowing Animal: A Philosophical Inquiry into Knowledge and Truth (Edinburgh University Press, 2004).
- Tallis, Raymond. Aping Mankind: Neuromania, Darwinitis and the Misrepresentation of Humanity (Acumen, 2011).
- Tallis, Raymond. Seeing Ourselves: Reclaiming Humanity from God and Science (Agenda Publishing, 2020).
- Turkle, Sherry. The Second Self: Computers and the Human Spirit (MIT Press, 2005 ed. [1st ed., 1984]).
- Turkle, Sherry. Alone Together: Why We Expect More from Technology and Less from Each Other (Basic Books, 2011).
- Velleman, J. David. Self to Self: Selected Essays (Cambridge University Press, 2006).
- Velleman, J. David. How We Get Along (Cambridge University Press, 2009).
- Wollheim, Richard. The Thread of Life (Harvard University Press, 1984).
- Wollheim, Richard. The Mind and Its Depths (Harvard University Press, 1993).
By the way, David Chalmers has a new book out called Reality+, an exploration of VR or virtual reality. I have not read it yet, but from some of his previous work that I have read, it is not surprising that he is taking this up. Do we live in a simulation, and how would we know? If this sounds like a replay of Cartesian doubt or radical skepticism, I gather from the reviews that Chalmers does a great deal more with it, as he typically does with every topic he touches.
Posted by: Richard Melton | 04/24/2022 at 08:47 AM
Patrick O’Donnell’s paper provides a rich setting for looking at the familiar conundrum of “intelligent” machines as distinct from human “intelligence,” experience, and agency. Though I am not as skeptical as he about the meaningfulness, coherence, or possibility of “machine learning” or, for that matter, even the plausibility of teaching robots the difference between right and wrong in some sense, I do agree that we are far from anything that would qualify as equivalent to (or a “replica” of) human consciousness, agency, and autonomy in all their apparent dimensions.
I say “apparent” dimensions because we are also very far indeed from having all that figured out. Often, all we have is observable behavior to fall back on, and as Patrick points out correctly, that can be an elusive guide, to say the least.
But when we speak, for example, of teaching robots the difference between right and wrong, we need not presuppose that the robot must have all the capacities that we associate with human moral judgment. We must also remember that there is still a great deal of dispute within moral philosophy and moral psychology as to what those capacities are.
It is also true that the machine will possess the biases and fallibility of its creators, whether the inputs and the resulting “rules” are created by one person or a million people. But I cannot think of any barrier, in principle, to “teaching” machines how to decide, within a certain specified context, whether it is right or wrong, for example, to shoot someone who is innocently walking down a street if the AI entity is, let us say, a security guard.
All of this is on a very long continuum, but as Wallach, Allen, and others have pointed out, if we have machines out there on their own in the world doing things like running the electrical grid, assisting people in hospitals, driving a vehicle, or guarding a retail store, we need to start dealing with the “entity’s” need for some kind of sub-routine for whether to do one thing or another.
It may not have full moral agency in the more complex sense, and it may be foolish to describe what it does as making a moral judgment as an autonomous moral agent, but it is doing something that may, in the best case, prevent a bad accident or even a catastrophe.
There is always the larger decision that humans can make about whether to have these AI robots out in the world doing anything owing to their limited ability, for now, to discern “right” from “wrong.” Maybe it is just too dangerous, but isn’t that genie out of the bottle by now?
There is much more to say about all this, and I hope a robust conversation will follow. We also need to visit some of the philosophical questions, though solving the “hard problem of consciousness” is likely not in the cards. Is Dennett or Chalmers closer to the “truth” on that question? Or neither?
Posted by: Richard Melton | 04/23/2022 at 10:09 AM