
04/22/2022

Comments


Richard Melton

Sorry! Erratum: It is Chalmers as in David Chalmers, not Chalmer. By the way, he has a new book out called Reality+, an exploration of VR or Virtual Reality. I have not read it yet, but from some of his previous work that I have read, it is not surprising that he is taking this up. Do we live in a simulation, and how would we know? If this sounds like a replay of Cartesian doubt or radical skepticism, I gather from the reviews that Chalmers does a great deal more with it, as he typically does with every topic he touches.

Richard Melton

Patrick O’Donnell’s paper provides a rich setting for looking at the familiar conundrum of “intelligent” machines as distinct from human “intelligence,” experience, and agency. Though I am not as skeptical as he is about the meaningfulness, coherence, or possibility of “machine learning” or, for that matter, even the plausibility of teaching robots the difference between right and wrong in some sense, I do agree that we are far from anything that would qualify as equivalent to (or a “replica” of) human consciousness, agency, and autonomy in all their apparent dimensions.

I say “apparent” dimensions because we are also very far indeed from having all that figured out. Often, all we have is observable behavior to fall back on, and as Patrick points out correctly, that can be an elusive guide, to say the least.

But when we speak, for example, of teaching robots the difference between right and wrong, we need not presuppose that the robot must have all the capacities that we associate with human moral judgment. We must also remember that there is still a great deal of dispute within moral philosophy and moral psychology as to what those capacities are.

It is also true that the machine will possess the biases and fallibility of its creators, whether the inputs and the resulting “rules” are created by one person or a million people. But I cannot think of any barrier, in principle, to “teaching” machines how to decide, within a certain specified context, whether it is right or wrong, for example, to shoot someone who is innocently walking down a street if the AI entity is, let us say, a security guard.

All of this is on a very long continuum, but as Wallach, Allen, and others have pointed out, if we have machines out there on their own in the world doing things like running the electrical grid, assisting people in hospitals, driving a vehicle, or guarding a retail store, we need to start dealing with the “entity’s” need for some kind of subroutine for deciding whether to do one thing or another.
It may not have full moral agency in the more complex sense, and it may be foolish to describe what it does as making a moral judgment as an autonomous moral agent, but it is doing something that may, in the best case, prevent a bad accident or even a catastrophe.
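
To make the idea of such a subroutine concrete, here is a purely illustrative sketch in Python. The function name, the Observation fields, and the confidence threshold are all hypothetical, not drawn from any real system; the point is only that a context-bound “permitted or not permitted” check is an ordinary piece of code, whatever we decide to call it philosophically.

    from dataclasses import dataclass

    @dataclass
    class Observation:
        person_is_armed: bool        # output of an upstream (fallible) perception module
        person_is_threatening: bool  # likewise fallible and bias-prone
        confidence: float            # how sure the perception module claims to be

    def use_of_force_permitted(obs: Observation, threshold: float = 0.99) -> bool:
        """Context-bound check for a hypothetical security-guard robot.

        Defaults to refusal: force is never permitted against someone who is
        simply walking down the street, and low-confidence perception always
        resolves to "do not act."
        """
        if obs.confidence < threshold:
            return False
        return obs.person_is_armed and obs.person_is_threatening

    # Example: an innocent pedestrian is (correctly) judged to pose no threat.
    pedestrian = Observation(person_is_armed=False, person_is_threatening=False, confidence=0.97)
    assert use_of_force_permitted(pedestrian) is False

Note that every bias or error in the upstream perception module flows straight into this check, which is exactly the earlier point about the machine inheriting the fallibility of its creators.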

There is always the larger decision that humans can make about whether to have these AI robots out in the world doing anything owing to their limited ability, for now, to discern “right” from “wrong.” Maybe it is just too dangerous, but isn’t that genie out of the bottle by now?
There is much more to say about all this, and I hope a robust conversation will follow. We also need to visit some of the philosophical questions, though solving the “hard problem of consciousness” is likely not in the cards. Is Dennett or Chalmer closer to the “truth” on that question? Or neither?

