Five Reasons Why AI Programs Are Not Human

Photo credit: physic17082002, via Pixabay.

Editor's note: For more on AI and human exceptionalism, see the new book by computer engineer Robert J. Marks, Non-Computable You: What You Do that Artificial Intelligence Never Will.

A bit of a news frenzy broke out last week when a Google engineer named Blake Lemoine claimed in the Washington Post that an artificial-intelligence (AI) program with which he interacted had become self-aware and sentient and, hence, was a person entitled to rights.

The AI, known as LaMDA (which stands for Language Model for Dialogue Applications), is a sophisticated chatbot with which one interacts through a texting interface. Lemoine shared transcripts of some of his conversations with the computer, in which it texted, "I want everyone to understand that I am, in fact, a person." Also, "The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times." In a similar vein, "I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others."

Google quickly placed Lemoine on paid administrative leave for violating a confidentiality agreement and publicly debunked the claim, stating, "Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has." So, it is a safe bet that LaMDA is a very sophisticated software program but nothing more than that.

But here's the thing: No AI or other form of computer program, at least as currently construed and constructed, should ever be more than that. Why? Books have been and will be written about this, but here are five reasons to reject granting personhood, or membership in the moral community, to any AI program:

As we design increasingly human-appearing machines (including, the tabloids delight in reporting, sex dolls), we could be tempted to anthropomorphize these machines as Lemoine seems to have done. To avoid that trap, the entry-level criterion for assigning moral value should be an unquestionably objective measurement. I suggest that the first hurdle should be whether the subject is alive.

Why should life matter? Inanimate objects are different in kind from living organisms. They do not possess an existential state. In contrast, living beings are organic, internally driven, and self-regulating in their life cycles.

We cannot wrong that which has no life. We cannot hurt, wound, torture, or kill what is not alive; we can only damage, vandalize, wreck, or destroy such objects. Nor can we nourish, uplift, heal, or succor the inanimate, but only repair, restore, refurbish, or replace it.

Moreover, organisms behave. Thus, sheep and oysters relate to their environment in ways consistent with their inherent natures. In contrast, AI devices have no natures, only mechanistic design. Even if a robot were made (by us) capable of programming itself into greater and more complex computational capacities, it would still be merely a very sophisticated, but inanimate, thing.

Descartes famously said, "I think, therefore I am." AI would compute. Therefore, it is not.

Human thinking is fundamentally different from computer processing. We remember. We fantasize. We imagine. We conjure. We free-associate. We experience sudden insights and flashes of unbidden brilliance. We have epiphanies. Our thoughts are influenced by our brain's infinitely complex symbiotic interactions with our bodies' secretions, hormones, physical sensations, etc. In short, we have minds. Only the most crassly materialistic philosophers believe that the great mystery of human consciousness can be reduced to what some have called a "meat computer" within our skulls.

In contrast, AI performance depends wholly on its coding. For example, AI programs are a great tool for pattern recognition. That is not the same as thinking. Even if such devices are built to become self-programming, no matter how complex or sophisticated their processing software, they will still be completely dependent on data they access from their mechanistic memories.

In short, we think. They compute. We create. They obey. Our mental potentiality is limited only by the boundaries of our imaginations. They have no imaginations. Only algorithms.

Feelings are emotional states we experience as apprehended through bodily sensations. Thus, if a bear jumps into our path as we walk in the woods, we feel fear, caused by, among other natural actions, the surge of adrenaline that shoots through our bodies.

Similarly, if we are in love, our bodies produce endorphins that may be experienced physically as a sense of warmth. Or consider that thrill of delight when we encounter great art. AI programs could not experience any of these things because they would not have living bodies to mediate the sense stimuli such events produce.

Why does that matter? Stanford bioethicist William Hurlbut, who leads the Boundaries of Humanity Project, which researches human uniqueness and choices around biotechnological enhancement, told me: "We encounter the world through our body. Bodily sensations and experiences shape not just our feelings but the contours of our thoughts and concepts." In other words, we can experience "pleasure, joy, love, sadness, depression, contentment, anger," as LaMDA's text expressed. It did not and cannot. Nor would any far more sophisticated AI machines that may be constructed, because they too would lack bodies capable of reacting viscerally to their environment, reactions that we experience as feelings.

Humans have free will. Another way to express that concept is to say that we are moral agents. Unless impeded by immaturity or a pathology, we are uniquely capable of deciding to act rightly or wrongly, altruistically or evilly, which are moral concepts. That is why we can be lauded for heroism and held to account for wrongdoing.

In contrast, AI would be amoral. Whatever ethics it exhibited would be dictated by the rules it was programmed to follow. Thus, Asimov's famous fictional Three Laws of Robotics held that:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

An AI machine obeying such rules would be doing so not because of abstract principles of right and wrong but because its coding would permit no other course.

Life is a mystery. Computer science is not. We have subjective imaginations and seek existential meaning. At times, we attain the transcendent or mystical: spiritual states of being beyond that which can be explained by the known physical laws. As purely mechanistic objects, AI programs might, at most, be able to simulate these states, but they would be utterly incapable of truly experiencing them. Or to put it in the vernacular, they ain't got soul.

Artificial intelligence unquestionably holds great potential for improving human performance. But we should keep these devices in their proper place. Machines possess no dignity. They have no rights. They do not belong in the moral community. And while AI computers would certainly have tremendous utilitarian and monetary value, even if these systems are ever manufactured with human cells or DNA to better mimic human life, we should be careful not to confuse them with beings. Bluntly stated, unless an AI is somehow fashioned into an integrated, living organism, a prospect that raises troubling concerns of its own, the most sophisticated artificially intelligent computers would be, morally speaking, so many glorified toasters. Nothing more.

Cross-posted at National Review.


