Trying To See: Part Three: A Locomotive Coming Fast – Tillamook Headlight-Herald

This column is the third part of a three-part series on the origins, present state, and future consequences of artificial intelligence (AI). The following column deals with the future benefits and risks of AI's development.

Theoretical physicist Stephen Hawking died in 2018 from ALS (Lou Gehrig's disease). In 2015, three years before he died, Hawking warned that development of full artificial intelligence (today called artificial general intelligence, or AGI) could spell the end of the human race.

We all have heard dire warnings that increasingly powerful computers will, sooner rather than later, take off on their own and become capable of redesigning and replicating themselves, outstripping humans' abilities to control them. Such a moment in the planet's history has been called a singularity, the beginning of humanity's end, and the next step in the evolution of higher intelligence on Earth--an age in which super-intelligent machines (AGI) will rule the planet.

Such hand-wringing sounds like dire science fiction, but there also are numerous optimists who predict much happier outcomes. These include the discovery of new and sustainable technologies, a more equitable distribution of wealth, and hopes for various long-term societal benefits. We already benefit greatly from AI in medicine, scientific research, transportation, finance, education and other fields of human endeavor.

AGI optimists point out that increasingly powerful algorithms and machine learning can incorporate human values and ethics, including qualities like compassion and empathy. They believe AGI will put itself in humanity's shoes, then act on those values to bring benefit to human societies. It certainly is better to live with hope and aspiration, but who gets to define those values and ethics?

Algorithms are written by many kinds of humans: creative, idealistic, ambitious, generous, competitive--some ethically indifferent or even cruel. Whether deliberately or not, computer coders program algorithms with their own conscious desires and unconscious biases. Various human cultures and smaller groups have different interpretations of what is good and bad. Today, in our politics, we even disagree on what constitutes truth and reality. Powerful corporations or banks or criminal groups will try to make AGI take actions that create more wealth and power for them; it is in their nature and their mission to do so. What is to stop them from wreaking havoc on the rest of us?

With their unrelenting desire for more, and with our ever-more powerful and seductive technology, such groups can only be slowed down by more responsible humans. But have world governments been able to stem the spread of nuclear weapons, or respond effectively to human-caused climate change? Have governments been able to control the growth of criminal gangs, drug syndicates, and worldwide weapons sales? Has the US government halted the surging rise of our national debt, or moderated the public's addiction to social media platforms that tear us apart?

What if rapidly more sophisticated AGI outstrips our capacity to control it? What would AGI decide to do regarding human overpopulation and its degradation of Earth's resources, our increasingly destructive weather, sea-level rise, or other consequences of climate change, including our inadequate supply and distribution of water? Would AGI continue the relentlessly increasing concentration of wealth and power in smaller and smaller groups of people and corporations? Or would AGI see those power centers as a threat to its own desires? How would AGI deal with the threat of nuclear war, humans' fears of people who look different from them, the exploding number of refugees in the world, or the increasing complexities of modern societies that struggle to repair and replace crumbling infrastructure?

How would AGI deal with the world's violent political and religious factions that have been inflamed, then self-organized, through the use of social media? What would AGI do about the collapse of nation states (the Soviet Union, Haiti, Somalia, Yemen, and others yet to come)? What would AGI do about whole regions of humanity that already have returned to a state of nature where coercion, violence, and terror prevail?

How would AGI networks, learning of all the human-created problems described above, deal with them? Would AGI require us to reduce our current demand for ever more pleasures and products, thus reducing our current levels of excess consumption? Would AGI think democracy and continued freedoms are still important enough to pander to long-complacent people who pay hardly any attention to voting in elections, or who are indifferent to strangers' needs or the needs of their larger society?

Or would powerful AGI machines, driven by their own logic, decide to solve these seemingly intractable problems by dealing forcefully with those who persist in being acquisitive, rebellious, or violent? Would AGI redesign the human genome to create more compliant humans, who by their natures would be subservient to AGI's authority?

No one really can foresee the consequences of AGI, although we already yield to some of its elements, whether beneficial, entertaining or intrusive. Many of us also have become more screen-dependent and passive, less empathetic and less sociable--like Zoom users who resist face-to-face meetings and contacts, claiming they are inconvenient. Given these human tendencies, plus the increasing power of AGI tools, AGI is on its way to changing the course of human history.
