For the past five years, I've been working with Sophia, the world's most expressive humanoid robot (and the first robot citizen), and the other amazing creations of social robotics pioneer Dr. David Hanson. During this time, I've been asked a few questions over and over again.
Some of these are not so intriguing, like "Can I take Sophia out on a date?"
But there are some questions that hold more weight and lead to even deeper moral and philosophical discussions, questions such as "Why do we really want robots that look and act like humans, anyway?"
This is the question I aim to address.
The easiest answer here is purely practical. Companies are going to make, sell and lease humanoid robots because a significant segment of the population wants humanoid robots. If some people aren't comfortable with humanoid robots, they don't need to buy or rent them.
I stepped back from my role as chief scientist of Hanson Robotics earlier this year so as to devote more attention to my role as CEO of SingularityNET, but I am still working on the application of artificial general intelligence (AGI) and decentralized AI to social robotics.
At Web Summit this November, I demonstrated the OpenCog neural-symbolic AGI engine and the SingularityNET blockchain-based decentralized AI platform controlling David Hanson's Philip K. Dick robot (generously loaned to us by Dan Popa's lab at the University of Louisville). The ability of modern AI tools to generate philosophical ruminations in the manner of Philip K. Dick (PKD) is fascinating, beguiling and a bit disorienting. You can watch a video of the presentation here to see what these robots are like.
While the presentation garnered great enthusiasm, a few people also came to me with the "Why humanoid robots?" question, this time with a negative slant: comments in the vein of, "Isn't it deceptive to make robots that appear like humans even though they don't have humanlike intelligence or consciousness?"
To be clear, I'm not in favor of being deceptive. I'm a fan of open-source software and hardware, and my strong preference is to be transparent with product and service users about what's happening behind the magic digital curtain. However, the bottom line is that "it's complicated."
There is no broadly agreed theory of consciousness: no consensus on the nature of human or animal consciousness, or on the criteria a machine would need to fulfill to be considered as conscious as a human (or more so).
And intelligence is richly multidimensional. Technologies like AlphaZero and Alexa, or the AI genomic analysis software used by biologists, are far smarter than humans in some ways, though sorely lacking in certain aspects such as self-understanding and generalization capability. As research pushes gradually toward AGI, there may not be a single well-defined threshold at which "humanlike intelligence" is achieved.
A dialogue system like the one we're using in the PKD robot incorporates multiple components: some human-written dialogue script fragments, a neural network for generating text in the vein of PKD's philosophical writings, and some simple reasoning. One thread in our ongoing research focuses on more richly integrating machine reasoning with neural language generation. As this research advances, the process of the PKD robot coming to "really understand what it's talking about" is probably going to happen gradually rather than suddenly.
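To make the shape of such a hybrid pipeline concrete, here is a minimal sketch in Python. It is purely illustrative: the component names, the pattern matching and the "reasoning" step are hypothetical stand-ins, not the actual PKD robot stack built on OpenCog and SingularityNET, which is far more involved.

```python
# Illustrative sketch of a hybrid dialogue pipeline: scripted fragments,
# a neural generator, and a simple reasoning/selection step.
# All names and logic here are hypothetical, not the real PKD robot code.
import random
from typing import Callable, Optional

# 1. Hand-written script fragments, keyed by simple pattern matching.
SCRIPTED_RESPONSES = {
    "who are you": "I am a robotic recreation of the writer Philip K. Dick.",
    "are you conscious": "That depends on what you mean by consciousness.",
}

def scripted_component(user_text: str) -> Optional[str]:
    """Return a hand-authored reply if a known pattern appears in the input."""
    lowered = user_text.lower()
    for pattern, reply in SCRIPTED_RESPONSES.items():
        if pattern in lowered:
            return reply
    return None

def neural_component(user_text: str, generate: Callable[[str], str]) -> str:
    """Fall back to a neural text generator tuned on PKD-style writing.
    `generate` stands in for whatever language model is plugged in."""
    prompt = f"Respond in the voice of Philip K. Dick: {user_text}"
    return generate(prompt)

def simple_reasoning_filter(candidates: list) -> str:
    """Stand-in for the 'simple reasoning' step: here it just prefers the
    shortest non-empty candidate, but it could check consistency or topic."""
    viable = [c for c in candidates if c]
    return min(viable, key=len) if viable else "I need to think about that."

def respond(user_text: str, generate: Callable[[str], str]) -> str:
    scripted = scripted_component(user_text)
    neural = neural_component(user_text, generate)
    return simple_reasoning_filter([scripted, neural])

if __name__ == "__main__":
    # Stub generator so the sketch runs without any model installed.
    dummy_generate = lambda prompt: random.choice(
        ["Reality is that which, when you stop believing in it, doesn't go away.",
         "Perhaps the question itself is the answer."]
    )
    print(respond("Are you conscious?", dummy_generate))
```

The point of the sketch is simply that the humanlike conversational behavior people see on stage emerges from the interplay of several such modules, none of which individually "understands" what it is saying.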
It's true that giving a robot a humanoid form, and especially an expressive and reactive humanlike face, will tend to bias people to interact with the robot as if it really had human emotion, understanding and culture. In some cases this could be damaging, and it's important to take care to convey as accurately as feasible to the people involved what kind of system they're interacting with.
However, I think the connection that people tend to feel with humanoid robots is more of a feature than a bug. I wouldn't want to see human-robot relationships replace human-human relationships. But that's not the choice we're facing.
McDonald's, for instance, has bought an AI company and is replacing human workers with touchpad-based kiosks and automated voice systems for cost reasons. If people are going to do business with machines, let those machines be designed to create and maintain meaningful social and emotional connections with people.
As well as making our daily lives richer than they would be in a world dominated by faceless machines, humanoid robots have the potential to pave the way toward a future in which humans and robots and other AIs interact in mutually compassionate and synergetic ways.
As today's narrow AI segues into tomorrow's AGI, how will emerging AGI minds come to absorb human values and culture?
Hard-coded rules regarding moral values can play, at best, a very limited role, e.g., in situations like a military robot deciding who to kill, or a loan-officer AI deciding who to loan funds to. The vast majority of real-life ethical decisions are fuzzy, uncertain and contextual in nature: the kind of thing that needs to be learned by generalization from experience and by social participation.
The best way for an AI to absorb human culture is the way kids do, through rich participation in human life. Of course, the architecture of the AI's mind also matters. It has to be able to represent and manipulate thought and behavior patterns as nebulous as human values. But the best cognitive architecture won't help if the AI doesn't have the right experience base.
So my ultimate answer to "Why should we have humanoid robots?" is this: not just because people want them, or because they are better for human life and culture than faceless kiosks, but because they are the best way I can see to fill the AGI mind of the future with human values and culture.