Google’s AI ambassador walks a fine line between hype and doom – The Washington Post

James Manyika is one of Google's top artificial intelligence ambassadors. (Demetrius Philp for The Washington Post)

Updated August 9, 2023 at 4:28 p.m. EDT|Published August 9, 2023 at 10:00 a.m. EDT

MOUNTAIN VIEW, Calif. Amid the excited hype about artificial intelligence at Google's annual developer conference in May, it fell to James Manyika, the company's new head of tech and society, to talk about the downsides of AI.

Before thousands of people packed into an outdoor arena, Manyika discussed the scourge of fake images and how AI echoes society's racism and sexism. New problems will emerge, he warned, as the tech improves.

But rest assured that Google is taking a responsible approach to AI, he told the crowd. The words "bold" and "responsible" flashed onto a massive screen, dwarfing Manyika as he spoke.

The phrase has become Google's motto for the AI age, a replacement of sorts for "don't be evil," the mantra the company removed from the preamble of its code of conduct in 2018. The phrase sums up Silicon Valley's general message on AI, as many of the tech industry's most influential leaders rush to develop ever more powerful versions of the technology while warning of its dangers and calling for government oversight and regulation.

Manyika, a former technology adviser to the Obama administration who was born in Zimbabwe and has a PhD in AI from Oxford, has embraced this duality in his new role as Google's AI ambassador. He insists the technology will bring astounding benefits to human civilization and that Google is the right steward for this bright future. But shortly after the developer conference, Manyika signed a one-sentence statement, along with hundreds of AI researchers, warning that AI poses a risk of extinction on par with pandemics and nuclear war.

"AI is an amazing, powerful, transformational technology," Manyika said in a recent interview. At the same time, he allowed, "bad things could happen."

Critics say bad things already are happening. Since its release last November, OpenAI's ChatGPT has invented reams of false information, including a fake sexual harassment scandal that named a real law professor. Open-source versions of Stability AI's Stable Diffusion model have created a flood of realistic images of child sexual abuse, undermining efforts to combat real-world crimes. An early version of Microsoft's Bing grew disturbingly dark and hostile with users. And a recent Washington Post investigation found that several chatbots, including Google's Bard, recommended dangerously low-calorie diets, cigarettes and even tapeworms as ways to lose weight.

"Google's AI products, including Bard, are already causing harm. And that's the problem with boldness in juxtaposition with responsible AI development," said Tamara Kneese, a senior researcher and project director with Data & Society, a nonprofit that studies the effects of AI.

"Big tech companies are calling for regulation," Kneese said. "But at the same time, they are quickly shipping products with little to no oversight."

Regulators around the world are now scrambling to decide how to govern the technology, while respected researchers are warning of longer-term harms, including that the tech might one day surpass human intelligence. There's an AI-focused hearing on Capitol Hill nearly every week.

If AI has trust issues, so does Google. The company has long struggled to persuade users that it can safeguard the vast amount of data it collects from their search histories and email inboxes. The company's reputation is particularly wobbly when it comes to AI: In 2020, it fired well-known AI ethics researcher Timnit Gebru after she published a paper arguing the company's AI could be infected by racism and sexism due to the data it was trained on.

Meanwhile, the tech giant is under significant competitive pressure: Google launched its chatbot earlier this year in a rush to catch up after ChatGPT and other competitors had already captured the public imagination. Rivals like Microsoft and a host of well-funded start-ups see AI as a way to break Google's grip on the internet economy.

Manyika has stepped with calm confidence into this pressure-cooker moment. A veteran of the global conference circuit, he serves on a stunning number of high-powered boards, including the White House AI advisory council, where he is vice chair. In June, he spoke at the Cannes Lions Festival; in April, he appeared on "60 Minutes." He's presented before the United Nations and is a regular at Davos.

And in every interview, conference talk and blog post, he offers reassurance about Google's role in the AI gold rush, describing the company's approach with those same three words: "bold and responsible."

"Embrace that tension"

The phrase "bold and responsible" debuted in a blog post in January and has since popped up in every executive interview on AI and in the company's quarterly financial reports. It grew out of discussions going back months between Manyika, Google chief executive Sundar Pichai and a small group of other executives, including Google's now-chief scientist Jeff Dean; Marian Croak, the company's vice president of responsible AI; and Demis Hassabis, the head of DeepMind, an AI start-up Google acquired in 2014.

Critics have noted the inherent contradiction.

"What does it mean, honestly?" said Rebecca Johnson, an AI ethics researcher at the University of Sydney, who worked last year as a visiting researcher at Google. "It just sounds like a slogan."

At the May developer conference, Manyika acknowledged a "natural tension" between the two. But, he said, "We believe it's not only possible but in fact critical to embrace that tension. The only way to be truly bold in the long term is to be responsible from the start."

Manyika, 57, grew up in segregated Zimbabwe, then known as Rhodesia, an experience that he says showed him "the possibilities of what technology advancement and progress can make to ordinary people's lives" and made him acutely sensitive to its dangers.

Zimbabwe was then ruled by an autocratic White government that brutally repressed the country's majority-Black population, excluding Black citizens from serving in government and from living in White neighborhoods. "I know what a discriminatory system can do with technology," he said, mentioning AI tools like facial recognition. "Think of what they could have done with that."

When White-minority rule crumbled in 1980, Manyika was one of the first Black kids to attend the prestigious Prince Edward School, which had educated generations of Zimbabwe's White ruling class. "We actually took a police escort," he said, which reminded him at the time of watching films about desegregation in the United States.

Manyika went on to study engineering at the University of Zimbabwe, where he met a graduate student from Toronto working on artificial intelligence. It was his first introduction to the science of making machines think for themselves. He learned about Geoffrey Hinton, a researcher who decades later would become known as the "godfather of AI" and work alongside Manyika at Google. Hinton was working on neural networks, technology built on the idea that computers could be made to learn by designing programs that loosely mimicked pathways in the human brain, and Manyika was captivated.
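The idea that captivated Manyika can be sketched in a few lines of code. The toy program below (an illustrative sketch, not code from Hinton's research or Google) trains a single artificial "neuron": it starts with zero-valued connection weights and nudges them, example by example, until the neuron learns the logical AND function.

```python
import math

def sigmoid(x):
    """Squash any number into the range 0-1, like a neuron's firing rate."""
    return 1.0 / (1.0 + math.exp(-x))

# Training examples for the logical AND function: inputs -> target output.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# Two connection weights and a bias; "learning" means repeatedly
# adjusting these numbers to shrink the prediction error.
w1, w2, b = 0.0, 0.0, 0.0
lr = 1.0  # learning rate: how big each adjustment is

for _ in range(5000):
    for (x1, x2), target in data:
        pred = sigmoid(w1 * x1 + w2 * x2 + b)
        # Gradient of squared error, passed back through the sigmoid.
        grad = (pred - target) * pred * (1 - pred)
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b -= lr * grad

def predict(x1, x2):
    """Round the neuron's output to a 0-or-1 decision."""
    return round(sigmoid(w1 * x1 + w2 * x2 + b))

print([predict(x1, x2) for (x1, x2), _ in data])  # the learned AND table
```

Real neural networks stack millions of such units in layers, but the core loop is the same: predict, measure the error, nudge the weights.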

He won a Rhodes scholarship to study at Oxford and dug into that idea, first with a master's in math and computer science and then a PhD in AI and robotics. Most scientists working on making computers more capable believed neural networks had been discredited years earlier, and Manyika said his advisers cautioned him not to mention the subject because "no one will take you seriously."

He wrote his thesis on using AI to manage the input of different sensors for a vehicle, which helped get him a visiting scientist position at NASA's Jet Propulsion Laboratory. There, he contributed to the Pathfinder mission to land the Sojourner rover on Mars. Next, he and his partner, the British-Nigerian novelist Sarah Ladipo Manyika, moved to Silicon Valley, where he became a consultant for McKinsey and had a front-row seat to the dot-com bubble and subsequent crash. He wrote extensively on how tech breakthroughs affected the real world, publishing a book in 2011 about how the massive amount of data generated by the internet would become critical to business.

In Silicon Valley, he became known as a connector, someone who can make a key introduction or suggest a diverse range of candidates for a board position, said Erik Brynjolfsson, director of Stanford's Digital Economy Lab, who has known Manyika for years. "He has maybe the best contact list of anyone in this field," Brynjolfsson said.

His job also put him in the orbit of powerful people in Washington. He began having conversations about tech and the economy with senior Obama administration staffers, and was appointed to the White House's advisory board on innovation and the digital economy, where he helped produce a 2016 report for the Commerce Department warning that AI could displace millions of jobs. He resigned the post in 2017 after President Donald Trump refused to condemn a protest by white supremacists that turned violent in Charlottesville.

By then, AI tech was starting to take off. In the early 2010s, research by Hinton and other AI pioneers had led to major breakthroughs in image recognition, translation and medical discoveries. "I was itching to go back much more closely and fully to the research and the field of AI because things were starting to get really interesting," Manyika said.

Instead of just researching trends and writing reports from the outside, he wanted to be at Google. He spoke with Pichai, who had previously tried to recruit him, and took the job last year.

Google is arguably the preeminent company in AI, having entered the field well before OpenAI was a glimmer in Elon Musk's eye. Roughly a decade ago, the company stepped up its efforts in the space, launching an expensive talent war with other tech firms to hire the top minds in AI research. Scientists like Hinton left their jobs at universities to work directly for Google, and the company soon became a breakthrough machine.

In 2017, Google researchers put out a paper on transformers, a key breakthrough that let AI models digest much more data and laid the foundation for the technology that enables the current crop of chatbots and image generators to pass professional exams and re-create Van Gogh paintings. That same year, Pichai began pitching the company to investors and employees as "AI first."
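At the heart of that 2017 paper is a mechanism called attention, which lets every word in an input look at every other word at once, rather than reading one word at a time. The pure-Python sketch below (a simplified illustration, not code from the paper or from Google) shows the core calculation: each query scores all the keys, turns the scores into weights and takes a weighted blend of the values.

```python
import math

def softmax(xs):
    """Turn raw scores into positive weights that sum to 1."""
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over small lists of vectors.

    Every query attends to every key at once, which is what lets
    transformers process long inputs in parallel."""
    d = len(queries[0])
    out = []
    for q in queries:
        # Similarity of this query to each key, scaled by sqrt(dimension).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Blend the value vectors according to the attention weights.
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

In a real transformer this runs across thousands of positions at once, with learned projections producing the queries, keys and values; here, a query pointing in the same direction as the first key simply ends up weighting the first value more heavily.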

But the company held off releasing the tech publicly, using it instead to improve its existing cash-cow products. When you type "movie with green ogre" into Google Search and the site spits out a link to Shrek, that's AI. Advances in translation are directly tied to Google's AI work, too.

Then the ground shifted under Google's feet.

In November, ChatGPT was released to the public by OpenAI, a much smaller company initially started by Musk and other tech leaders to act as a counterweight to Big Tech's AI dominance. For the first time, people had direct access to this cutting-edge tech. The bot captured the attention of consumers and tech leaders alike, spurring Google to push out its own version, Bard, in March.

Months later, Bard is available in 40 languages and in nearly every country that isn't on a U.S. sanctions list. Though the bot is available to millions, Google still labels it an "experiment," an acknowledgment of persistent problems. For example, Bard often fabricates information.

Meanwhile, Google has lost some of the star AI researchers it hired during the talent wars, including all eight of the authors of the 2017 transformers paper. Hinton left in May, saying he wanted to be free to speak out about the dangers of AI. The company also undercut its reputation for encouraging academic dissent by firing Gebru and others, including Margaret Mitchell, who was a co-author on the paper Gebru wrote before her firing.

"They have lost a lot of the benefit of the doubt that they were good," said Mitchell, now chief ethics scientist at AI start-up Hugging Face.

"Do the useful things"

Sitting down for an interview, Manyika apologizes for overdressing in a checkered button-down shirt and suit jacket. It's formal for San Francisco, but it's the uniform he wears in many of his public appearances.

The conversation, like most in Silicon Valley these days, begins with Manyika declaring how exciting the recent surge of interest in AI is. When he joined the company, AI was just one part of his job as head of tech and society. The role didn't exist before he was hired; it's part ambassador and part internal strategist: Manyika shares Google's message with academics, think tanks, the media and government officials, while explaining to Google executives how their tech is interacting with the wider world. He reports directly to Pichai.

As the rush into AI has shifted Silicon Valley, and Google along with it, Manyika is suddenly at the center of the company's most important work.

"The timing couldn't have been better," said Kent Walker, who as Google's president of global affairs leads the company's lobbying and legal teams. Walker and Manyika have been meeting with politicians in the United States and abroad to address the growing clamor for AI regulation. Manyika, he said, has been "a very thoughtful external spokesperson for us."

Manyika's role grew substantially in April, when Hassabis took charge of core AI research at the company. The rest of Google's world-class research division went to Manyika. He now directs its efforts on climate change, health care, privacy and quantum computing, as well as AI responsibility.

Despite Google's blistering pace in the AI arms race over the past eight months, Manyika insisted that the company puts out products only when they're ready for the real world. When Google launched Bard, for example, he said, it was powered by an older model that had undergone more training and tweaking, not a more powerful but unproven version.

"Being bold doesn't mean hurry up," he said. "Bold to me means: Benefit everybody. Do the useful things. Push the frontiers to make this useful."

The November release of ChatGPT introduced the public to generative AI. "And I think that's actually great," he said. "But I'm also grateful for the thoughtful, measured approach that we continue to take with these things."

Correction

A previous version of this story inaccurately said Google deleted the phrase "don't be evil" from its code of conduct, and described President of Global Affairs Kent Walker's role as including control of the company's public relations team. Google deleted the phrase only from the preamble to its code of conduct, and Walker does not oversee public relations. This story has been corrected.
