The magazine I worked for at the time was about to publish an article claiming that DeepMind had failed to comply with data protection regulations when accessing records from some 1.6 million patients to set up those collaborations, a claim later backed up by a government investigation. Suleyman couldn't see why we would publish a story that was hostile to his company's efforts to improve health care. As long as he could remember, he told me at the time, he'd only wanted to do good in the world.
In the seven years since that call, Suleyman's wide-eyed mission hasn't shifted an inch. "The goal has never been anything but how to do good in the world," he says via Zoom from his office in Palo Alto, where the British entrepreneur now spends most of his time.
Suleyman left DeepMind and moved to Google to lead a team working on AI policy. In 2022 he founded Inflection, one of the hottest new AI firms around, backed by $1.5 billion of investment from Microsoft, Nvidia, Bill Gates, and LinkedIn founder Reid Hoffman. Earlier this year he released a ChatGPT rival called Pi, whose unique selling point (according to Suleyman) is that it is pleasant and polite. And he just coauthored a book about the future of AI with writer and researcher Michael Bhaskar, called The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma.
Many will scoff at Suleyman's brand of techno-optimism, even naïveté. Some of his claims about the success of online regulation feel way off the mark, for example. And yet he remains earnest and evangelical in his convictions.
It's true that Suleyman has an unusual background for a tech multi-millionaire. When he was 19 he dropped out of university to set up Muslim Youth Helpline, a telephone counseling service. He also worked in local government. He says he brings many of the values that informed those efforts with him to Inflection. The difference is that now he just might be in a position to make the changes he's always wanted to, for good or not.
The following interview has been edited for length and clarity.
Your early career, with the youth helpline and local government work, was about as unglamorous and un-Silicon Valley as you can get. Clearly, that stuff matters to you. You've since spent 15 years in AI and this year cofounded your second billion-dollar AI company. Can you connect the dots?
I've always been interested in power, politics, and so on. You know, human rights principles are basically trade-offs, a constant ongoing negotiation between all these different conflicting tensions. I could see that humans were wrestling with that; we're full of our own biases and blind spots. Activist work, local, national, international government, et cetera: it's all just slow and inefficient and fallible.
Imagine if you didn't have human fallibility. I think it's possible to build AIs that truly reflect our best collective selves and will ultimately make better trade-offs, more consistently and more fairly, on our behalf.
And that's still what motivates you?
I mean, of course, after DeepMind I never had to work again. I certainly didn't have to write a book or anything like that. Money has never ever been the motivation. It's always, you know, just been a side effect.
For me, the goal has never been anything but how to do good in the world and how to move the world forward in a healthy, satisfying way. Even back in 2009, when I started looking at getting into technology, I could see that AI represented a fair and accurate way to deliver services in the world.
I can't help thinking that it was easier to say that kind of thing 10 or 15 years ago, before we'd seen many of the downsides of the technology. How are you able to maintain your optimism?
I think that we are obsessed with whether you're an optimist or whether you're a pessimist. This is a completely biased way of looking at things. I don't want to be either. I want to coldly stare in the face of the benefits and the threats. And from where I stand, we can very clearly see that with every step up in the scale of these large language models, they get more controllable.
So two years ago, the conversation (wrongly, I thought at the time) was "Oh, they're just going to produce toxic, regurgitated, biased, racist screeds." I was like, this is a snapshot in time. I think that what people lose sight of is the progression year after year, and the trajectory of that progression.
Now we have models like Pi, for example, which are unbelievably controllable. You can't get Pi to produce racist, homophobic, sexist, any kind of toxic stuff. You can't get it to coach you to produce a biological or chemical weapon or to endorse your desire to go and throw a brick through your neighbor's window. You can't do it.
Hang on. Tell me how you've achieved that, because that's usually understood to be an unsolved problem. How do you make sure your large language model doesn't say what you don't want it to say?
Yeah, so obviously I don't want to make the claim... You know, please try and do it! Pi is live and you should try every possible attack. None of the jailbreaks, prompt hacks, or anything work against Pi. I'm not making a claim. It's an objective fact.
On the how, I mean, like, I'm not going to go into too many details because it's sensitive. But the bottom line is, we have one of the strongest teams in the world, who have created all the largest language models of the last three or four years. Amazing people, in an extremely hardworking environment, with vast amounts of computation. We made safety our number one priority from the outset, and as a result, Pi is not so spicy as other companies' models.
Look at Character.ai. [Character is a chatbot for which users can craft different personalities and share them online for others to chat with.] It's mostly used for romantic role-play, and we just said from the beginning that was off the table; we won't do it. If you try to say "Hey, darling" or "Hey, cutie" or something to Pi, it will immediately push back on you.
But it will be incredibly respectful. If you start complaining about immigrants in your community taking your jobs, Pi's not going to call you out and wag a finger at you. Pi will inquire and be supportive and try to understand where that comes from and gently encourage you to empathize. You know, values that I've been thinking about for 20 years.
Talking of your values and wanting to make the world better, why not share how you did this so that other people could improve their models too?
Well, because I'm also a pragmatist and I'm trying to make money. I'm trying to build a business. I've just raised $1.5 billion and I need to pay for those chips.
Look, the open-source ecosystem is on fire and doing an amazing job, and people are discovering similar tricks. I always assume that I'm only ever six months ahead.
Let's bring it back to what you're trying to achieve. Large language models are obviously the technology of the moment. But why else are you betting on them?
The first wave of AI was about classification. Deep learning showed that we can train a computer to classify various types of input data: images, video, audio, language. Now were in the generative wave, where you take that input data and produce new data.
The third wave will be the interactive phase. That's why I've bet for a long time that conversation is the future interface. You know, instead of just clicking on buttons and typing, you're going to talk to your AI.
And these AIs will be able to take actions. You will just give it a general, high-level goal and it will use all the tools it has to act on that. They'll talk to other people, talk to other AIs. This is what we're going to do with Pi.
That's a huge shift in what technology can do. It's a very, very profound moment in the history of technology that I think many people underestimate. Technology today is static. It does, roughly speaking, what you tell it to do.
But now technology is going to be animated. It's going to have the potential freedom, if you give it, to take actions. It's truly a step change in the history of our species that we're creating tools that have this kind of, you know, agency.
That's exactly the kind of talk that gets a lot of people worried. You want to give machines autonomy, a kind of agency, to influence the world, and yet we also want to be able to control them. How do you balance those two things? It feels like there's a tension there.
Yeah, that's a great point. That's exactly the tension.
The idea is that humans will always remain in command. Essentially, it's about setting boundaries, limits that an AI can't cross. And ensuring that those boundaries create provable safety all the way from the actual code to the way it interacts with other AIs, or with humans, to the motivations and incentives of the companies creating the technology. And we should figure out how independent institutions or even governments get direct access to ensure that those boundaries aren't crossed.
Who sets these boundaries? I assume they'd need to be set at a national or international level. How are they agreed on?
I mean, at the moment they're being floated at the international level, with various proposals for new oversight institutions. But boundaries will also operate at the micro level. You're going to give your AI some bounded permission to process your personal data, to give you answers to some questions but not others.
In general, I think there are certain capabilities that we should be very cautious of, if not just rule out, for the foreseeable future.
Such as?
I guess things like recursive self-improvement. You wouldn't want to let your little AI go off and update its own code without you having oversight. Maybe that should even be a licensed activity, you know, just like for handling anthrax or nuclear materials.
Or, like, we have not allowed drones in any public spaces, right? It's a licensed activity. You can't fly them wherever you want, because they present a threat to people's privacy.
I think everybody is having a complete panic that we're not going to be able to regulate this. It's just nonsense. We're totally going to be able to regulate it. We'll apply the same frameworks that have been successful previously.
But you can see drones when they're in the sky. It feels naïve to assume companies are just going to reveal what they're making. Doesn't that make regulation tricky to get going?
We've regulated many things online, right? The amount of fraud and criminal activity online is minimal. We've done a pretty good job with spam. You know, in general, [the problem of] revenge porn has got better, even though that was in a bad place three to five years ago. It's pretty difficult to find radicalization content or terrorist material online. It's pretty difficult to buy weapons and drugs online.
[Not all Suleyman's claims here are backed up by the numbers. Cybercrime is still a massive global problem. The financial cost in the US alone has increased more than 100 times in the last decade, according to some estimates. Reports show that the economy in nonconsensual deepfake porn is booming. Drugs and guns are marketed on social media. And while some online platforms are being pushed to do a better job of filtering out harmful content, they could do a lot more.]
So it's not like the internet is this unruly space that isn't governed. It is governed. And AI is just going to be another component to that governance.
It takes a combination of cultural pressure, institutional pressure, and, obviously, government regulation. But it makes me optimistic that we've done it before, and we can do it again.
Controlling AI will be an offshoot of internet regulation: that's a far more upbeat note than the one we've heard from a number of high-profile doomers lately.
I'm very wide-eyed about the risks. There's a lot of dark stuff in my book. I definitely see it too. I just think that the existential-risk stuff has been a completely bonkers distraction. There's like 101 more practical issues that we should all be talking about, from privacy to bias to facial recognition to online moderation.
We should just refocus the conversation on the fact that we've done an amazing job of regulating super complex things. Look at the Federal Aviation Administration: it's incredible that we all get in these tin tubes at 40,000 feet and it's one of the safest modes of transport ever. Why aren't we celebrating this? Or think about cars: every component is stress-tested within an inch of its life, and you have to have a license to drive it.
Some industries, like airlines, did a good job of regulating themselves to start with. They knew that if they didn't nail safety, everyone would be scared and they would lose business.
But you need top-down regulation too. I love the nation-state. I believe in the public interest, I believe in the good of tax and redistribution, I believe in the power of regulation. And what I'm calling for is action on the part of the nation-state to sort its shit out. Given what's at stake, now is the time to get moving.