Category Archives: Deep Mind

DeepMind AI can predict if DNA mutations are likely to be harmful – New Scientist

Google DeepMind's AlphaMissense AI can predict whether mutations will affect how proteins such as haemoglobin subunit beta (left) or cystic fibrosis transmembrane conductance regulator (right) function

Google DeepMind

Artificial intelligence firm Google DeepMind has adapted its AlphaFold system for predicting protein structure to assess whether a huge number of simple mutations are harmful.

The adapted system, called AlphaMissense, has done this for 71 million possible mutations of a kind called missense mutations across the roughly 20,000 human proteins, and the results have been made freely available.

"We think this is very helpful for clinicians and human geneticists," says Jun Cheng at Google DeepMind. "Hopefully, this can help them to pinpoint the cause of genetic disease."

Almost everyone is born with between about 50 and 100 mutations not found in their parents, resulting in a huge amount of genetic variation between individuals. For doctors sequencing a person's genome in an attempt to find the cause of a disease, this poses an enormous challenge, because there may be thousands of mutations that could be linked to that condition.

AlphaMissense has been developed to try to predict whether these genetic variants are harmless or might produce a protein linked to a disease.

A protein-coding gene tells a cell which amino acids need to be strung together to make a protein, with each set of three DNA letters coding for an amino acid. The AI focuses on missense mutations, which is when one of the DNA letters in a triplet becomes changed to another letter and can result in the wrong amino acid being added to a protein. Depending on where in the protein this happens, it can result in anything from no effect to a crucial protein no longer working at all.
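
To make the triplet logic concrete, here is a minimal, illustrative sketch (not from the article) that uses a tiny excerpt of the standard codon table to check whether a single-letter DNA change is a missense mutation; the codon examples and helper names are assumptions for illustration only.

```python
# Toy sketch: classify a single-base DNA substitution as silent, missense, or nonsense.
# The codon table below is a small excerpt of the standard genetic code;
# function and variable names are illustrative, not from the article.
CODON_TABLE = {
    "GAG": "Glu", "GTG": "Val",   # GAG -> GTG is the classic sickle-cell missense change
    "GAA": "Glu", "TAG": "STOP",
}

def classify_substitution(codon: str, position: int, new_base: str) -> str:
    """Return 'silent', 'missense', or 'nonsense' for one base substitution in a codon."""
    mutated = codon[:position] + new_base + codon[position + 1:]
    before, after = CODON_TABLE.get(codon), CODON_TABLE.get(mutated)
    if before is None or after is None:
        raise ValueError("codon not covered by this toy table")
    if after == before:
        return "silent"      # same amino acid, protein unchanged
    if after == "STOP":
        return "nonsense"    # premature stop codon
    return "missense"        # a different amino acid is inserted into the protein

print(classify_substitution("GAG", 1, "T"))  # -> "missense"
```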

People tend to have about 9000 missense mutations each. But the effects of only 0.1 per cent of the 71 million possible missense mutations we could get have been identified so far.

AlphaMissense doesn't attempt to work out how a missense mutation alters the structure or stability of a protein, and what effect this has on its interactions with other proteins, although understanding this could help find treatments. Instead, it compares the sequence of each possible mutated protein to those of all the proteins that AlphaFold was trained on to see if it "looks natural", says Žiga Avsec at Google DeepMind. Proteins that look unnatural are rated as potentially harmful on a scale from 0 to 1.

Pushmeet Kohli at Google DeepMind uses the term "intuition" to describe how it works. "In some sense, this model is leveraging the intuition that it had gained while solving the task of structure prediction," he says.

"It's like if we substitute a word from an English sentence, a person familiar with English can immediately see whether this word substitution will change the meaning of the sentence," says Avsec.

The team says AlphaMissense outperformed other computational methods when tested on known variants.

In an article commenting on the research, Joseph Marsh at the University of Edinburgh, UK, and Sarah Teichmann at the University of Cambridge write that AlphaMissense produced "remarkable results" in several different tests of its performance and that it will be helpful for prioritising which possible disease-causing mutations should be investigated further.

However, such systems can still only aid in the diagnosis process, they write.

Missense mutations are just one of many different kinds of mutations. Bits of DNA can also be added, deleted, duplicated, flipped around and so on. And many disease-causing mutations don't alter proteins, but instead occur in nearby sequences involved in regulating the activity of genes.



DeepMind’s Mustafa Suleyman says this is how to police AI – Business Insider

Mustafa Suleyman, the DeepMind cofounder who also founded Inflection AI last year, says we should make sure AI doesn't update its own code without oversight. Inflection AI

The rapid development of AI has raised questions about whether we're programming our own demise. As AI systems become more powerful, they could pose a greater risk to humanity if AI's goals suddenly stop aligning with ours.

To avoid that kind of doomsday scenario, Mustafa Suleyman, the co-founder of Google's AI division DeepMind, said there are certain capabilities we should rule out when it comes to artificial intelligence.

In a recent interview with the MIT Technology Review, Suleyman suggested that we should rule out "recursive self-improvement," which is the ability for AI to make itself better over time.

"You wouldn't want to let your little AI go off and update its own code without you having oversight," he told the MIT Technology Review. "Maybe that should even be a licensed activity you know, just like for handling anthrax or nuclear materials."

And while there's been a considerable focus on AI regulation at an institutional level (just last week, tech execs including Sam Altman, Elon Musk, and Mark Zuckerberg gathered in Washington for a closed-door forum on AI), Suleyman added that it's important for people to set limits around how their personal data is used, too.

"Essentially, it's about setting boundaries, limits that an AI can't cross," he told the MIT Technology Review, "and ensuring that those boundaries create provable safety all the way from the actual code to the way it interacts with other AIs or with humans to the motivations and incentives of the companies creating the technology."

Last year, Suleyman cofounded the AI startup Inflection AI, whose chatbot Pi is designed to be a neutral listener and provide emotional support. Suleyman told the MIT Technology Review that though Pi is not "as spicy" as other chatbots, it is "unbelievably controllable."

And while Suleyman told the MIT Technology Review he's "optimistic" that AI can be effectively regulated, he doesn't seem to be worried about a singular doomsday event. He told the publication that "there's like 101 more practical issues" we should be focusing on, from privacy to bias to facial recognition to online moderation.

Suleyman is just one among several experts in the field sounding off about AI regulation. Demis Hassabis, another DeepMind cofounder, has said that developing artificial general intelligence technologies should be done "in a cautious manner using the scientific method" and involving rigorous experiments and testing.

And Microsoft CEO Satya Nadella has said that the way to avoid "runaway AI" is to make sure we start with using it in categories where humans "unambiguously, unquestionably, are in charge."

Since March, almost 34,000 people including "godfathers of AI" like Geoffrey Hinton and Yoshua Bengio have also signed an open letter from non-profit Future of Life Institute calling for AI labs to pause training on any technology that's more powerful than OpenAI's GPT-4.



DeepMind co-founder predicts "third wave" of AI: machines talking to machines and people – TechSpot

Forward-looking: It looks like the initial hype surrounding generative AI is running out of steam. But according to Mustafa Suleyman, one of the cofounders of DeepMind, generative artificial intelligence is just a phase before the next wave: interactive AI, where machines perform multi-step tasks on their own by talking to other AIs and even people.

Suleyman gave his opinion on the state of AI in an interview with MIT Technology Review last week. He said the first wave of AI was classification, with deep learning classifying types of input data such as images and audio. The second, current AI wave is generative, which takes that input data to produce new data.

The next, third wave, according to Suleyman, will be interactive AI. He says that rather than clicking buttons or typing, users will be talking to their AIs, instructing them to take actions. "You will just give it a general, high-level goal and it will use all the tools it has to act on that. They'll talk to other people, talk to other AIs," Suleyman explained.

It's often argued that generative AI is a bit of a misnomer, seeing as these LLM-powered tools don't show intelligence in the same way as humans or animals. But Suleyman suggests that interactive AI will be closer to the artificial intelligence often seen in sci-fi media. Rather than being "static" like today's technology, phase three AI will be animated, able to carry out its instructions with freedom and agency.

"It's a very, very profound moment in the history of technology that I think many people underestimate," he added.

"Hello, I'm here to help with any tasks that need completing"

Earlier this year, Suleyman's company, Inflection AI, released a rival to ChatGPT called Pi, which he says is nicer, politer, and more focused on being conversational.

While generative AI remains an enterprise-changing, multi-billion-dollar industry, a lot of the hype around the tech has cooled in recent times. Web traffic toward the ChatGPT site is down for the third month in a row, while similar tools have reported flat or declining user growth. However, with the technology expanding in scope and advancing all the time, Suleyman's prediction that a third phase of AI will usher in a new era of technology might be more than just hyperbole. It does sound worryingly a bit like Skynet, though.


DeepMind's cofounder: Generative AI is just a phase. What's next is interactive AI. – MIT Technology Review

The magazine I worked for at the time was about to publish an article claiming that DeepMind had failed to comply with data protection regulations when accessing records from some 1.6 million patients to set up those collaborations, a claim later backed up by a government investigation. Suleyman couldn't see why we would publish a story that was hostile to his company's efforts to improve health care. As long as he could remember, he told me at the time, he'd only wanted to do good in the world.

In the seven years since that call, Suleyman's wide-eyed mission hasn't shifted an inch. "The goal has never been anything but how to do good in the world," he says via Zoom from his office in Palo Alto, where the British entrepreneur now spends most of his time.

Suleyman left DeepMind and moved to Google to lead a team working on AI policy. In 2022 he founded Inflection, one of the hottest new AI firms around, backed by $1.5 billion of investment from Microsoft, Nvidia, Bill Gates, and LinkedIn founder Reid Hoffman. Earlier this year he released a ChatGPT rival called Pi, whose unique selling point (according to Suleyman) is that it is pleasant and polite. And he just coauthored a book about the future of AI with writer and researcher Michael Bhaskar, called The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma.

Many will scoff at Suleyman's brand of techno-optimism, even naïveté. Some of his claims about the success of online regulation feel way off the mark, for example. And yet he remains earnest and evangelical in his convictions.

It's true that Suleyman has an unusual background for a tech multi-millionaire. When he was 19 he dropped out of university to set up Muslim Youth Helpline, a telephone counseling service. He also worked in local government. He says he brings many of the values that informed those efforts with him to Inflection. The difference is that now he just might be in a position to make the changes he's always wanted to, for good or not.

The following interview has been edited for length and clarity.

Your early career, with the youth helpline and local government work, was about as unglamorous and un-Silicon Valley as you can get. Clearly, that stuff matters to you. You've since spent 15 years in AI and this year cofounded your second billion-dollar AI company. Can you connect the dots?

I've always been interested in power, politics, and so on. You know, human rights principles are basically trade-offs, a constant ongoing negotiation between all these different conflicting tensions. I could see that humans were wrestling with that, we're full of our own biases and blind spots. Activist work, local, national, international government, et cetera, it's all just slow and inefficient and fallible.

Imagine if you didn't have human fallibility. I think it's possible to build AIs that truly reflect our best collective selves and will ultimately make better trade-offs, more consistently and more fairly, on our behalf.

And that's still what motivates you?

I mean, of course, after DeepMind I never had to work again. I certainly didn't have to write a book or anything like that. Money has never ever been the motivation. It's always, you know, just been a side effect.

For me, the goal has never been anything but how to do good in the world and how to move the world forward in a healthy, satisfying way. Even back in 2009, when I started looking at getting into technology, I could see that AI represented a fair and accurate way to deliver services in the world.

I can't help thinking that it was easier to say that kind of thing 10 or 15 years ago, before we'd seen many of the downsides of the technology. How are you able to maintain your optimism?

I think that we are obsessed with whether you're an optimist or whether you're a pessimist. This is a completely biased way of looking at things. I don't want to be either. I want to coldly stare in the face of the benefits and the threats. And from where I stand, we can very clearly see that with every step up in the scale of these large language models, they get more controllable.

So two years ago, the conversation (wrongly, I thought at the time) was "Oh, they're just going to produce toxic, regurgitated, biased, racist screeds." I was like, this is a snapshot in time. I think that what people lose sight of is the progression year after year, and the trajectory of that progression.

Now we have models like Pi, for example, which are unbelievably controllable. You can't get Pi to produce racist, homophobic, sexist, any kind of toxic stuff. You can't get it to coach you to produce a biological or chemical weapon or to endorse your desire to go and throw a brick through your neighbor's window. You can't do it.

Hang on. Tell me how you've achieved that, because that's usually understood to be an unsolved problem. How do you make sure your large language model doesn't say what you don't want it to say?

Yeah, so obviously I don't want to make the claim... You know, please try and do it! Pi is live and you should try every possible attack. None of the jailbreaks, prompt hacks, or anything work against Pi. I'm not making a claim. It's an objective fact.

On the how, I mean, like, I'm not going to go into too many details because it's sensitive. But the bottom line is, we have one of the strongest teams in the world, who have created all the largest language models of the last three or four years. Amazing people, in an extremely hardworking environment, with vast amounts of computation. We made safety our number one priority from the outset, and as a result, Pi is not so spicy as other companies' models.

Look at Character.ai. [Character is a chatbot for which users can craft different personalities and share them online for others to chat with.] It's mostly used for romantic role-play, and we just said from the beginning that was off the table, we won't do it. If you try to say "Hey, darling" or "Hey, cutie" or something to Pi, it will immediately push back on you.

But it will be incredibly respectful. If you start complaining about immigrants in your community taking your jobs, Pi's not going to call you out and wag a finger at you. Pi will inquire and be supportive and try to understand where that comes from and gently encourage you to empathize. You know, values that I've been thinking about for 20 years.

Talking of your values and wanting to make the world better, why not share how you did this so that other people could improve their models too?

Well, because I'm also a pragmatist and I'm trying to make money. I'm trying to build a business. I've just raised $1.5 billion and I need to pay for those chips.

Look, the open-source ecosystem is on fire and doing an amazing job, and people are discovering similar tricks. I always assume that I'm only ever six months ahead.

Let's bring it back to what you're trying to achieve. Large language models are obviously the technology of the moment. But why else are you betting on them?

The first wave of AI was about classification. Deep learning showed that we can train a computer to classify various types of input data: images, video, audio, language. Now we're in the generative wave, where you take that input data and produce new data.

The third wave will be the interactive phase. That's why I've bet for a long time that conversation is the future interface. You know, instead of just clicking on buttons and typing, you're going to talk to your AI.

And these AIs will be able to take actions. You will just give it a general, high-level goal and it will use all the tools it has to act on that. They'll talk to other people, talk to other AIs. This is what we're going to do with Pi.

That's a huge shift in what technology can do. It's a very, very profound moment in the history of technology that I think many people underestimate. Technology today is static. It does, roughly speaking, what you tell it to do.

But now technology is going to be animated. It's going to have the potential freedom, if you give it, to take actions. It's truly a step change in the history of our species that we're creating tools that have this kind of, you know, agency.

That's exactly the kind of talk that gets a lot of people worried. You want to give machines autonomy, a kind of agency, to influence the world, and yet we also want to be able to control them. How do you balance those two things? It feels like there's a tension there.

Yeah, that's a great point. That's exactly the tension.

The idea is that humans will always remain in command. Essentially, it's about setting boundaries, limits that an AI can't cross. And ensuring that those boundaries create provable safety all the way from the actual code to the way it interacts with other AIs, or with humans, to the motivations and incentives of the companies creating the technology. And we should figure out how independent institutions or even governments get direct access to ensure that those boundaries aren't crossed.

Who sets these boundaries? I assume they'd need to be set at a national or international level. How are they agreed on?

I mean, at the moment they're being floated at the international level, with various proposals for new oversight institutions. But boundaries will also operate at the micro level. You're going to give your AI some bounded permission to process your personal data, to give you answers to some questions but not others.

In general, I think there are certain capabilities that we should be very cautious of, if not just rule out, for the foreseeable future.

Such as?

I guess things like recursive self-improvement. You wouldn't want to let your little AI go off and update its own code without you having oversight. Maybe that should even be a licensed activity, you know, just like for handling anthrax or nuclear materials.

Or, like, we have not allowed drones in any public spaces, right? It's a licensed activity. You can't fly them wherever you want, because they present a threat to people's privacy.

I think everybody is having a complete panic that we're not going to be able to regulate this. It's just nonsense. We're totally going to be able to regulate it. We'll apply the same frameworks that have been successful previously.

But you can see drones when they're in the sky. It feels naïve to assume companies are just going to reveal what they're making. Doesn't that make regulation tricky to get going?

We've regulated many things online, right? The amount of fraud and criminal activity online is minimal. We've done a pretty good job with spam. You know, in general, [the problem of] revenge porn has got better, even though that was in a bad place three to five years ago. It's pretty difficult to find radicalization content or terrorist material online. It's pretty difficult to buy weapons and drugs online.

[Not all Suleyman's claims here are backed up by the numbers. Cybercrime is still a massive global problem. The financial cost in the US alone has increased more than 100 times in the last decade, according to some estimates. Reports show that the economy in nonconsensual deepfake porn is booming. Drugs and guns are marketed on social media. And while some online platforms are being pushed to do a better job of filtering out harmful content, they could do a lot more.]

So it's not like the internet is this unruly space that isn't governed. It is governed. And AI is just going to be another component to that governance.

It takes a combination of cultural pressure, institutional pressure, and, obviously, government regulation. But it makes me optimistic that we've done it before, and we can do it again.

Controlling AI will be an offshoot of internet regulation: that's a far more upbeat note than the one we've heard from a number of high-profile doomers lately.

I'm very wide-eyed about the risks. There's a lot of dark stuff in my book. I definitely see it too. I just think that the existential-risk stuff has been a completely bonkers distraction. There's like 101 more practical issues that we should all be talking about, from privacy to bias to facial recognition to online moderation.

We should just refocus the conversation on the fact that we've done an amazing job of regulating super complex things. Look at the Federal Aviation Administration: it's incredible that we all get in these tin tubes at 40,000 feet and it's one of the safest modes of transport ever. Why aren't we celebrating this? Or think about cars: every component is stress-tested within an inch of its life, and you have to have a license to drive it.

Some industries, like airlines, did a good job of regulating themselves to start with. They knew that if they didn't nail safety, everyone would be scared and they would lose business.

But you need top-down regulation too. I love the nation-state. I believe in the public interest, I believe in the good of tax and redistribution, I believe in the power of regulation. And what I'm calling for is action on the part of the nation-state to sort its shit out. Given what's at stake, now is the time to get moving.


DeepMind finds that LLMs can optimize their own prompts – VentureBeat

When people program new deep learning AI models (those that can focus on the right features of data by themselves), the vast majority rely on optimization algorithms, or optimizers, to ensure the models have a high enough rate of accuracy. But one of the most commonly used kinds, derivative-based optimizers, runs into trouble handling real-world applications.

In a new paper, researchers from DeepMind propose a new way: Optimization by PROmpting (OPRO), a method that uses AI large language models (LLM) as optimizers. The unique aspect of this approach is that the optimization task is defined in natural language rather than through formal mathematical definitions.

The researchers write, "Instead of formally defining the optimization problem and deriving the update step with a programmed solver, we describe the optimization problem in natural language, then instruct the LLM to iteratively generate new solutions based on the problem description and the previously found solutions."

The technique is highly adaptable. By simply modifying the problem description or adding specific instructions, the LLM can be guided to solve a wide array of problems.

The researchers found that, on small-scale optimization problems, LLMs can generate effective solutions through prompting alone, sometimes matching or even surpassing the performance of expert-designed heuristic algorithms. However, the true potential of OPRO lies in its ability to optimize LLM prompts to get maximum accuracy from the models.

The process of OPRO begins with a meta-prompt as input. This meta-prompt includes a natural language description of the task at hand, along with a few examples of problems, placeholders for prompt instructions, and corresponding solutions.

As the optimization process unfolds, the large language model (LLM) generates candidate solutions. These are based on the problem description and the previous solutions included in the meta-prompt.

OPRO then evaluates these candidate solutions, assigning each a quality score. Optimal solutions and their scores are added to the meta-prompt, enriching the context for the next round of solution generation. This iterative process continues until the model stops proposing better solutions.
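
As a rough, hypothetical sketch of that loop (the function names and the call_llm / score_solution hooks are placeholders, not DeepMind's implementation), the process described above could look something like this:

```python
# Hypothetical sketch of the OPRO loop described above; not DeepMind's code.
# `call_llm` and `score_solution` are placeholder hooks to be wired up to a
# real model API and evaluation set.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text completion."""
    raise NotImplementedError

def score_solution(solution: str) -> float:
    """Placeholder: assign a quality score to a candidate solution."""
    raise NotImplementedError

def build_meta_prompt(task_description: str, scored: list[tuple[str, float]]) -> str:
    # Show the best solutions so far, worst to best, so the model sees the trajectory.
    trajectory = "\n".join(
        f"solution: {s}  score: {v:.3f}" for s, v in sorted(scored, key=lambda x: x[1])
    )
    return (
        f"{task_description}\n\n"
        f"Previously found solutions and their scores:\n{trajectory}\n\n"
        "Propose a new solution that differs from the above and achieves a higher score."
    )

def opro(task_description: str, n_rounds: int = 20, keep_top: int = 10) -> tuple[str, float]:
    scored: list[tuple[str, float]] = []
    for _ in range(n_rounds):
        candidate = call_llm(build_meta_prompt(task_description, scored))
        scored.append((candidate, score_solution(candidate)))
        # Keep only the highest-scoring solutions in the meta-prompt to fit the context window.
        scored = sorted(scored, key=lambda x: x[1], reverse=True)[:keep_top]
    return scored[0]  # best (solution, score) pair found
```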

"The main advantage of LLMs for optimization is their ability of understanding natural language, which allows people to describe their optimization tasks without formal specifications," the researchers explain.

This means users can specify target metrics such as accuracy while also providing other instructions. For instance, they might request the model to generate solutions that are both concise and broadly applicable.

OPRO also capitalizes on LLMs' ability to detect in-context patterns. This enables the model to identify an optimization trajectory based on the examples included in the meta-prompt. The researchers note, "Including optimization trajectory in the meta-prompt allows the LLM to identify similarities of solutions with high scores, encouraging the LLM to build upon existing good solutions to construct potentially better ones without the need of explicitly defining how the solution should be updated."

To validate the effectiveness of OPRO, the researchers tested it on two well-known mathematical optimization problems: linear regression and the traveling salesman problem. While OPRO might not be the most optimal way to solve these problems, the results were promising.

"On both tasks, we see LLMs properly capture the optimization directions on small-scale problems merely based on the past optimization trajectory provided in the meta-prompt," the researchers report.

Experiments show that prompt engineering can dramatically affect the output of a model. For instance, appending the phrase "let's think step by step" to a prompt can coax the model into a semblance of reasoning, causing it to outline the steps required to solve a problem. This can often lead to more accurate results.

However, it's crucial to remember that this doesn't imply LLMs possess human-like reasoning abilities. Their responses are highly dependent on the format of the prompt, and semantically similar prompts can yield vastly different results. The DeepMind researchers write, "Optimal prompt formats can be model-specific and task-specific."

The true potential of Optimization by PROmpting lies in its ability to optimize prompts for LLMs like OpenAI's ChatGPT and Google's PaLM. It can guide these models to find the best prompt that maximizes task accuracy.

"OPRO enables the LLM to gradually generate new prompts that improve the task accuracy throughout the optimization process, where the initial prompts have low task accuracies," they write.

To illustrate this, consider the task of finding the optimal prompt to solve word-math problems. An optimizer LLM is provided with a meta-prompt that includes instructions and examples with placeholders for the optimization prompt (e.g., "Let's think step by step"). The model generates a set of different optimization prompts and passes them on to a scorer LLM. This scorer LLM tests them on problem examples and evaluates the results. The best prompts, along with their scores, are added to the beginning of the meta-prompt, and the process is repeated.
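
A hedged sketch of just the scoring half of that workflow, assuming a scorer_llm hook and a couple of made-up word problems with known answers (none of these names or examples come from the paper):

```python
# Hypothetical sketch of scoring candidate instruction prompts, as described above.
# `scorer_llm` is a placeholder for a call to the scoring model; the examples are made up.

def scorer_llm(instruction: str, question: str) -> str:
    """Placeholder: ask the scorer LLM to answer `question` with `instruction` prepended."""
    raise NotImplementedError

EXAMPLES = [
    {"question": "Tom has 3 apples and buys 2 more. How many does he have?", "answer": "5"},
    {"question": "A train travels 60 km in 1.5 hours. What is its speed in km/h?", "answer": "40"},
]

def score_prompt(candidate_prompt: str) -> float:
    """Fraction of example problems answered correctly when using this prompt."""
    correct = sum(
        ex["answer"] in scorer_llm(candidate_prompt, ex["question"]) for ex in EXAMPLES
    )
    return correct / len(EXAMPLES)
```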

The researchers evaluated this technique using several LLMs from the PaLM and GPT families. They found that "all LLMs in our evaluation are able to serve as optimizers, which consistently improve the performance of the generated prompts through iterative optimization until convergence."

For example, when testing OPRO with PaLM-2 on GSM8K, a benchmark of grade school math word problems, the model produced intriguing results. It began with the prompt "Let's solve the problem," and generated other strings, such as "Let's think carefully about the problem and solve it together," "Let's break it down," "Let's calculate our way to the solution," and finally "Let's do the math," which provided the highest accuracy.

In another experiment, the most accurate result was generated when the string "Take a deep breath and work on this problem step-by-step" was added before the LLM's answer.

These results are both fascinating and somewhat disconcerting. To a human, all these instructions would carry the same meaning, but they triggered very different behavior in the LLM. This serves as a caution against anthropomorphizing LLMs and highlights how much we still have to learn about their inner workings.

However, the advantage of OPRO is clear. It provides a systematic way to explore the vast space of possible LLM prompts and find the one that works best for a specific type of problem. How it will hold up in real-world applications remains to be seen, but this research is a step toward a better understanding of how LLMs work.



Generative AI is just a phase: cofounder of Google’s AI division – Business Insider

DeepMind cofounder Mustafa Suleyman. John Phillips/Stringer/Getty

Mustafa Suleyman, a cofounder of Google DeepMind, believes that "generative AI is just a phase," an opinion he shared during an interview with MIT Technology Review published Friday.

"What's next is interactive AI: bots that can carry out tasks you set for them by calling on other software and other people to get stuff done," said Suleyman, who is currently a cofounder and CEO of a new AI startup, Inflection AI.

Suleyman said that interactive AI could be more dynamic and take actions on its own, if given permission, in contrast to what he described as the "static" technology of today.

"It's a very, very profound moment in the history of technology that I think many people underestimate," he added.

Suleyman previously predicted that everyone will be able to have AI assistants within the next five years. His company, Inflection AI, launched its chatbot Pi as a rival to ChatGPT in May, focusing on personal advice and being conversational.

For context, we are currently seeing the rise of generative AI tools that go beyond the chat interface popularized by ChatGPT in November.

Investors told Insider in April that the next wave of AI startups would enable developers to construct applications using AI models and integrate them with external data sources.

ChatGPT creator OpenAI also launched a Code Interpreter feature for its chatbot in July, leading Wharton professor Ethan Mollick to say it was "the strongest case yet for a future where AI is a valuable companion for sophisticated knowledge work."

Suleyman's comments come amid fears that the generative AI boom could be overhyped.

Web traffic towards ChatGPT's website fell for the third straight month in August, according to web analytics firm Similarweb.

And investors told the Wall Street Journal in August that translating the AI buzz into effective businesses is harder than it seems, with generative AI tools Jasper and Synthesia seeing flat or declining user growth.

Suleyman and Inflection AI did not immediately respond to requests for comment from Insider, sent outside regular business hours.



Google DeepMind COO Urges Immediate Global Collaboration on … – Cryptopolitan


In a recent address at the CogX event in London, Lila Ibrahim, the Chief Operating Officer (COO) of Google DeepMind, emphasized the imperative for international cooperation in the field of artificial intelligence (AI). She called for global AI regulation to manage risks effectively while harnessing the technology's vast potential. Ibrahim's statements come in the wake of the UK government's push to position the country as a leader in AI safety and innovation. In contrast to this national focus, Ibrahim underscored that AI's impact and challenges transcend national boundaries, requiring a collaborative, worldwide approach.

The United Kingdom has been making strides in positioning itself as a hub for AI safety. Prime Minister Rishi Sunak announced in June a vision to make the UK the global center for AI safety. This aspiration aligns with the UK government's broader goal of becoming a true science and technology superpower by 2030, with a significant emphasis on safety and innovation.

Secretary of State for Universities, Michelle Donelan, echoed this vision during her address at the tech-focused CogX event. She asserted that safety would be the UK's unique selling point in the AI arms race. Donelan contended that safety considerations would be the determining factor in the global competition to lead in AI innovation.

Both Lila Ibrahim and Michelle Donelan concurred that the responsibility for ensuring AI safety rests with a collaborative effort involving organizations and governments. They stressed the importance of cooperation and coordination on a global scale to address the challenges posed by AI.

The UK government's AI Safety Summit, scheduled for November 1-2 at Bletchley Park, is a pivotal event in this endeavor. Donelan outlined the summit's objectives, which include identifying and agreeing upon AI risks, fostering collaborative research, and establishing regulatory measures to ensure AI serves as a force for good.

One of the key concepts introduced by Secretary Donelan is "responsible capability scaling". This approach encourages AI developers to be proactive in monitoring and managing risks associated with their AI systems. Developers are expected to outline how they plan to control risks and take necessary actions, which may include slowing down or pausing AI projects until improved safety mechanisms are in place.

Donelan emphasized the importance of making responsible capability scaling a standard practice in the AI industry. She likened it to having a smoke alarm in one's kitchen, suggesting that it should become an integral part of AI development to ensure the safety of AI technologies.

Lila Ibrahim's call for international cooperation in regulating AI underscores the global nature of AI's impact and potential risks. While individual countries can make significant strides in AI development and safety, the interconnectedness of the digital world demands a collaborative approach.

The rapid advancement of AI capabilities further amplifies the need for swift and effective international regulation. As AI technologies continue to evolve and proliferate, the risks associated with them also become more complex and widespread. International coordination can facilitate the sharing of knowledge, best practices, and regulatory frameworks, ensuring that AI benefits humanity while minimizing potential harm.

The United Kingdom's commitment to becoming a leader in AI safety and innovation is evident through its policies and initiatives. Prime Minister Rishi Sunak's vision of making the UK a global AI safety hub aligns with the government's broader ambition to excel in science and technology. By prioritizing safety, the UK seeks to differentiate itself in the global competition for AI leadership.

The call for international cooperation on AI regulation, as advocated by Google DeepMind's COO Lila Ibrahim, resonates with the urgency of addressing the challenges posed by artificial intelligence on a global scale. While the UK government's focus on AI safety is commendable, both Ibrahim and Secretary Michelle Donelan emphasize that the solutions to AI's complex issues require collaborative efforts beyond national borders. The upcoming AI Safety Summit in the UK serves as a crucial platform for fostering international cooperation, sharing expertise, and advancing responsible AI development practices. As AI continues to reshape industries and societies worldwide, the imperative for collective action in ensuring its safe and beneficial deployment becomes increasingly evident.


A catalogue of genetic mutations to help pinpoint the cause of … – DeepMind

New AI tool classifies the effects of 71 million missense mutations

Uncovering the root causes of disease is one of the greatest challenges in human genetics. With millions of possible mutations and limited experimental data, it's largely still a mystery which ones could give rise to disease. This knowledge is crucial to faster diagnosis and developing life-saving treatments.

Today, we're releasing a catalogue of missense mutations where researchers can learn more about what effect they may have. Missense variants are genetic mutations that can affect the function of human proteins. In some cases, they can lead to diseases such as cystic fibrosis, sickle-cell anaemia, or cancer.

The AlphaMissense catalogue was developed using AlphaMissense, our new AI model which classifies missense variants. In a paper published in Science, we show it categorised 89% of all 71 million possible missense variants as either likely pathogenic or likely benign. By contrast, only 0.1% have been confirmed by human experts.

AI tools that can accurately predict the effect of variants have the power to accelerate research across fields from molecular biology to clinical and statistical genetics. Experiments to uncover disease-causing mutations are expensive and laborious: every protein is unique and each experiment has to be designed separately, which can take months. By using AI predictions, researchers can get a preview of results for thousands of proteins at a time, which can help to prioritise resources and accelerate more complex studies.

We've made all of our predictions freely available to the research community and open sourced the model code for AlphaMissense.

A missense variant is a single letter substitution in DNA that results in a different amino acid within a protein. If you think of DNA as a language, switching one letter can change a word and alter the meaning of a sentence altogether. In this case, a substitution changes which amino acid is translated, which can affect the function of a protein.

The average person is carrying more than 9,000 missense variants. Most are benign and have little to no effect, but others are pathogenic and can severely disrupt protein function. Missense variants can be used in the diagnosis of rare genetic diseases, where a few or even a single missense variant may directly cause disease. They are also important for studying complex diseases, like type 2 diabetes, which can be caused by a combination of many different types of genetic changes.

Classifying missense variants is an important step in understanding which of these protein changes could give rise to disease. Of more than 4 million missense variants that have been seen already in humans, only 2% have been annotated as pathogenic or benign by experts, roughly 0.1% of all 71 million possible missense variants. The rest are considered variants of unknown significance due to a lack of experimental or clinical data on their impact. With AlphaMissense we now have the clearest picture to date by classifying 89% of variants using a threshold that yielded 90% precision on a database of known disease variants.

AlphaMissense is based on our breakthrough model AlphaFold, which predicted structures for nearly all proteins known to science from their amino acid sequences. Our adapted model can predict the pathogenicity of missense variants altering individual amino acids of proteins.

To train AlphaMissense, we fine-tuned AlphaFold on labels distinguishing variants seen in human and closely related primate populations. Variants commonly seen are treated as benign, and variants never seen are treated as pathogenic. AlphaMissense does not predict the change in protein structure upon mutation or other effects on protein stability. Instead, it leverages databases of related protein sequences and structural context of variants to produce a score between 0 and 1 approximately rating the likelihood of a variant being pathogenic. The continuous score allows users to choose a threshold for classifying variants as pathogenic or benign that matches their accuracy requirements.
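
As a toy illustration of how a continuous score might be turned into labels, the snippet below applies a user-chosen cutoff; the threshold values used here are assumptions for the example, not AlphaMissense's published cutoffs.

```python
# Toy example of turning a continuous 0-1 pathogenicity score into class labels.
# The thresholds are illustrative placeholders, not AlphaMissense's published values.
LIKELY_BENIGN_MAX = 0.34      # at or below this -> "likely benign" (assumed cutoff)
LIKELY_PATHOGENIC_MIN = 0.56  # at or above this -> "likely pathogenic" (assumed cutoff)

def classify_variant(score: float) -> str:
    if score <= LIKELY_BENIGN_MAX:
        return "likely benign"
    if score >= LIKELY_PATHOGENIC_MIN:
        return "likely pathogenic"
    return "uncertain"  # the middle band is left unclassified

for s in (0.05, 0.45, 0.97):
    print(s, classify_variant(s))
```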

AlphaMissense achieves state-of-the-art predictions across a wide range of genetic and experimental benchmarks, all without explicitly training on such data. Our tool outperformed other computational methods when used to classify variants from ClinVar, a public archive of data on the relationship between human variants and disease. Our model was also the most accurate method for predicting results from the lab, which shows it is consistent with different ways of measuring pathogenicity.

AlphaMissense builds on AlphaFold to further the world's understanding of proteins. One year ago, we released 200 million protein structures predicted using AlphaFold, which is helping millions of scientists around the world to accelerate research and pave the way toward new discoveries. We look forward to seeing how AlphaMissense can help solve open questions at the heart of genomics and across biological science.

We've made AlphaMissense's predictions freely available to the scientific community. Together with EMBL-EBI, we are also making them more usable for researchers through the Ensembl Variant Effect Predictor.

In addition to our look-up table of missense mutations, we've shared the expanded predictions of all possible 216 million single amino acid sequence substitutions across more than 19,000 human proteins. We've also included the average prediction for each gene, which is similar to measuring a gene's evolutionary constraint: this indicates how essential the gene is for the organism's survival.
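
As a hedged sketch of the per-gene averaging mentioned above, assuming the predictions have been loaded as a table with hypothetical "gene" and "pathogenicity" columns (the file name and column names are assumptions, not the released file format):

```python
# Hypothetical sketch: per-gene mean pathogenicity from a table of predictions.
# The file path and column names are placeholders, not the released file format.
import pandas as pd

variants = pd.read_csv("alphamissense_predictions.csv")  # placeholder path

# Average the score over all substitutions in each gene; a higher mean suggests the
# gene tolerates fewer changes, a rough proxy for evolutionary constraint.
gene_constraint = (
    variants.groupby("gene")["pathogenicity"].mean().sort_values(ascending=False)
)
print(gene_constraint.head())
```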

A key step in translating this research is collaborating with the scientific community. We have been working in partnership with Genomics England to explore how these predictions could help study the genetics of rare diseases. Genomics England cross-referenced AlphaMissense's findings with variant pathogenicity data previously aggregated from human participants. Their evaluation confirmed our predictions are accurate and consistent, providing another real-world benchmark for AlphaMissense.

While our predictions are not designed to be used in the clinic directly, and should be interpreted with other sources of evidence, this work has the potential to improve the diagnosis of rare genetic disorders, and help discover new disease-causing genes.

Ultimately, we hope that AlphaMissense, together with other tools, will allow researchers to better understand diseases and develop new life-saving treatments.



How Google’s Motto ‘Don’t Be Evil’ Disappeared With Its Shaping of … – DataEthics.eu

Guest Contributor Renée Ridgway

This open access article published recently in Big Data & Society draws on Brin and Page's original 1998 paper to explain how Google developed its hegemony on search and laid the groundwork for contemporary surveillance capitalism.

"Deleterious consequences" was coined by computer scientist and theorist Phil Agre, who in 1998 expressed concern about the harmful effects of AI if programmers did not keep "one foot planted in the craft work of design and the other foot planted in the reflexive work of critique".

In this article, I revisit Brin and Page's coeval, seminal and only extant text on their search engine and the PageRank algorithm, The Anatomy of a Large-Scale Hypertextual Web Search Engine (1998). I highlight and contextualise some of their original keywords (counting citations or backlinks, trusted user, advertising, personalization, usage data, smart algorithms) that already foreshadow what was yet to come at Google in spite of their "don't be evil" motto. Although Google's mission statement, organising the world's information and making it accessible and useful, is well known, what isn't well known is that Google's intentions were not necessarily accidental, arbitrary nor (un)intentional. Through certain moments of contingency, their decisions led to corporate lock-ins, along with promoting their own services in search results and corporate acquisitions and takeovers that facilitated the "googlization of everything" (Google Ads, Google Maps, Gmail, Google Earth, Google Docs, Google DeepMind, Android, Waymo, et al).

Over the past 25 years, Google came to shape the web not only through patents and the novel PageRank algorithm that counted citations or backlinks to deliver search results, but also by reinventing digital advertising through secret auctions on keywords. Trusted users' search queries and clicking on links increased traffic and the flow of capital, as well as contributing to the world's largest "database of intentions". The article also explains how Google, now an omnipotent infrastructure intertwined with Big Data's platformization, accumulates usage data (all of it) and how that data is shared, borrowed and stored beyond just personalization. This extraction and refinement of usage data becomes what Shoshana Zuboff deems "behavioural surplus" and results in deleterious consequences: a habit of automaticity, which shapes the trusted user through ubiquitous googling and Google's smart algorithms, whilst simultaneously generating prediction products for surveillance capitalism. What would Google have become if Brin and Page in 1998 had applied a critical technical practice, combining reflexive critique and design decisions, instead of developing an advertisement company (87% of their revenue still comes from advertising as of writing) cum search engine, and not a search engine for research?
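
For readers unfamiliar with what "counting citations or backlinks" meant in practice, here is a minimal, illustrative power-iteration version of PageRank on a toy link graph; it follows the basic recurrence of the 1998 paper but is in no way Google's implementation.

```python
# Minimal, illustrative PageRank via power iteration on a toy link graph.
# Follows the basic recurrence from Brin and Page (1998); not Google's implementation.

def pagerank(links: dict[str, list[str]], damping: float = 0.85, iterations: int = 50) -> dict[str, float]:
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly over all pages
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                for target in outlinks:  # each backlink passes on a share of the page's rank
                    new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(toy_web))  # pages with more (and better-ranked) backlinks score higher
```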

This article is part of a special issue, State of Google Critique and Intervention, published by Big Data & Society as open access. Other articles can be found here.

More about Renée Ridgway

Photo: Adobe Firefly (supposedly trained only on consented data) with the prompt: "user in front of a computer searching with google as it surveils them"


Mind the tech gap: the AI divide in Europe – fDi Intelligence

Only a handful of European cities host a sizable population of highly-coveted artificial intelligence (AI) engineers, leaving most of the continent scrambling to catch up.

London, the birthplace of Alan Turing, who is considered one of the founding figures of modern computer science and AI, is home to about 24,600 AI engineers, according to figures from venture capital firm Sequoia Capital. The city's AI cluster features major employers in the AI space, such as Google DeepMind, and a flourishing community of AI start-ups that can find in the city both the capital and the talent they need to scale up.

"There are a number of great universities and apprenticeships here in the UK that allow us to bring some of the best talent into the company and into our partner ecosystem," Vishal Marria, CEO of Quantexa, a UK decision intelligence firm, told fDi after the unveiling of a $105m hub for research and development in AI solutions in the UK capital in July.

Beyond London, Paris has the second largest population of AI engineers with 7,624, followed by Zurich with 5,800.

But the European city with the highest concentration of AI engineers relative to the overall population of tech engineers is Dublin, where almost two in 10 (17%) software engineers have a specialisation in AI.

Similarly to London, one factor driving the outlier concentration in Dublin is that the city has proved a friendly base for tech giants, argues the Sequoia report, which was published in June. Meta, Google and Microsoft, among the top five companies hiring AI talent globally, have built a considerable presence here, taking advantage of Ireland's attractive tax regime for research and development.

If both the EU and other major European powerhouses like the UK have big AI ambitions, the overall level of AI talent available is still relatively low. Across the whole of Europe, only 1.4% of the population of software engineers has a specialisation in AI, with that percentage growing to 7% for engineers with some AI experience, according to Sequoia figures. These figures are even lower in the US and China, where only 1.1% and 0.5%, respectively, of all software engineers have an AI specialisation, Sequoia figures show.

"With its wealth of talent, Europe is positioning itself as a leader in the accelerating world of AI," reads the report. "While talent is amassing at the tech giants, these talent pools become aircraft carriers as entrepreneurial employees inevitably depart to start their own companies, generating yet more demand for AI skills. With assertive policy incentives in the pipeline, anyone with a stake in AI is keeping their eyes on the region."
