
There was a bank row? – ChessBase

Recently I had one of my favourite guests over. Leon Mendonca (pronounced Men-don-SA, incidentally) and his father Lyndon stayed with us for a week, and we had a great time playing Wordle and GeoGuessr, with the kid joining basketball matches at the local school. During such visits we have long and interesting non-chess conversations, and Leon, like many other super-talents, has been confronted with a large number of my logical pranks. But this time I had a different kind of puzzle for him.

Leon knows that I spent part of my early childhood in a hill station resort in Lonavala, India (my German father had set up a herpetological research laboratory in the jungles surrounding the town). Many British families lived in the villas of Lonavala, and we had friends close by, the O'Connell family.

The Keymer Variation - 1.Nf3 d5 2.e3

This video course features the ins and outs of the possible setups Black can choose. You'll learn the key concepts and strategies needed to add this fantastic opening to your repertoire. An easy-to-learn and yet venomous weapon.

Here's the puzzle I gave Leon and his father.

My English aunt Rosie O'Connell, living in the villa in Lonavala, often used to say "There was a bank row". To whom and why?

Leon and Lyndon could not work it out, and after a few days gave up. So I told them the solution, and had the boy rolling on the floor in laughter. After that, I said "There was a bank row" a number of times to him, and he complied! That's a hint.

Naturally I gave the problem to my usual customers, 2600+ and stronger super-talents. One was Gukesh, who to my delight has now, at the age of 17, climbed into the world's top ten bracket. He could not solve it, so I instructed Leon to give him the puzzle again while they were playing in the Turkish League. And he could give him the solution under one condition: he must film Gukesh's reaction. With a little help, Gukesh actually solved the puzzle in their hotel room. The front page thumbnail is from Leon's video of him doing so.

Okay, what is the solution already? Well, I'm not going to tell you now. I will do it in a few days. Mind you, the vast majority of our readers don't have the slightest chance of working it out! There is, however, one group of readers that could, and they will probably react the same way Leon and Gukesh did.

I am switching comments on, but please do not reveal the solution, if you know it, to other readers. I will do that with a wonderful 2700+ video very soon.

Attack like a Super Grandmaster

In this Fritztrainer, "Attack like a Super GM" with Gukesh, we touch upon all aspects of his play, with special emphasis on how you can become a better attacking player.


AlphaFold tool pinpoints protein mutations that cause disease – Nature.com

A patient receives treatment for cystic fibrosis, a disease linked in some cases to missense mutations. Credit: Burger/Phanie/Science Photo Library

Google DeepMind has wielded its revolutionary protein-structure-prediction AI in the hunt for genetic mutations that cause disease.

A new tool based on the AlphaFold network can accurately predict which mutations in proteins are likely to cause health conditions, a challenge that limits the use of genomics in healthcare.

The AI network, called AlphaMissense, is a step forward, say researchers who are developing similar tools, but not necessarily a sea change. It is one of many techniques in development that aim to help researchers, and ultimately physicians, to interpret people's genomes to find the cause of a disease. But tools such as AlphaMissense, which is described in a 19 September paper in Science, will need to undergo thorough testing before they are used in the clinic.


Many of the genetic mutations that directly cause a condition, such as those responsible for cystic fibrosis and sickle-cell disease, tend to change the amino acid sequence of the protein they encode. But researchers have observed only a few million of these single-letter missense mutations. Of the more than 70 million possible in the human genome, only a sliver have been conclusively linked to disease, and most seem to have no ill effect on health.

So when researchers and doctors find a missense mutation they've never seen before, it can be difficult to know what to make of it. To help interpret such "variants of unknown significance", researchers have developed dozens of computational tools that can predict whether a variant is likely to cause disease. AlphaMissense incorporates existing approaches to the problem, which are increasingly being addressed with machine learning.

The network is based on AlphaFold, which predicts a protein structure from an amino-acid sequence. But instead of determining the structural effects of a mutation, an open challenge in biology, AlphaMissense uses AlphaFold's "intuition" about structure to identify where disease-causing mutations are likely to occur within a protein, Pushmeet Kohli, DeepMind's vice-president of research and a study author, said at a press briefing.

AlphaMissense also incorporates a type of neural network called a protein language model, inspired by large language models like ChatGPT but trained on millions of protein sequences instead of words. These have proved adept at predicting protein structures and designing new proteins. "They are useful for variant prediction because they have learned which sequences are plausible and which are not," Žiga Avsec, the DeepMind research scientist who co-led the study, told journalists.


DeepMind's network seems to outperform other computational tools at discerning variants known to cause disease from those that don't. It also does well at spotting problem variants identified in laboratory experiments that measure the effects of thousands of mutations at once. The researchers also used AlphaMissense to create a catalogue of every possible missense mutation in the human genome, determining that 57% are likely to be benign and that 32% may cause disease.

AlphaMissense is an advance over existing tools for predicting the effects of mutations, but not a gigantic leap forward, says Arne Elofsson, a computational biologist at Stockholm University.

Its impact won't be as significant as that of AlphaFold, which ushered in a new era in computational biology, agrees Joseph Marsh, a computational biologist at the MRC Human Genetics Unit in Edinburgh, UK. "It's exciting. It's probably the best predictor we have right now. But will it be the best predictor in two or three years? There's a good chance it won't be."

Computational predictions currently have a minimal role in diagnosing genetic diseases, says Marsh, and recommendations from physicians' groups say that these tools should provide only supporting evidence in linking a mutation to a disease. AlphaMissense confidently classified a much larger proportion of missense mutations than have previous methods, says Avsec. "As these models get better, I think people will be more inclined to trust them."

Yana Bromberg, a bioinformatician at Emory University in Atlanta, Georgia, emphasizes that tools such as AlphaMissense must be rigorously evaluated using good performance metrics before ever being applied in the real world.

For example, an exercise called the Critical Assessment of Genome Interpretation (CAGI) has for years benchmarked the performance of such prediction methods against experimental data that has not yet been released. "It's my worst nightmare to think of a doctor taking a prediction and running with it, as if it's a real thing, without evaluation by entities such as CAGI," Bromberg adds.


Google DeepMind Researcher Discusses Emergence of Reasoning in AI at Harvard ML Foundations Talk | News – Harvard Crimson

Google researcher Denny Zhou discussed the emergence of reasoning in large language models at a Harvard Machine Learning Foundations talk Friday afternoon at the Science and Engineering Complex.

Zhou, the founder and lead of the reasoning team at Google DeepMind, outlined how he trains emerging large language models (LLMs) to shrink the gap between machine and human intelligence.

The lecture, titled "Teach Language Models to Reason," is one of several seminars hosted by the Harvard Machine Learning Foundations Group, which comprises faculty, graduate students, and postdoctoral fellows at the University who research machine learning.

Zhou talked about his approach to investigating reasoning in AI technology, which he started five years ago.

"The first thing I tried was to combine deep-learning models with, first of all, a lot of neurological machines," he said.

The approach is composed of four elements: chain-of-thought, or adding thoughts before a final answer; self-consistency, sampling repeatedly and selecting the most frequent answer; least-to-most, breaking down problems into different parts and solving them individually; and instruction finetuning, calibrating an AI to assess new problems without training.
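
To make one of these concrete, here is a minimal Python sketch of self-consistency. The `sample_answer` helper is hypothetical (nothing like it appears in the talk); it stands in for a real call to an LLM with a chain-of-thought prompt at nonzero temperature.

```python
from collections import Counter

def sample_answer(question: str) -> str:
    """Hypothetical helper: query an LLM with a chain-of-thought prompt
    at temperature > 0 and return only its final answer string."""
    raise NotImplementedError("stand-in for a real model call")

def self_consistency(question: str, n_samples: int = 10) -> str:
    """Sample several independent reasoning paths and keep the answer
    that occurs most often (majority vote over final answers)."""
    answers = [sample_answer(question) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```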

Though "good ideas have really amazing performance," Zhou said AI still has a long way to go in comparison to human thinking.

Zhou said he is unconvinced that AI's integration into modern society will live up to its expectations. Though some might say that superintelligence will emerge in five or 10 years, Zhou said, "I just want to see a self-driving car coming in 10 years, and I cannot imagine that in this moment."

In a post-talk interview, Zhou elaborated that the AI technology behind self-driving cars would be very difficult to scale up because AI data needed for training models are specific to each city, so training models would need to collect data from different cities.

Human intelligence, Zhou said, still surpasses AI capabilities.

"Humans are humans. If you know how to drive cars in one city, you have no problem to drive cars in other cities," he said. "That is very different from the kinds of techniques used to do self-driving cars."

Zhou shared his hopes for the development of LLMs with reasoning capabilities and their contributions to human society.

"I expect lots of AI models will greatly improve our experience of using different softwares," he said.

He cited ChatGPT's ability to write better text and larger models' capacity to help write code.

"Larger models will make our world more productive," Zhou said.

Staff writer Camilla J. Martinez can be reached at camilla.martinez@thecrimson.com.

Staff writer Tiffani A. Mezitis can be reached at tiffani.mezitis@thecrimson.com.


ChatGPT AI is about to be eclipsed by ‘interactive AI’ – The Independent


The current wave of generative AI tools like ChatGPT will soon be surpassed by interactive artificial intelligence, according to AI pioneer Mustafa Suleyman.

The co-founder of DeepMind, which was acquired by Google for $500 million in 2014, said the next generation of AI tools will be "a step change in the history of our species", allowing people not just to obtain information but also to order tasks and services to be carried out on their behalf.

"The first wave of AI was about classification. Deep learning showed that we can train a computer to classify various types of input data: images, video, audio, language. Now we're in the generative wave, where you take that input data and produce new data," Mr Suleyman told MIT Technology Review.

"The third wave will be the interactive phase. That's why I've bet for a long time that conversation is the future interface. You know, instead of just clicking on buttons and typing, you're going to talk to your AI."

This will allow users to ask these AIs to perform tasks for them, which they will carry out by talking with other people and interacting with other AIs.

"That's a huge shift in what technology can do. It's a very, very profound moment in the history of technology that I think many people underestimate," he said.

"Technology today is static. It does, roughly speaking, what you tell it to do. But now technology is going to be animated. It's going to have the potential freedom, if you give it, to take actions. It's truly a step change in the history of our species that we're creating tools that have this kind of, you know, agency."

When questioned about the potential risks of giving artificial intelligence autonomy, Mr Suleyman said it was important to set boundaries for the technology and make sure that it is aligned with human interests.

When Mr Suleyman was still working at DeepMind, his colleagues helped develop what became known as a big red button that would effectively serve as an off switch for rogue AI.


A research paper titled "Safely Interruptible Agents" described how any misbehaving robot could be shut down or overridden by a human operator in order to avoid irreversible consequences.


DeepMind’s Mustafa Suleyman says this is how to police AI – Business Insider

Mustafa Suleyman, the DeepMind cofounder who also founded Inflection AI last year, said we should make sure AI doesn't update its own code without oversight. Inflection AI

The rapid development of AI has raised questions about whether we're programming our own demise. As AI systems become more powerful, they could pose a greater risk to humanity if AI's goals suddenly stop aligning with ours.

To avoid that kind of doomsday scenario, Mustafa Suleyman, the co-founder of Google's AI division DeepMind, said there are certain capabilities we should rule out when it comes to artificial intelligence.

In a recent interview with the MIT Technology Review, Suleyman suggested that we should rule out "recursive self-improvement," which is the ability for AI to make itself better over time.

"You wouldn't want to let your little AI go off and update its own code without you having oversight," he told the MIT Technology Review. "Maybe that should even be a licensed activity you know, just like for handling anthrax or nuclear materials."

And while there's been a considerable focus on AI regulation at an institutional level (just last week, tech execs including Sam Altman, Elon Musk, and Mark Zuckerberg gathered in Washington for a closed-door forum on AI), Suleyman added that it's important for people to set limits around how their personal data is used, too.

"Essentially, it's about setting boundaries, limits that an AI can't cross," he told the MIT Technology Review, "and ensuring that those boundaries create provable safety all the way from the actual code to the way it interacts with other AIs or with humans to the motivations and incentives of the companies creating the technology."

Last year, Suleyman cofounded the AI startup Inflection AI, whose chatbot Pi is designed to be a neutral listener and provide emotional support. Suleyman told the MIT Technology Review that though Pi is not "as spicy" as other chatbots, it is "unbelievably controllable."

And while Suleyman told the MIT Technology Review he's "optimistic" that AI can be effectively regulated, he doesn't seem to be worried about a singular doomsday event. He told the publication that "there's like 101 more practical issues" we should be focusing on, from privacy to bias to facial recognition to online moderation.

Suleyman is just one among several experts in the field sounding off about AI regulation. Demis Hassabis, another DeepMind cofounder, has said that developing artificial general intelligence technologies should be done "in a cautious manner using the scientific method" and involving rigorous experiments and testing.

And Microsoft CEO Satya Nadella has said that the way to avoid "runaway AI" is to make sure we start with using it in categories where humans "unambiguously, unquestionably, are in charge."

Since March, almost 34,000 people, including "godfathers of AI" like Geoffrey Hinton and Yoshua Bengio, have also signed an open letter from the non-profit Future of Life Institute calling for AI labs to pause training on any technology that's more powerful than OpenAI's GPT-4.



DeepMind AI can predict if DNA mutations are likely to be harmful – New Scientist

Google DeepMind's AlphaMissense AI can predict whether mutations will affect how proteins such as haemoglobin subunit beta (left) or cystic fibrosis transmembrane conductance regulator (right) function

Google DeepMind

Artificial intelligence firm Google DeepMind has adapted its AlphaFold system for predicting protein structure to assess whether a huge number of simple mutations are harmful.

The adapted system, called AlphaMissense, has done this for 71 million possible mutations of a kind called missense mutations across the roughly 20,000 human proteins, and has made the results freely available.

"We think this is very helpful for clinicians and human geneticists," says Jun Cheng at Google DeepMind. "Hopefully, this can help them to pinpoint the cause of genetic disease."

Almost everyone is born with between about 50 and 100 mutations not found in their parents, resulting in a huge amount of genetic variation between individuals. For doctors sequencing a persons genome in an attempt to find the cause of a disease, this poses an enormous challenge, because there may be thousands of mutations that could be linked to that condition.

AlphaMissense has been developed to try to predict whether these genetic variants are harmless or might produce a protein linked to a disease.

A protein-coding gene tells a cell which amino acids need to be strung together to make a protein, with each set of three DNA letters coding for an amino acid. The AI focuses on missense mutations, which is when one of the DNA letters in a triplet becomes changed to another letter and can result in the wrong amino acid being added to a protein. Depending on where in the protein this happens, it can result in anything from no effect to a crucial protein no longer working at all.
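
As a concrete illustration (not from the article), here is a tiny Python sketch of how a single DNA letter change swaps one amino acid for another. The GAG-to-GTG change shown is the classic missense mutation in the beta-globin gene associated with sickle-cell disease.

```python
# Minimal codon table covering only the two codons used below.
CODON_TABLE = {
    "GAG": "Glu",  # glutamic acid
    "GTG": "Val",  # valine
}

def translate(codon: str) -> str:
    """Map a three-letter DNA codon to its amino acid."""
    return CODON_TABLE[codon]

# One letter changes (A -> T in the middle position), so the encoded
# amino acid changes too: a missense mutation.
print(translate("GAG"))  # Glu (normal beta-globin, codon 6)
print(translate("GTG"))  # Val (the sickle-cell variant)
```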

People tend to have about 9000 missense mutations each. But the effects of only 0.1 per cent of the 71 million possible missense mutations we could get have been identified so far.

AlphaMissense doesn't attempt to work out how a missense mutation alters the structure or stability of a protein, or what effect this has on its interactions with other proteins, although understanding this could help find treatments. Instead, it compares the sequence of each possible mutated protein to those of all the proteins that AlphaFold was trained on to see if it "looks natural", says Žiga Avsec at Google DeepMind. Proteins that look unnatural are rated as potentially harmful on a scale from 0 to 1.
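
A downstream consumer of such scores might bucket them roughly as follows. This is a minimal sketch only; the cutoffs are illustrative placeholders, not the thresholds DeepMind actually published.

```python
def classify_variant(score: float) -> str:
    """Bucket a 0-to-1 pathogenicity score into a coarse label.
    The 0.3 / 0.7 cutoffs are illustrative, not AlphaMissense's."""
    if score < 0.3:
        return "likely benign"
    if score > 0.7:
        return "likely pathogenic"
    return "uncertain"

print(classify_variant(0.05))  # likely benign
print(classify_variant(0.92))  # likely pathogenic
```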

Pushmeet Kohli at Google DeepMind uses the term "intuition" to describe how it works. "In some sense, this model is leveraging the intuition that it had gained while solving the task of structure prediction," he says.

"It's like if we substitute a word from an English sentence, a person familiar with English can immediately see whether this word substitution will change the meaning of the sentence," says Avsec.

The team says AlphaMissense outperformed other computational methods when tested on known variants.

In an article commenting on the research, Joseph Marsh at the University of Edinburgh, UK, and Sarah Teichmann at the University of Cambridge write that AlphaMissense produced remarkable results in several different tests of its performance, and that it will be helpful for prioritising which possible disease-causing mutations should be investigated further.

However, such systems can still only aid in the diagnosis process, they write.

Missense mutations are just one of many different kinds of mutations. Bits of DNA can also be added, deleted, duplicated, flipped around and so on. And many disease-causing mutations dont alter proteins, but instead occur in nearby sequences involved in regulating the activity of genes.



DeepMind co-founder predicts "third wave" of AI: machines talking to machines and people – TechSpot

Forward-looking: It looks like the initial hype surrounding generative AI is running out of steam. But according to Mustafa Suleyman, one of the cofounders of DeepMind, generative artificial intelligence is just a phase before the next wave: interactive AI, where machines perform multi-step tasks on their own by talking to other AIs and even people.

Suleyman gave his opinion on the state of AI in an interview with MIT Technology Review last week. He said the first wave of AI was classification, with deep learning classifying types of input data such as images and audio. The second, current AI wave is generative, which takes that input data to produce new data.

The next, third wave, according to Suleyman, will be interactive AI. He says that rather than clicking buttons or typing, users will be talking to their AIs, instructing them to take actions. "You will just give it a general, high-level goal and it will use all the tools it has to act on that. They'll talk to other people, talk to other AIs," Suleyman explained.

It's often argued that generative AI is a bit of a misnomer, seeing as these LLM-powered tools don't show intelligence in the same way as humans or animals. But Suleyman suggests that interactive AI will be closer to the artificial intelligence often seen in sci-fi media. Rather than being "static" like today's technology, phase three AI will be animated, able to carry out its instructions with freedom and agency.

"It's a very, very profound moment in the history of technology that I think many people underestimate," he added.

"Hello, I'm here to help with any tasks that need completing"

Earlier this year, Suleyman's company, Inflection AI, released a rival to ChatGPT called Pi, which he says is nicer, politer, and more focused on being conversational.

While generative AI remains an enterprise-changing, multi-billion-dollar industry, a lot of the hype around the tech has cooled in recent times. Web traffic toward the ChatGPT site is down for the third month in a row, while similar tools have reported flat or declining user growth. However, with the technology expanding in scope and advancing all the time, Suleyman's prediction that a third phase of AI will usher in a new era of technology might be more than just hyperbole. It does sound worryingly a bit like Skynet, though.


Generative AI is just a phase: cofounder of Google’s AI division – Business Insider

DeepMind cofounder Mustafa Suleyman. John Phillips/Stringer/Getty

Mustafa Suleyman, a cofounder of Google DeepMind, believes that "generative AI is just a phase", an opinion he shared during an interview with MIT Technology Review published Friday.

"What's next is interactive AI: bots that can carry out tasks you set for them by calling on other software and other people to get stuff done," said Suleyman, who is currently a cofounder and CEO of a new AI startup, Inflection AI.

Suleyman said that interactive AI could be more dynamic and take actions on its own if given permission, in contrast to what he described as the "static" technology of today.

"It's a very, very profound moment in the history of technology that I think many people underestimate," he added.

Suleyman previously predicted that everyone will be able to have AI assistants within the next five years. His company, Inflection AI, launched its chatbot Pi as a rival to ChatGPT in May, focusing on personal advice and being conversational.

For context, we are currently seeing the rise of generative AI tools that go beyond the chat interface popularized by ChatGPT in November.

Investors told Insider in April that the next wave of AI startups would enable developers to construct applications using AI models and integrate them with external data sources.

ChatGPT creator OpenAI also launched a Code Interpreter feature for its chatbot in July, leading Wharton professor Ethan Mollick to say it was "the strongest case yet for a future where AI is a valuable companion for sophisticated knowledge work."

Suleyman's comments come amid fears that the generative AI boom could be overhyped.

Web traffic towards ChatGPT's website fell for the third straight month in August, according to web analytics firm Similarweb.

And investors told the Wall Street Journal in August that translating the AI buzz into effective businesses is harder than it seems, with generative AI tools Jasper and Synthesia seeing flat or declining user growth.

Suleyman and Inflection AI did not immediately respond to requests for comment from Insider, sent outside regular business hours.



DeepMind finds that LLMs can optimize their own prompts – VentureBeat

When people program new deep learning AI models (those that can focus on the right features of data by themselves), the vast majority rely on optimization algorithms, or optimizers, to ensure the models achieve a high enough rate of accuracy. But one of the most commonly used families of optimizers, derivative-based optimizers, runs into trouble handling real-world applications.

In a new paper, researchers from DeepMind propose a new way: Optimization by PROmpting (OPRO), a method that uses AI large language models (LLMs) as optimizers. The unique aspect of this approach is that the optimization task is defined in natural language rather than through formal mathematical definitions.

The researchers write, "Instead of formally defining the optimization problem and deriving the update step with a programmed solver, we describe the optimization problem in natural language, then instruct the LLM to iteratively generate new solutions based on the problem description and the previously found solutions."

The technique is highly adaptable. By simply modifying the problem description or adding specific instructions, users can guide the LLM to solve a wide array of problems.

The researchers found that, on small-scale optimization problems, LLMs can generate effective solutions through prompting alone, sometimes matching or even surpassing the performance of expert-designed heuristic algorithms. However, the true potential of OPRO lies in its ability to optimize LLM prompts to get maximum accuracy from the models.

The process of OPRO begins with a meta-prompt as input. This meta-prompt includes a natural language description of the task at hand, along with a few examples of problems, placeholders for prompt instructions, and corresponding solutions.

As the optimization process unfolds, the large language model (LLM) generates candidate solutions. These are based on the problem description and the previous solutions included in the meta-prompt.

OPRO then evaluates these candidate solutions, assigning each a quality score. Optimal solutions and their scores are added to the meta-prompt, enriching the context for the next round of solution generation. This iterative process continues until the model stops proposing better solutions.
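
The loop just described can be sketched in a few lines of Python. This is an illustrative reconstruction only: `llm_generate` and `evaluate` are hypothetical callables standing in for the optimizer LLM and the task-specific scorer, neither of which is specified in the article.

```python
def opro(task_description: str, llm_generate, evaluate,
         n_rounds: int = 20, top_k: int = 20):
    """Sketch of Optimization by PROmpting (OPRO).

    llm_generate(meta_prompt) -> list of candidate solutions (strings)
    evaluate(candidate)       -> quality score (higher is better)
    """
    scored = []  # (score, solution) pairs retained across rounds
    for _ in range(n_rounds):
        # Meta-prompt = task description + the best solutions so far,
        # listed from lowest to highest score.
        history = "\n".join(f"solution: {s!r} score: {sc}"
                            for sc, s in sorted(scored)[-top_k:])
        meta_prompt = (f"{task_description}\n{history}\n"
                       "Propose a new, better solution.")
        # Score each candidate and enrich the context for next round.
        for candidate in llm_generate(meta_prompt):
            scored.append((evaluate(candidate), candidate))
    return max(scored)[1] if scored else None
```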

"The main advantage of LLMs for optimization is their ability of understanding natural language, which allows people to describe their optimization tasks without formal specifications," the researchers explain.

This means users can specify target metrics such as accuracy while also providing other instructions. For instance, they might request the model to generate solutions that are both concise and broadly applicable.

OPRO also capitalizes on LLMs' ability to detect in-context patterns. This enables the model to identify an optimization trajectory based on the examples included in the meta-prompt. The researchers note, "Including optimization trajectory in the meta-prompt allows the LLM to identify similarities of solutions with high scores, encouraging the LLM to build upon existing good solutions to construct potentially better ones without the need of explicitly defining how the solution should be updated."

To validate the effectiveness of OPRO, the researchers tested it on two well-known mathematical optimization problems: linear regression and the traveling salesman problem. While OPRO might not be the most optimal way to solve these problems, the results were promising.

"On both tasks, we see LLMs properly capture the optimization directions on small-scale problems merely based on the past optimization trajectory provided in the meta-prompt," the researchers report.

Experiments show that prompt engineering can dramatically affect the output of a model. For instance, appending the phrase "let's think step by step" to a prompt can coax the model into a semblance of reasoning, causing it to outline the steps required to solve a problem. This can often lead to more accurate results.

However, it's crucial to remember that this doesn't imply LLMs possess human-like reasoning abilities. Their responses are highly dependent on the format of the prompt, and semantically similar prompts can yield vastly different results. The DeepMind researchers write, "Optimal prompt formats can be model-specific and task-specific."

The true potential of Optimization by PROmpting lies in its ability to optimize prompts for LLMs like OpenAI's ChatGPT and Google's PaLM. It can guide these models to find the best prompt that maximizes task accuracy.

"OPRO enables the LLM to gradually generate new prompts that improve the task accuracy throughout the optimization process, where the initial prompts have low task accuracies," they write.

To illustrate this, consider the task of finding the optimal prompt to solve word-math problems. An optimizer LLM is provided with a meta-prompt that includes instructions and examples with placeholders for the optimization prompt (e.g., "Let's think step by step"). The model generates a set of different optimization prompts and passes them on to a scorer LLM. This scorer LLM tests them on problem examples and evaluates the results. The best prompts, along with their scores, are added to the beginning of the meta-prompt, and the process is repeated.
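
Put together, a meta-prompt for this word-math setting might look roughly like the sketch below. The wording, scores, and example problem are hypothetical reconstructions of the format the article describes, not text from the paper.

```python
# Hypothetical meta-prompt; the instructions, scores, and example
# problem below are illustrative, not taken from the DeepMind paper.
meta_prompt = """\
Your task is to write an instruction that, inserted at <INS>, helps
a model solve grade-school math word problems correctly.

Previous instructions and their scores, from worst to best:
text: "Let's solve the problem."      score: 61
text: "Let's think step by step."     score: 72

Example problem:
Q: <INS> A shop sells pencils for 25 cents each. How much do 8 cost?

Write a new instruction, different from those above, that will
achieve a higher score.
"""
print(meta_prompt)
```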

The researchers evaluated this technique using several LLMs from the PaLM and GPT families. They found that "all LLMs in our evaluation are able to serve as optimizers, which consistently improve the performance of the generated prompts through iterative optimization until convergence."

For example, when testing OPRO with PaLM-2 on GSM8K, a benchmark of grade-school math word problems, the model produced intriguing results. It began with the prompt "Let's solve the problem", and generated other strings, such as "Let's think carefully about the problem and solve it together", "Let's break it down", "Let's calculate our way to the solution", and finally "Let's do the math", which provided the highest accuracy.

In another experiment, the most accurate result was generated when the string "Take a deep breath and work on this problem step-by-step" was added before the LLM's answer.

These results are both fascinating and somewhat disconcerting. To a human, all these instructions would carry the same meaning, but they triggered very different behavior in the LLM. This serves as a caution against anthropomorphizing LLMs and highlights how much we still have to learn about their inner workings.

However, the advantage of OPRO is clear. It provides a systematic way to explore the vast space of possible LLM prompts and find the one that works best for a specific type of problem. How it will hold up in real-world applications remains to be seen, but this research can be a step forward toward our understanding of how LLMs work.



DeepMind's cofounder: Generative AI is just a phase. What's next is interactive AI. – MIT Technology Review

The magazine I worked for at the time was about to publish an article claiming that DeepMind had failed to comply with data protection regulations when accessing records from some 1.6 million patients to set up those collaborations, a claim later backed up by a government investigation. Suleyman couldn't see why we would publish a story that was hostile to his company's efforts to improve health care. As long as he could remember, he told me at the time, he'd only wanted to do good in the world.

In the seven years since that call, Suleyman's wide-eyed mission hasn't shifted an inch. "The goal has never been anything but how to do good in the world," he says via Zoom from his office in Palo Alto, where the British entrepreneur now spends most of his time.

Suleyman left DeepMind and moved to Google to lead a team working on AI policy. In 2022 he founded Inflection, one of the hottest new AI firms around, backed by $1.5 billion of investment from Microsoft, Nvidia, Bill Gates, and LinkedIn founder Reid Hoffman. Earlier this year he released a ChatGPT rival called Pi, whose unique selling point (according to Suleyman) is that it is pleasant and polite. And he just coauthored a book about the future of AI with writer and researcher Michael Bhaskar, called The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma.

Many will scoff at Suleyman's brand of techno-optimism, even naïveté. Some of his claims about the success of online regulation feel way off the mark, for example. And yet he remains earnest and evangelical in his convictions.

It's true that Suleyman has an unusual background for a tech multi-millionaire. When he was 19 he dropped out of university to set up Muslim Youth Helpline, a telephone counseling service. He also worked in local government. He says he brings many of the values that informed those efforts with him to Inflection. The difference is that now he just might be in a position to make the changes he's always wanted to, for good or not.

The following interview has been edited for length and clarity.

Your early career, with the youth helpline and local government work, was about as unglamorous and un-Silicon Valley as you can get. Clearly, that stuff matters to you. You've since spent 15 years in AI and this year cofounded your second billion-dollar AI company. Can you connect the dots?

I've always been interested in power, politics, and so on. You know, human rights principles are basically trade-offs, a constant ongoing negotiation between all these different conflicting tensions. I could see that humans were wrestling with that; we're full of our own biases and blind spots. Activist work, local, national, international government, et cetera: it's all just slow and inefficient and fallible.

Imagine if you didn't have human fallibility. I think it's possible to build AIs that truly reflect our best collective selves and will ultimately make better trade-offs, more consistently and more fairly, on our behalf.

And that's still what motivates you?

I mean, of course, after DeepMind I never had to work again. I certainly didn't have to write a book or anything like that. Money has never ever been the motivation. It's always, you know, just been a side effect.

For me, the goal has never been anything but how to do good in the world and how to move the world forward in a healthy, satisfying way. Even back in 2009, when I started looking at getting into technology, I could see that AI represented a fair and accurate way to deliver services in the world.

I can't help thinking that it was easier to say that kind of thing 10 or 15 years ago, before we'd seen many of the downsides of the technology. How are you able to maintain your optimism?

I think that we are obsessed with whether you're an optimist or whether you're a pessimist. This is a completely biased way of looking at things. I don't want to be either. I want to coldly stare in the face of the benefits and the threats. And from where I stand, we can very clearly see that with every step up in the scale of these large language models, they get more controllable.

So two years ago, the conversation (wrongly, I thought at the time) was "Oh, they're just going to produce toxic, regurgitated, biased, racist screeds." I was like, this is a snapshot in time. I think that what people lose sight of is the progression year after year, and the trajectory of that progression.

Now we have models like Pi, for example, which are unbelievably controllable. You can't get Pi to produce racist, homophobic, sexist, any kind of toxic stuff. You can't get it to coach you to produce a biological or chemical weapon or to endorse your desire to go and throw a brick through your neighbor's window. You can't do it.

Hang on. Tell me how you've achieved that, because that's usually understood to be an unsolved problem. How do you make sure your large language model doesn't say what you don't want it to say?

Yeah, so obviously I don't want to make the claim. You know, please try and do it! Pi is live and you should try every possible attack. None of the jailbreaks, prompt hacks, or anything work against Pi. I'm not making a claim. It's an objective fact.

On the how, I mean, like, I'm not going to go into too many details because it's sensitive. But the bottom line is, we have one of the strongest teams in the world, who have created all the largest language models of the last three or four years. Amazing people, in an extremely hardworking environment, with vast amounts of computation. We made safety our number one priority from the outset, and as a result, Pi is not so spicy as other companies' models.

Look at Character.ai. [Character is a chatbot for which users can craft different personalities and share them online for others to chat with.] It's mostly used for romantic role-play, and we just said from the beginning that was off the table; we won't do it. If you try to say "Hey, darling" or "Hey, cutie" or something to Pi, it will immediately push back on you.

But it will be incredibly respectful. If you start complaining about immigrants in your community taking your jobs, Pi's not going to call you out and wag a finger at you. Pi will inquire and be supportive and try to understand where that comes from and gently encourage you to empathize. You know, values that I've been thinking about for 20 years.

Talking of your values and wanting to make the world better, why not share how you did this so that other people could improve their models too?

Well, because I'm also a pragmatist and I'm trying to make money. I'm trying to build a business. I've just raised $1.5 billion and I need to pay for those chips.

Look, the open-source ecosystem is on fire and doing an amazing job, and people are discovering similar tricks. I always assume that I'm only ever six months ahead.

Let's bring it back to what you're trying to achieve. Large language models are obviously the technology of the moment. But why else are you betting on them?

The first wave of AI was about classification. Deep learning showed that we can train a computer to classify various types of input data: images, video, audio, language. Now we're in the generative wave, where you take that input data and produce new data.

The third wave will be the interactive phase. That's why I've bet for a long time that conversation is the future interface. You know, instead of just clicking on buttons and typing, you're going to talk to your AI.

And these AIs will be able to take actions. You will just give it a general, high-level goal and it will use all the tools it has to act on that. They'll talk to other people, talk to other AIs. This is what we're going to do with Pi.

That's a huge shift in what technology can do. It's a very, very profound moment in the history of technology that I think many people underestimate. Technology today is static. It does, roughly speaking, what you tell it to do.

But now technology is going to be animated. It's going to have the potential freedom, if you give it, to take actions. It's truly a step change in the history of our species that we're creating tools that have this kind of, you know, agency.

That's exactly the kind of talk that gets a lot of people worried. You want to give machines autonomy, a kind of agency, to influence the world, and yet we also want to be able to control them. How do you balance those two things? It feels like there's a tension there.

Yeah, that's a great point. That's exactly the tension.

The idea is that humans will always remain in command. Essentially, it's about setting boundaries, limits that an AI can't cross. And ensuring that those boundaries create provable safety all the way from the actual code to the way it interacts with other AIs, or with humans, to the motivations and incentives of the companies creating the technology. And we should figure out how independent institutions or even governments get direct access to ensure that those boundaries aren't crossed.

Who sets these boundaries? I assume they'd need to be set at a national or international level. How are they agreed on?

I mean, at the moment they're being floated at the international level, with various proposals for new oversight institutions. But boundaries will also operate at the micro level. You're going to give your AI some bounded permission to process your personal data, to give you answers to some questions but not others.

In general, I think there are certain capabilities that we should be very cautious of, if not just rule out, for the foreseeable future.

Such as?

I guess things like recursive self-improvement. You wouldn't want to let your little AI go off and update its own code without you having oversight. Maybe that should even be a licensed activity, you know, just like for handling anthrax or nuclear materials.

Or, like, we have not allowed drones in any public spaces, right? It's a licensed activity. You can't fly them wherever you want, because they present a threat to people's privacy.

I think everybody is having a complete panic that we're not going to be able to regulate this. It's just nonsense. We're totally going to be able to regulate it. We'll apply the same frameworks that have been successful previously.

But you can see drones when they're in the sky. It feels naïve to assume companies are just going to reveal what they're making. Doesn't that make regulation tricky to get going?

We've regulated many things online, right? The amount of fraud and criminal activity online is minimal. We've done a pretty good job with spam. You know, in general, [the problem of] revenge porn has got better, even though that was in a bad place three to five years ago. It's pretty difficult to find radicalization content or terrorist material online. It's pretty difficult to buy weapons and drugs online.

[Not all Suleyman's claims here are backed up by the numbers. Cybercrime is still a massive global problem. The financial cost in the US alone has increased more than 100 times in the last decade, according to some estimates. Reports show that the economy in nonconsensual deepfake porn is booming. Drugs and guns are marketed on social media. And while some online platforms are being pushed to do a better job of filtering out harmful content, they could do a lot more.]

So it's not like the internet is this unruly space that isn't governed. It is governed. And AI is just going to be another component to that governance.

It takes a combination of cultural pressure, institutional pressure, and, obviously, government regulation. But it makes me optimistic that we've done it before, and we can do it again.

Controlling AI will be an offshoot of internet regulation: that's a far more upbeat note than the one we've heard from a number of high-profile doomers lately.

I'm very wide-eyed about the risks. There's a lot of dark stuff in my book. I definitely see it too. I just think that the existential-risk stuff has been a completely bonkers distraction. There's like 101 more practical issues that we should all be talking about, from privacy to bias to facial recognition to online moderation.

We should just refocus the conversation on the fact that we've done an amazing job of regulating super complex things. Look at the Federal Aviation Administration: it's incredible that we all get in these tin tubes at 40,000 feet and it's one of the safest modes of transport ever. Why aren't we celebrating this? Or think about cars: every component is stress-tested within an inch of its life, and you have to have a license to drive it.

Some industries, like airlines, did a good job of regulating themselves to start with. They knew that if they didn't nail safety, everyone would be scared and they would lose business.

But you need top-down regulation too. I love the nation-state. I believe in the public interest, I believe in the good of tax and redistribution, I believe in the power of regulation. And what I'm calling for is action on the part of the nation-state to sort its shit out. Given what's at stake, now is the time to get moving.
