
Morocco: Tangier Open promoting chess dreams among the young – Africanews English

Chess strategists, novices and enthusiasts alike gathered in Tangier for the OPEN DE TANGER Chess Tournament.

The event brought together some 100 top players, all determined to win the top prize in a merciless competition.

"Today, over 100 players took part in our tournament. Tournaments are an opportunity for players to test and evaluate themselves. For me, this is essential," said Oubai Ziani, co-founder of On Chess Coaching.

For Mohamed Amin El Fezari, who has turned pro in the sport, the aim is to improve his international ranking and win several titles. "But unfortunately, in Morocco, we don't have any official tournaments," laments the young player.

Behind this celebration lies a complex reality. Tensions, organizational problems and a lack of support have all taken their toll on the sport.

Zoheir Slami, an international referee, admits "there's a lack of a federation run by competent people."

The organizers are striving to promote healthy competition in the chess world to enable young talent to shine on an international scale.

The OPEN DE TANGER Chess Tournament is crucial to promoting this discipline, which is still too neglected by public authorities and sponsors.

Read the rest here:
Morocco: Tangier Open promoting chess dreams among the young - Africanews English

Read More..

Bitcoin in danger as we are ‘one chess move away’ from ‘big problems’ – Finbold – Finance in Bold

When Satoshi Nakamoto designed Bitcoin (BTC), they based its security on a consensus mechanism called Proof-of-Work (PoW). However, given the current state of its consensus decentralization, Bitcoin's security might be in peril.

At least, this is what the crypto researcher Chris Blec thinks, according to a post on X on December 14:

"This is not a good chart. 2 mining pools (both of them force all miners to KYC) comprise 55% of the Bitcoin hash rate. We could be one chess move away from some big problems for Bitcoin. But even worse is the fact that nobody really wants to talk about it. Where's the urgency?"

Notably, the mentioned chart shows Foundry USA and AntPool with 27.6% of Bitcoin's global hashrate each. Both are cooperatives of Bitcoin mining companies seeking to improve their block discovery and, consequently, their profits.

More than just pointing to the worrying dominance of two pools over Bitcoin's consensus, Chris Blec also calls the community out on a lack of awareness about this problem. Interestingly, Finbold reported on it on September 23: "This is how centralized Bitcoin mining has become over the years."

As it is today, Bitcoin mining pools are centralized around the pool's coordinator. It is the coordinator who creates the block template (adding transactions from the mempool), filters unwanted transactions, discovers the next block using the miners' hashrate, broadcasts the mined block to the network, collects the mining reward, and distributes it proportionally to the miners.
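The final step, proportional distribution, can be sketched in a few lines. This is an illustrative simplification, not any real pool's payout scheme (real pools use PPS/PPLNS variants and deduct fees); the miner names and hash rates are hypothetical:

```python
def distribute_reward(block_reward, miner_hashrates):
    """Split a block reward among miners in proportion to contributed
    hash rate, as a pool coordinator would (simplified: no pool fee,
    no pay-per-share scheme)."""
    total = sum(miner_hashrates.values())
    return {m: block_reward * h / total for m, h in miner_hashrates.items()}

# Hypothetical miners with hash rates in TH/s; carol contributes 60%
# of the pool's hash rate, so she receives 60% of the reward.
payouts = distribute_reward(6.25, {"alice": 100, "bob": 300, "carol": 600})
```

The point of the article is that this bookkeeping happens inside a single entity, which is what makes arbitrary non-payment possible.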

Therefore, it is this single entity that performs the relevant actions that directly impact Bitcoin's security. Recently, we have seen mining pools arbitrarily deciding not to pay their miners on specific occasions, in particular F2Pool and AntPool, with 8.8% and 27.6% of the global block discovery rate, respectively.

In yet another episode, a newly created Bitcoin mining pool was also accused of deliberately filtering privacy-related BTC transactions.

Essentially, these events evidence the importance of having a decentralized pool-based consensus. Moreover, a Bitcoin Core developer has explained why BTC transactions should wait two hours to be considered safe, a guideline also affected by the current state of low decentralization.

All things considered, Bitcoin's value proposition is directly related to its security and decentralization. It is possible that the current state of the network could interfere with the market's perception of the value of the leading cryptocurrency.

Disclaimer: The content on this site should not be considered investment advice. Investing is speculative. When investing, your capital is at risk.

Read the rest here:
Bitcoin in danger as we are 'one chess move away' from 'big problems' - Finbold - Finance in Bold

Read More..

AI scientists make exciting discovery using chatbots to solve maths problems – The Guardian

Science

Breakthrough suggests technology behind ChatGPT and Bard can generate information that goes beyond human knowledge

Artificial intelligence researchers claim to have made the world's first scientific discovery using a large language model, a breakthrough that suggests the technology behind ChatGPT and similar programs can generate information that goes beyond human knowledge.

The finding emerged from Google DeepMind, where scientists are investigating whether large language models, which underpin modern chatbots such as OpenAI's ChatGPT and Google's Bard, can do more than repackage information learned in training and come up with new insights.

"When we started the project there was no indication that it would produce something that's genuinely new," said Pushmeet Kohli, the head of AI for science at DeepMind. "As far as we know, this is the first time that a genuine, new scientific discovery has been made by a large language model."

Large language models, or LLMs, are powerful neural networks that learn the patterns of language, including computer code, from vast amounts of text and other data. Since the whirlwind arrival of ChatGPT last year, the technology has debugged faulty software and churned out everything from college essays and travel itineraries to poems about climate change in the style of Shakespeare.

But while the chatbots have proved extremely popular, they do not generate new knowledge and are prone to confabulation, leading to answers that, in keeping with the best pub bores, are fluent and plausible but badly flawed.

To build FunSearch, short for "searching in the function space", DeepMind harnessed an LLM to write solutions to problems in the form of computer programs. The LLM is paired with an evaluator that automatically ranks the programs by how well they perform. The best programs are then combined and fed back to the LLM to improve on. This drives the system to steadily evolve poor programs into more powerful ones that can discover new knowledge.
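The evolve-and-evaluate loop described above can be sketched as follows. This is a toy stand-in, not DeepMind's implementation: the "LLM" is replaced by a random mutation over integers, and the evaluator simply rewards closeness to a target value.

```python
import random

def evolve(seed_program, mutate, evaluate, pool_size=10, iterations=200):
    """Skeleton of a FunSearch-style loop: keep a pool of candidates,
    ask the "LLM" (here: a mutation function) to build on a good one,
    score the result with an automatic evaluator, and keep the best."""
    pool = [seed_program]
    for _ in range(iterations):
        parent = max(random.sample(pool, min(3, len(pool))), key=evaluate)
        child = mutate(parent)  # stand-in for the LLM rewrite step
        pool.append(child)
        pool = sorted(pool, key=evaluate, reverse=True)[:pool_size]
    return max(pool, key=evaluate)

# Toy demo: "programs" are integers and the evaluator rewards closeness to 42.
random.seed(0)  # reproducible run
best = evolve(
    seed_program=0,
    mutate=lambda p: p + random.randint(-3, 5),
    evaluate=lambda p: -abs(p - 42),
)
```

In the real system the candidates are programs, the mutation step is an LLM prompted with the best programs so far, and the evaluator runs the programs against the problem instance.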

The researchers set FunSearch loose on two puzzles. The first was a longstanding and somewhat arcane challenge in pure mathematics known as the cap set problem. It deals with finding the largest set of points in space where no three points form a straight line. FunSearch churned out programs that generate new large cap sets that go beyond the best that mathematicians have come up with.
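The cap set property itself is what makes the problem machine-checkable: in (Z/3Z)^n, three distinct points are collinear exactly when they sum to zero in every coordinate mod 3. A small verifier, for illustration only:

```python
from itertools import combinations

def is_cap_set(points, q=3):
    """A cap set in (Z/3Z)^n contains no three distinct collinear points;
    over F_3, three distinct points are collinear iff they sum to 0 mod 3
    in every coordinate."""
    for a, b, c in combinations(points, 3):
        if all((x + y + z) % q == 0 for x, y, z in zip(a, b, c)):
            return False
    return True

# Dimension-2 example: these four points form a cap set,
# but adding (2, 2) creates the line (0,0), (1,1), (2,2).
assert is_cap_set([(0, 0), (0, 1), (1, 0), (1, 1)])
assert not is_cap_set([(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)])
```

This kind of fast, exact check is what FunSearch's evaluator relies on: any candidate construction can be verified mechanically, even when finding it is hard.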

The second puzzle was the bin packing problem, which looks for the best ways to pack items of different sizes into containers. While it applies to physical objects, such as the most efficient way to arrange boxes in a shipping container, the same maths applies in other areas, such as scheduling computing jobs in datacentres. The problem is typically solved by either packing items into the first bin that has room, or into the bin with the least available space where the item will still fit. FunSearch found a better approach that avoided leaving small gaps that were unlikely ever to be filled, according to results published in Nature.
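The two baseline heuristics the article describes, first-fit and best-fit, can be sketched as follows. This is an illustration of the standard baselines FunSearch improved on, not DeepMind's code; the item sizes are made up:

```python
def first_fit(items, capacity):
    """Place each item into the first bin with room; open a new bin otherwise."""
    bins = []
    for item in items:
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:  # no existing bin fits
            bins.append([item])
    return bins

def best_fit(items, capacity):
    """Place each item into the bin with the least leftover space that still fits it."""
    bins = []
    for item in items:
        candidates = [b for b in bins if sum(b) + item <= capacity]
        if candidates:
            min(candidates, key=lambda b: capacity - sum(b)).append(item)
        else:
            bins.append([item])
    return bins

items = [4, 8, 1, 4, 2, 1]
n_first = len(first_fit(items, 10))
n_best = len(best_fit(items, 10))
```

FunSearch's contribution was to evolve a scoring rule that beats both heuristics on the benchmarks in the Nature paper, by avoiding the small residual gaps these greedy rules tend to leave.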

"In the last two or three years there have been some exciting examples of human mathematicians collaborating with AI to obtain advances on unsolved problems," said Sir Tim Gowers, professor of mathematics at Cambridge University, who was not involved in the research. "This work potentially gives us another very interesting tool for such collaborations, enabling mathematicians to search efficiently for clever and unexpected constructions. Better still, these constructions are humanly interpretable."

Researchers are now exploring the range of scientific problems FunSearch can handle. A major limiting factor is that the problems need to have solutions that can be verified automatically, which rules out many questions in biology, where hypotheses often need to be tested with lab experiments.

The more immediate impact may be for computer programmers. For the past 50 years, coding has largely improved through humans creating ever more specialised algorithms. "This is actually going to be transformational in how people approach computer science and algorithmic discovery," said Kohli. "For the first time, we're seeing LLMs not taking over, but definitely assisting in pushing the boundaries of what is possible in algorithms."

Jordan Ellenberg, professor of mathematics at the University of Wisconsin-Madison, and co-author on the paper, said: "What I find really exciting, even more so than the specific results we found, is the prospects it suggests for the future of human-machine interaction in math."

"Instead of generating a solution, FunSearch generates a program that finds the solution. A solution to a specific problem might give me no insight into how to solve other related problems. But a program that finds the solution, that's something a human being can read and interpret and hopefully thereby generate ideas for the next problem and the next and the next."


Original post:
AI scientists make exciting discovery using chatbots to solve maths problems - The Guardian

Read More..

DeepMind’s AI finds new solution to decades-old math puzzle outsmarting humans – TNW

DeepMind has used a large language model (LLM) to generate a novel solution to one of humanity's toughest math problems in a breakthrough that could herald a new era in AI development.

The model, known as FunSearch, discovered a solution to the so-called cap set puzzle. The decades-old math conundrum essentially comes down to how many dots you can jot down on a page while drawing lines between them, without three of them ever forming a straight line.

If that gave you a migraine, don't worry. What's important to note is that the problem has never been solved, and researchers have only ever found solutions for small dimensions. Until now.

FunSearch successfully discovered new constructions for large cap sets that far exceeded the best-known ones. While the LLM didnt solve the cap set problem once and for all (contrary to some of the news headlines swirling around), it did find facts new to science.

"To the best of our knowledge, this shows the first scientific discovery, a new piece of verifiable knowledge about a notorious scientific problem, using an LLM," wrote the researchers in a paper published in Nature this week.

In previous experiments, researchers have used large language models to solve maths problems with known solutions.

FunSearch works by combining a pre-trained LLM, in this case a version of Google's PaLM 2, with an automated evaluator. This fact-checker guards against the production of false information.

LLMs have been shown to regularly produce so-called hallucinations, basically when they just make shit up and present it as fact. This has, naturally, limited their usefulness in making verifiable scientific discoveries. However, researchers at the London-based lab claim that the use of an in-built fact-checker makes FunSearch different.

FunSearch engages in a continuous back-and-forth dance between the LLM and the evaluator. This process transforms initial solutions into new knowledge.

What also makes the tool quite promising for scientists is that it outputs programs that reveal how its solutions are constructed, rather than just what the solutions are.

"We hope this can inspire further insights in the scientists who use FunSearch, driving a virtuous cycle of improvement and discovery," said the researchers.

Go here to see the original:
DeepMind's AI finds new solution to decades-old math puzzle outsmarting humans - TNW

Read More..

DeepMind AI with built-in fact-checker makes mathematical discoveries – New Scientist

DeepMind's FunSearch AI can tackle mathematical problems

alengo/Getty Images

Google DeepMind claims to have made the first ever scientific discovery with an AI chatbot by building a fact-checker to filter out useless outputs, leaving only reliable solutions to mathematical or computing problems.

Previous DeepMind achievements, such as using AI to predict the weather or protein shapes, have relied on models created specifically for the task at hand, trained on accurate and specific data. Large language models (LLMs), such as GPT-4 and Google's Gemini, are instead trained on vast amounts of varied data to create a breadth of abilities. But that approach also makes them susceptible to hallucination, a term researchers use for producing false outputs.

Gemini, which was released earlier this month, has already demonstrated a propensity for hallucination, getting even simple facts such as the winners of this year's Oscars wrong. Google's previous AI-powered search engine even made errors in the advertising material for its own launch.

One common fix for this phenomenon is to add a layer above the AI that verifies the accuracy of its outputs before passing them to the user. But creating a comprehensive safety net is an enormously difficult task given the broad range of topics that chatbots can be asked about.

Alhussein Fawzi at Google DeepMind and his colleagues have created a generalised LLM called FunSearch, based on Google's PaLM 2 model, with a fact-checking layer, which they call an evaluator. The model is constrained to providing computer code that solves problems in mathematics and computer science, which DeepMind says is a much more manageable task because these new ideas and solutions are inherently and quickly verifiable.

The underlying AI can still hallucinate and provide inaccurate or misleading results, but the evaluator filters out erroneous outputs and leaves only reliable, potentially useful concepts.

"We think that perhaps 90 per cent of what the LLM outputs is not going to be useful," says Fawzi. "Given a candidate solution, it's very easy for me to tell you whether this is actually a correct solution and to evaluate the solution, but actually coming up with a solution is really hard. And so mathematics and computer science fit particularly well."

DeepMind claims the model can generate new scientific knowledge and ideas, something LLMs haven't done before.

To start with, FunSearch is given a problem and a very basic solution in source code as an input, then it generates a database of new solutions that are checked by the evaluator for accuracy. The best of the reliable solutions are given back to the LLM as inputs with a prompt asking it to improve on the ideas. DeepMind says the system produces millions of potential solutions, which eventually converge on an efficient result, sometimes surpassing the best known solution.
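The evaluator's filtering role can be sketched in isolation. In this toy version (not DeepMind's code), "programs" are plain Python functions, candidates that crash are silently discarded, and the survivors are ranked by a score function:

```python
def evaluate_candidates(candidates, score, keep=3):
    """Sketch of the evaluator's job: run every candidate "program",
    drop the ones that crash (hallucinated, unrunnable code), and
    return the highest-scoring survivors."""
    scored = []
    for program in candidates:
        try:
            scored.append((score(program), program))
        except Exception:
            continue  # broken candidates are filtered out, not fixed
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [program for _, program in scored[:keep]]

# Toy candidates meant to double a number; the last one always crashes.
cands = [lambda x: x * 2, lambda x: x + 2, lambda x: x / 0]
best = evaluate_candidates(cands, score=lambda f: -abs(f(10) - 20))
```

Only the surviving, ranked candidates would then be fed back to the LLM for the next round of improvement.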

For mathematical problems, the model writes computer programs that can find solutions rather than trying to solve the problem directly.

Fawzi and his colleagues challenged FunSearch to find solutions to the cap set problem, which involves determining patterns of points where no three points make a straight line. The problem gets rapidly more computationally intensive as the number of points grows. The AI found a solution consisting of 512 points in eight dimensions, larger than any previously known.

When tasked with the bin-packing problem, where the aim is to efficiently place objects of various sizes into containers, FunSearch found solutions that outperform commonly used algorithms, a result that has immediate applications for transport and logistics companies. DeepMind says FunSearch could lead to improvements in many more mathematical and computing problems.

Mark Lee at the University of Birmingham, UK, says the next breakthroughs in AI won't come from scaling up LLMs to ever-larger sizes, but from adding layers that ensure accuracy, as DeepMind has done with FunSearch.

"The strength of a language model is its ability to imagine things, but the problem is hallucinations," says Lee. "And this research is breaking that problem: it's reining it in, or fact-checking. It's a neat idea."

Lee says AIs shouldn't be criticised for producing large amounts of inaccurate or useless outputs, as this is not dissimilar to the way that human mathematicians and scientists operate: brainstorming ideas, testing them and following up on the best ones while discarding the worst.


Here is the original post:
DeepMind AI with built-in fact-checker makes mathematical discoveries - New Scientist

Read More..

Google’s DeepMind creates generative AI model with fact checker to crack unsolvable math problem – SiliconANGLE News

Google LLC's DeepMind artificial intelligence research unit claims to have cracked an unsolvable math problem using a large language model-based chatbot equipped with a fact-checker to filter out useless outputs.

By using a filter, DeepMind researchers say the LLM can generate millions of responses, but only submit the ones that can be verified as accurate.

It's a milestone achievement, as previous DeepMind breakthroughs have generally relied on AI models that were specifically created to solve the task at hand, such as predicting weather or designing new protein shapes. Those models were trained on very accurate and specific datasets, which makes them quite different from LLMs such as OpenAI's GPT-4 or Google's Gemini.

Those LLMs are trained on vast and varied datasets, enabling them to perform a wide range of tasks and talk about almost any subject. But the approach carries risks, as LLMs are susceptible to so-called hallucinations, which is the term for producing false outputs.

Hallucinations are a big problem for LLMs. Gemini, which was only released this month and is said to be Google's most capable LLM ever, has already shown it's vulnerable, inaccurately answering fairly simple questions such as who won this year's Oscars.

Researchers believe that hallucinations can be fixed by adding a layer above the AI model that verifies the accuracy of its outputs before passing them onto users. But this kind of safety net is tricky to build when LLMs have been trained to discuss such a wide range of topics.

At DeepMind, Alhussein Fawzi and his team members created a generalized LLM called FunSearch, which is based on Google's PaLM 2 model. They added a fact-checking layer, called an evaluator. In this case, FunSearch has been geared to solving only math and computer science problems by generating computer code. According to DeepMind, this makes it easier to create a fact-checking layer, because its outputs can be rapidly verified.

Although the FunSearch model is still susceptible to hallucinations and generating inaccurate or misleading results, the evaluator can easily filter them out, and ensure the user only receives reliable outputs.

"We think that perhaps 90% of what the LLM outputs is not going to be useful," Fawzi said. "Given a candidate solution, it's very easy for me to tell you whether this is actually a correct solution and to evaluate the solution, but actually coming up with a solution is really hard. And so mathematics and computer science fit particularly well."

According to Fawzi, FunSearch is able to generate new scientific knowledge and ideas, which is a new milestone for LLMs.

The researchers tested its abilities by giving it a problem, plus a very basic solution in source code, as an input. Then, the model generated a database of new solutions that were checked by the evaluator for their accuracy. The most reliable of those solutions are then fed back into the LLM as inputs, together with a prompt asking it to improve on its ideas. According to Fawzi, by doing it this way, FunSearch produces millions of potential solutions that eventually converge to create the most efficient result.

When tasked with mathematical problems, FunSearch writes computer code that can find the solution, rather than trying to tackle it directly.

Fawzi and his team tasked FunSearch with finding a solution to the cap set problem, which involves determining patterns in points, where no three points make a straight line. As the number of points grows, the problem becomes vastly more complex.

However, FunSearch was able to create a solution consisting of 512 points across eight dimensions, which is larger than any human mathematician has managed. The results of the experiment were published in the journal Nature.

Although most people are unlikely ever to come across the cap set problem, let alone attempt to solve it, it's an important achievement. Even the best human mathematicians do not agree on the best way to solve this challenge. According to Terence Tao, a professor at the University of California, Los Angeles, who describes the cap set problem as his favorite open question, FunSearch is an extremely promising paradigm, since it can potentially be applied to many other math problems.

FunSearch proved as much when tasked with the bin-packing problem, where the goal is to efficiently place objects of different sizes into the fewest possible containers. Fawzi said FunSearch was able to find solutions that outperform the best algorithms created to solve this particular problem. Its results could have significant implications in industries such as transport and logistics.

FunSearch is also notable because, unlike other LLMs, users can actually see how it goes about generating its outputs, meaning they can learn from it. This sets it apart from systems where the AI is more akin to a black box.


The rest is here:
Google's DeepMind creates generative AI model with fact checker to crack unsolvable math problem - SiliconANGLE News

Read More..

Google DeepMind announces the FunSearch training method – Gizchina.com

Google DeepMind has introduced a new method called FunSearch. This method uses Large Language Models (LLMs) to search for new solutions in mathematics and computer science. The method is described in a paper published in Nature. FunSearch is an evolutionary method that promotes and develops the highest-scoring ideas, which are expressed as computer programs. The running and evaluation of these programs are automatic. The system selects some programs from the current pool of programs, which are fed to an LLM. The LLM creatively builds upon these and generates new programs, which are automatically evaluated. The best ones are added back to the pool of existing programs, creating a self-improving loop. FunSearch uses Google's PaLM 2, but it is compatible with other LLMs trained on code.

According to Google, FunSearch can tackle cap set problems and a series of complex problems involving mathematics and computer science. The FunSearch training method mainly introduces an evaluator system for the AI model: the model outputs a series of creative problem-solving methods, and the evaluator is responsible for judging them. After multiple iterations, an AI model with stronger mathematical capabilities emerges from the training.

Google DeepMind used the PaLM 2 model for testing. The researchers established a dedicated code pool, used code to input a series of questions for the model, and set up an evaluator process. After that, in each iteration, the model would automatically select problems from the code pool, generate creative new solutions, and submit them to the evaluator for evaluation. The best solutions would be re-added to the code pool to start another iteration.

FunSearch uses an iterative procedure. First, the user writes a description of the problem in the form of code. This description comprises a procedure to evaluate programs, and a seed program used to initialize the pool of programs. At each iteration, the system selects some programs from the current pool and feeds them to the LLM, which creatively builds upon them; the new programs are automatically evaluated, and the best ones are added back to the pool, continuing the self-improving loop.

Discovering new mathematical knowledge and algorithms in different domains is notoriously difficult and largely beyond the power of the most advanced AI systems. To tackle such challenging problems with FunSearch, multiple key components are needed. FunSearch generates programs that describe how its solutions were arrived at. This show-your-working approach is how scientists generally operate, with discoveries or phenomena explained through the process used to produce them. FunSearch also favours solutions represented by highly compact programs, that is, solutions with low Kolmogorov complexity. Short programs can describe very large objects, allowing FunSearch to scale.

Google said the FunSearch training method is particularly good at discrete mathematics (combinatorics). A model trained with the method can readily tackle extremal combinatorics problems. In a press release, the researchers described the process by which the model computes cap set constructions, a central problem in mathematics involving counting and arrangements.

To test its versatility, the researchers used FunSearch to approach another hard problem in math: the bin packing problem, which involves trying to fit items of different sizes into the smallest number of bins. The researchers left out the lines in the program that would specify how to solve it. That is where FunSearch comes in: it gets Codey to fill in the blanks, in effect suggesting code that will solve the problem. A second algorithm then checks and scores what Codey comes up with. The best suggestions, even if not yet correct, are saved and given back to Codey, which tries to complete the program again. "Many will be nonsensical, some will be sensible, and a few will be decent," says Kohli. "You take those truly inspired ones and you say, 'Okay, take these ones and repeat.'"

The bin packing problem is the problem of putting items of different sizes into a minimum number of containers. FunSearch provides a just-in-time solution: it generates a program that automatically adjusts based on the existing volume of the items. Researchers mentioned that, compared with other AI training methods that use neural networks to learn, the output code of a model trained with the FunSearch method is easier to check and deploy, meaning it is easier to integrate into real industrial environments.

The AI system, called FunSearch, made progress on Set-inspired problems in combinatorics, a field of mathematics that studies how to count the possible arrangements of sets. FunSearch automatically creates requests for a specially trained LLM, asking it to write short computer programs that can generate solutions to a particular scenario. The system then checks quickly to see whether those solutions are better than known ones. If not, it provides feedback to the LLM so that it can improve at the next round. "The way we use the LLM is as a creativity engine," says DeepMind computer scientist Bernardino. Not all programs that the LLM generates are useful, and some are so incorrect that they wouldn't even be able to run, he says.

"What I find really exciting, even more so than the specific results we found, is the prospects it suggests for the future of human-machine interaction in math. Instead of generating a solution, FunSearch generates a program that finds the solution. A solution to a specific problem might give me no insight into how to solve other related problems. But a program that finds the solution, that's something a human being can read and interpret." Artificial intelligence researchers thus claim to have made the world's first scientific discovery with an LLM, a breakthrough that suggests the technology behind ChatGPT and similar programs can generate information that goes beyond human knowledge.

FunSearch is a new method that uses Large Language Models (LLMs) to search for new solutions in mathematics and computer science. The method is described in an academic paper in Nature, a top academic journal. FunSearch is an evolutionary method that promotes and develops the highest-scoring ideas as computer programs, and the process of running and evaluating these programs is automatic. The system selects some programs from the current pool of programs, which are fed to an LLM. FunSearch uses Google's PaLM 2, but it is compatible with other LLMs trained on code. FunSearch can improve manufacturing algorithms, thereby optimizing logistics and reducing energy consumption.

Efe Udin is a seasoned tech writer with over seven years of experience. He covers a wide range of topics in the tech industry from industry politics to mobile phone performance. From mobile phones to tablets, Efe has also kept a keen eye on the latest advancements and trends. He provides insightful analysis and reviews to inform and educate readers. Efe is very passionate about tech and covers interesting stories as well as offers solutions where possible.

See the original post:
Google DeepMind announces the FunSearch training method - Gizchina.com

Read More..

DeepMind claims its AI can tackle unsolved mathematics – SiliconRepublic.com

The company said its FunSearch AI model has an automated evaluator to prevent hallucinations, allowing the model to find the best answers for advanced problems.

Google-owned DeepMind claims one of its AI models found a new answer for an unsolved mathematical problem, by tackling one of the biggest issues in large language models (LLMs).

This key issue is the tendency for these AI models to share factually incorrect information, which is commonly referred to as hallucinations. This issue has been noted in many popular AI models, such as ChatGPT, which has faced lawsuits for defaming individuals.

DeepMind claims its AI model FunSearch tackles this issue by including an automated evaluator that protects against hallucinations and incorrect ideas.

The company tested this model on an unsolved maths problem known as the cap set problem, which involves finding the largest size of a certain type of set. DeepMind claims FunSearch discovered new constructions of large cap sets that go beyond the best known ones.

"In addition, to demonstrate the practical usefulness of FunSearch, we used it to discover more effective algorithms for the bin-packing problem, which has ubiquitous applications such as making data centres more efficient," DeepMind said in a blogpost.

The AI model contains the automated evaluator and a pre-trained LLM that aims to provide creative solutions to problems. DeepMind claims the back-and-forth of these two components creates an evolutionary method of finding the best ways to solve a problem.

Problems are presented to the AI model in the form of code, which includes a procedure to evaluate programs and a seed program used to initialise a pool of programs. DeepMind said FunSearch then selects some programs and creatively builds upon them.

The results are evaluated and the best ones are added back to the pool of existing programs, which creates a self-improving loop according to DeepMind.

"FunSearch demonstrates that if we safeguard against LLMs' hallucinations, the power of these models can be harnessed not only to produce new mathematical discoveries, but also to reveal potentially impactful solutions to important real-world problems," DeepMind said.

DeepMind has claimed multiple breakthroughs with the power of AI. Last year, DeepMind claimed its AlphaFold model predicted the structure of nearly every protein known to science, more than 200 million in total.

At the end of October, DeepMind claimed the next version of AlphaFold can predict nearly all molecules in the Protein Data Bank, a database of the 3D structures of various biological molecules.

DeepMind also claims that one of its AI models, GraphCast, can predict weather conditions up to 10 days in advance, and more accurately than standard industry methods. Meanwhile, the company says one of its models has been used by researchers to create hundreds of new materials in laboratory settings.


Read the original post:
DeepMind claims its AI can tackle unsolved mathematics - SiliconRepublic.com


Google’s AI powerhouse finds millions of new crystals that could change the fate of humanity forever and, for better … – TechRadar

Researchers at Google DeepMind have used artificial intelligence to discover new crystals and inorganic materials that could power future technologies as part of a landmark study.

Using the Graph Networks for Materials Exploration (GNoME) deep learning tool, researchers found 2.2 million new crystals, including 380,000 stable materials.

The discovery could represent a landmark moment in the discovery of materials used to power modern technologies, such as computer chips, batteries, and solar panels - all of which rely on inorganic crystals.

Availability and stability of these materials is a common hurdle in the development of such technologies. However, researchers said that in using the GNoME AI tool, they were able to dramatically increase the speed and efficiency of discovery by predicting the stability of new materials.

"To enable new technologies, crystals must be stable, otherwise they can decompose, and behind each new, stable crystal can be months of painstaking experimentation," the study noted.
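The stability criterion behind these predictions is usually the "energy above the convex hull": a candidate compound is predicted stable only if its energy per atom lies on or below the lower convex hull of all competing phases. A toy sketch for a binary A-B system follows; this is an illustration of the criterion, not GNoME's actual code, and it assumes the `known` points already lie on the hull, including the end-members:

```python
def energy_above_hull(x, energy, known):
    """Distance (eV/atom) of a candidate binary compound above the
    lower convex hull of known phases. x is the fraction of element B;
    `known` is a list of (x, energy) points assumed to lie on the hull,
    including the end-members at x=0 and x=1. A result <= 0 means the
    candidate is predicted stable."""
    pts = sorted(known)
    for (x1, e1), (x2, e2) in zip(pts, pts[1:]):
        if x1 <= x <= x2:
            # linearly interpolate the hull between the bracketing phases
            e_hull = e1 + (e2 - e1) * (x - x1) / (x2 - x1)
            return energy - e_hull
    raise ValueError("composition outside known range")
```

A candidate sitting 0.1 eV/atom above the hull is metastable and likely to decompose into the neighbouring phases; GNoME's contribution was predicting these energies accurately enough, at scale, to flag the 380,000 candidates that land on the hull.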

"With GNoME, we've multiplied the number of technologically viable materials known to humanity. Of its 2.2 million predictions, 380,000 are the most stable, making them promising candidates for experimental synthesis."

Among these candidates are materials with the potential to enable transformative future technologies, ranging from superconductors and materials for powering supercomputers to next-generation batteries that could boost the efficiency of electric vehicles.

Google DeepMind's findings were published in the journal Nature, with the firm noting that, over the last decade, more than 28,000 new materials have been discovered following extensive research.

However, traditional AI-based approaches to searching for novel crystal structures have typically been expensive, trial-and-error processes that could take months to deliver minimal results.

AI-guided approaches hit a fundamental limit in their ability to accurately predict materials that could be experimentally viable, the study said.

GNoME's recent discovery of 2.2 million materials would be equivalent to about 800 years' worth of knowledge, researchers said, which highlights the transformative power and accuracy now afforded to scientists operating in the field.

Around 52,000 new compounds similar to graphene have been discovered as part of the project, which the study said has the potential to revolutionize electronics and superconductor development.

In previous years, just 1,000 materials of this kind had been identified via previous techniques.

"We also found 528 potential lithium-ion conductors, 25 times more than a previous study, which could be used to improve the performance of rechargeable batteries."

Long-term, Google DeepMind researchers said the GNoME project aims to drive down the cost of discovering new materials.

So far, external researchers have created 736 of the materials discovered through GNoME in a lab environment. The company also plans to release its database of newly discovered crystals and share its findings with the research community.

"By giving scientists the full catalog of the promising recipes for new candidate materials, we hope this helps them to test and potentially make the best ones."

View original post here:
Google's AI powerhouse finds millions of new crystals that could change the fate of humanity forever and, for better ... - TechRadar


Discovering the Potential of AI in Mathematics: A Breakthrough by Google DeepMind – Medriva

Discovering the Potential of AI in Mathematics

Artificial Intelligence is making strides in various fields, and mathematics is no exception. Google DeepMind's new AI tool, FunSearch, is a significant breakthrough in this regard. The tool is not limited to specific tasks like its predecessors. Instead, it utilizes a large language model named Codey to unveil new mathematical functions, demonstrating the potential of AI in fundamental math and computer science.

The process is fascinating. FunSearch suggests code that potentially solves complex mathematical problems. The incorrect or nonsensical answers are rejected, and the good ones are plugged in. After millions of suggestions and repetitions, FunSearch has successfully produced a correct and previously unknown solution to the cap set problem, a niche but important problem in mathematics. This achievement is a testament to the fact that large language models are indeed capable of making groundbreaking discoveries.

The Google DeepMind FunSearch tool was developed with the primary goal of helping researchers discover new results in mathematics and computer science. Rather than sifting through existing content, it searches through the space of computer programs: a large language model proposes candidate programs, and an automated evaluator scores them, surfacing the most promising solutions.

The ultimate aim of FunSearch is to foster innovation and collaboration within the AI community. The tool has been instrumental in solving complex problems, such as the cap set problem, demonstrating its potential to revolutionize the way we approach problem-solving in various fields.

Despite the evident success of FunSearch, the researchers behind it confess that they don't fully understand why it works as efficiently as it does. However, the results are undeniable and speak volumes about the tool's effectiveness. The FunSearch tool is a testament to the unexplored potential of artificial intelligence, demonstrating the capacity of AI to make meaningful contributions to diverse fields.

Developed by Google's AI research lab, DeepMind, FunSearch forms part of the company's broader effort to apply large language models to scientific discovery. Despite its name, it is not a tool for finding entertaining web content: the "search" refers to searching the space of mathematical functions, expressed as computer programs.

As artificial intelligence continues to evolve, tools like FunSearch are likely to become invaluable assets in various fields, including mathematics, computer science, and beyond. The success of FunSearch is a significant step forward in the journey of AI. It underscores the transformative potential of AI, which is no longer limited to the digital world but is also making substantial contributions to fundamental sciences.

Originally posted here:
Discovering the Potential of AI in Mathematics: A Breakthrough by Google DeepMind - Medriva
