Category Archives: Deep Mind

DeepMind Alum Wants to Use AI to Speed the Development of … – Data Center Knowledge

(Bloomberg) -- Ever since ChatGPT went viral last fall, companies have touted many ways artificial intelligence can make our lives easier. They've promised superhuman virtual assistants, tutors, lawyers and doctors.

What about a superhuman chemical engineer?

London-based startup Orbital Materials would like to create just that. The startup is working to apply generative AI, the method behind tools like ChatGPT, expressly to accelerate the development of clean energy technologies. Essentially, the idea is to make computer models powerful and sharp enough to identify the best formulas for products like sustainable jet fuel or batteries free of rare-earth minerals.

Jonathan Godwin, an Orbital Materials co-founder, imagines a system that's as accessible and effective as the software engineers use today to model designs for things like airplane wings and household furniture.

"That, historically, has just been too difficult for molecular science," he said.

ChatGPT works because it's adept at predicting text: here's the next word or sentence that makes sense. For the same idea to work in chemistry, an AI system would need to predict how a new molecule would behave, not just in a lab but in the real world.
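The next-word objective can be illustrated with a toy bigram model, which simply counts which word most often follows each word in a corpus. The corpus and function names here are invented for illustration; production chatbots use neural networks over subword tokens, but the prediction objective is the same:

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which word follows each word,
# then "predict" the most frequent continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often observed after `word`."""
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" twice, vs. once for "mat"/"fish"
```

The chemistry analogue would swap words for molecular structures and "makes sense" for "behaves as predicted", which is exactly the part that remains hard.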

Several researchers and companies have deployed AI to hunt for newer, greener materials. Symyx Technologies, a materials discovery company formed in the 1990s, wound down after a sale. More recent companies have gained traction making petrochemical alternatives and programming cells.

Still, for many materials needed to decarbonize the planet, the technology isn't there yet.

It can take decades for a new advanced material to move from discovery to the market. That timeline is way too slow for the businesses and nations looking to rapidly cut emissions as they race to meet net-zero targets.

"That needs to happen in the next 10 years, or sooner," said Aaike van Vugt, co-founder of material science startup VSParticle.

AI researchers think they can help. Before launching Orbital Materials, Godwin spent three years researching advanced material discovery at DeepMind, Google's AI lab. That lab released AlphaFold, a model to predict protein structures that could speed up the search for new drugs and vaccines. That, coupled with the rapid takeoff of tools like ChatGPT, convinced him that AI would soon be capable of conquering the material world.

"What I thought would take 10 years was happening in a matter of 18 months," he said. "Things are getting better and better and better."

Godwin compares his method with Orbital Materials to AI image generators like Dall-E and Stable Diffusion. Those models are created using billions of online images so that when users type in a text prompt, a photorealistic creation appears. Orbital Materials plans to train models with loads of data on the molecular structure of materials. Type in some desired property and material (say, an alloy that can withstand very high heat) and the model spits out a proposed molecular formula.

In theory, this approach is effective because it can both imagine new molecules and measure how they will work, said Rafael Gomez-Bombarelli, an assistant professor at MIT who advised Orbital Materials. (He said he is not an investor.)

Right now, many tech investors are prowling for companies that can turn a profit by improving greener material production. That's particularly the case in Europe, where regulators are forcing manufacturers to lower carbon emissions or face stiff fines. The markets for advanced materials in sectors like renewable energy, transportation and agriculture are set to grow by tens of billions of dollars in the coming years.

Some researchers, like those at the University of Toronto, have set up self-driving labs that pair AI systems with robots to search for new materials at unparalleled speeds. Dutch startup VSParticle makes machinery used to develop components for gas sensors and green hydrogen.

"Think of it like a DNA sequencer in a genomics lab," said co-founder van Vugt, who believes his equipment can help shorten the 20-year time horizon of advanced materials to one year and, eventually, a couple of months. His company is currently raising investment capital.

Orbital Materials, which raised $4.8 million in previously undisclosed initial funding, is planning to start by turning its AI gaze toward carbon capture. The startup is working on an algorithmic model that designs molecular sieves, tiny pellets installed within a device that can sift CO2 and other noxious chemicals from other emissions, more efficiently than current methods. (Godwin said the startup, which has several AI researchers, plans to publish peer-reviewed results on this tech soon.) Carbon capture has failed to work at scale to date, though thanks to a slew of government incentives, particularly in the US, interest in deploying the technology is rapidly ramping up.

Eventually, Godwin said, Orbital Materials would like to move into areas like fuel and batteries. He imagines mirroring the business model of synthetic biology and drug discovery companies: develop the brainpower, then license out the software or novel materials to manufacturers. "It's going to take us a little bit of time to get to market," said Godwin. "But once you're there, it happens very quickly."

But getting the AI right is only half the battle. Actually making advanced materials in areas like battery and fuel production requires working with huge incumbent enterprises and messy supply chains. This can be even costlier than developing new drugs, argued MIT's Gomez-Bombarelli.

"The economics and de-risking make it just way harder," he said.

Heather Redman, a managing partner with Flying Fish Partners, which backed Orbital Materials, said most tech investors chasing the shiny penny of generative AI have failed to look at its applications outside of chatbots. She acknowledged the risks of startups working in the energy sector, but believes the $1 trillion potential of markets like batteries and carbon capture is worth the investing risk.

"We love big hills as long as there's a big gigantic market and opportunity at the top," she said.

Gomez-Bombarelli is aware how big these hills can be. He helped start a similar company to Orbital Materials in 2015, called Calculario, which used AI and quantum chemistry to speed up the discovery process for a range of new materials. It didn't get enough traction and had to focus on the OLED industry.

"Maybe we didn't make our case," he said. "Or maybe the market wasn't ready."

Whether it is now is an open question. But there are encouraging signs. Computing certainly has improved. Newcomers might also have an easier time selling AI because would-be customers could more easily grasp the potential. Gomez-Bombarelli said the pitch is relatively simple: "Look at ChatGPT. We can do the same thing for chemistry."


Vines in Mind – richmondmagazine.com – Richmond magazine

When David Hunsaker and his wife, Barbara, get on the subject of tomatoes, you'd better get comfortable, especially if they've enjoyed a glass or two of sparkling rosé. Known locally as the tomato king and queen, the vivacious duo and enthusiastic leaders of Village Garden farm over 300 varieties of the fruit on their small farm in Hanover that boasts a tomato-friendly terroir: a trifecta of clay, dirt and sand.

For the past two years, the couple, along with enthusiastic oenophile and sommelier Jason Tesauro of Barboursville Vineyards, have been calling upon top chefs and food and beverage professionals across the state to elevate the humble tomato through the Summer Supper Somm dinner series. And in 2023, the ambitious ambassadors of the commonwealth's bounty are back at it for a third go-round.

"It's a good time and great food, it's a different interaction, a little bit of a different experience, and that's exactly what we're striving for," David says with a wide smile.

The tomato showcase held its first service of the year on June 26 at Shagbark and concludes on Aug. 13 at Zoe's Steak & Seafood in Virginia Beach. Other events in the series include an Indian brunch at Lehja in Short Pump, a dinner at pioneering Parisian institution L'Auberge Chez François in Northern Virginia, a Church Hill tomato crawl, a walk-around tasting soiree at Lewis Ginter Botanical Garden and a backyard party on the Hunsaker farm, in addition to many other juicy destinations in between. Each dinner includes pairings with wines from Barboursville.

The series has been garnering a fan base of returning guests and participating restaurants, much like concertgoers who get hooked and hop on tour to see their favorite musicians. Tesauro notes that about one out of every four diners is a repeat attendee: food fanatics who get giddy over the varied and versatile produce just as much as the chefs who are uncovering its potential.

And while the founders of Summer Supper Somm have a fervent dedication to and reverence for the fruit behind their series, that deep-seated admiration is simply part of the event's enticing nature and its exploration of the tomato's rustic roots and endless possibilities.

When asked how they'll keep things fresh and interesting during this year's events, Barbara, who has a soft spot for a ribbed Tlacolula pink, replies without pause, "I think the tomato does that for us."

While adhering to the series' laid-back flair and "not too many guardrails" mantra, David says that with this third iteration, they've learned that communal dining works best, and staggered seating not so much. This year, participating chefs are also encouraged to dig a little deeper, cracking open cookbooks of the past to gain inspiration for dishes that are historically inspired.

"The first ketchup, the first tomato gravy, the first tomato aspic, all of these things have history," he says.

Newcomers on the bill for the nearly summerlong ode to tomatoes include 21 Spoons in Midlothian, Magnolias at The Mill in Purcellville, Michelin-starred Marcel's by Robert Wiedmaier in Washington, D.C., Lewis Ginter Botanical Garden, Acacia Midtown and Yellow Umbrella Provisions. During the Church Hill Tomato Crawl, Sub Rosa Bakery, 8 1/2 and Cobra Burger will feature tomato-centric specials on their menus.

"During this series, there will be an opportunity for people to have the light come on about what's available in their own neighborhood and out their back door," Tesauro says. "These are tomatoes from where we live; this is wine that's grown a cork's throw from where you're building your life right now. Connecting those dots has a transportive effect."

Beneficiaries of the event are The Holli Fund and SCAN. For more information on the series and the full list of events, visit instagram.com/summersuppersomm.


5 things about AI you may have missed today: Google to use public info to train AI, tech layoffs, more – HT Tech

After announcing Gemini, a DeepMind project aiming to surpass artificial intelligence models like ChatGPT, yesterday, Google has now updated its privacy policy, suggesting that it will use publicly available data to train its AI models. In other news, a growing number of tech layoffs are being attributed to the rise of AI, which has concerned many researchers who initially believed tech roles would largely remain safe. This and more in today's AI roundup. Let us take a closer look.

A report by Gizmodo has revealed that Google has updated its privacy policy and now suggests that it will use any data that is publicly available (can be read by Google) to train its AI models.

"Google uses the information to improve our services and to develop new products, features, and technologies that benefit our users and the public. For example, we use publicly available information to help train Google's AI models and build products and features like Google Translate, Bard, and Cloud AI capabilities," the new policy states.

The most important change in the updated privacy policy is the wording: earlier, Google said the data would be used for its "language models," which has now been replaced with "AI models." This raises serious concerns about the privacy of individuals who post things online, as the issue is no longer just who has access to the data; users do not even control how the data can be used.

It was speculated that this was one of the reasons why both Reddit and Twitter made drastic policy changes to keep AI data harvesting at bay.

A new report by CNN claims that there is a growing number of layoffs occurring in the tech sector, a majority of which are linked to AI. Many employees have been fired and hiring has been frozen as companies figure out which roles can be taken over by AI.

Highlighting such an instance, the report mentioned IBM CEO Arvind Krishna's statements from an interview with Bloomberg, where he said the company was going to pause hiring to understand where AI's role can be more impactful.

Meesho and the Vision and AI Lab (VAL) of the Indian Institute of Science (IISc) will collaborate to conduct research into generative AI, a report by Business Standard said. The two have also signed a one-year memorandum of understanding (MoU).

As per the MoU, Meesho will let its data scientists work with researchers from IISc to focus on multimodal representation learning and generative AI capabilities. Meesho believes this collaboration will result in the expansion of the e-retail sector by harnessing the abilities of AI.

Universities in the UK have prepared guiding principles around generative AI. The central focus is to provide education and awareness about such technologies as institutes struggle to adapt teaching and assessment methods to the rise of AI, a report by The Guardian stated.

Unlike the previous sentiment that AI should be banned from educational institutions, the new guidelines emphasize the need to learn and adapt to this growing technology to tap into its potential, while also informing students about the risks of plagiarism, bias, and inaccuracy in AI.

An Instagram page by the username ai.magine_ has shared a series of photos showcasing characters from the popular sitcom Friends being reimagined in Indian ethnic dresses. The post showcases Chandler and Monica tying the knot at an Indian wedding. It also shows Joey attending the wedding wearing an Indian sherwani.


Google launches Gemini: The groundbreaking AI project set to outperform ChatGPT – People Matters

Google is embarking on an exhilarating venture with the introduction of Gemini, a groundbreaking project that has the potential to revolutionize the AI industry. Unveiled at this year's Google I/O conference, there is significant anticipation surrounding Gemini's ability to surpass the performance of current AI systems, including OpenAI's ChatGPT.

Google's DeepMind, led by CEO Demis Hassabis, has given birth to Gemini, an innovative AI creation. With an aim to surpass existing AI models like ChatGPT, Gemini operates under the vision of tackling various data and tasks without relying on specialized models. It promises to generate unparalleled content that goes beyond the confines of its training data, marking a significant leap in the field of AI.

Drawing upon the remarkable victory of Google's AlphaGo in 2016, the development strategy for Gemini capitalizes on the techniques that propelled AlphaGo to success. By incorporating AlphaGo's problem-solving capabilities and integrating advanced language processing capabilities, Gemini represents a fusion of these strengths.

The project also embraces reinforcement learning, an iterative approach in which the software continuously endeavors to complete tasks and enhances its performance based on feedback.
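That trial-and-feedback loop can be sketched with tabular Q-learning, one of the simplest reinforcement learning algorithms. The toy environment and all parameters below are invented for illustration; Gemini's actual training setup has not been disclosed:

```python
import random

random.seed(0)

# Tabular Q-learning on a toy 5-state corridor: the agent starts at
# state 0 and earns a reward only on reaching state 4. Repeated attempts
# plus feedback (the reward) gradually improve its action values.
N_STATES, ACTIONS = 5, (-1, +1)     # move left or right
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda b: Q[(s, b)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # feedback step: nudge the estimate toward reward + discounted future value
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s2

# With these settings the learned policy converges to always stepping right.
policy = [max(ACTIONS, key=lambda b: Q[(s, b)]) for s in range(N_STATES - 1)]
print(policy)
```

AlphaGo applied the same improve-from-feedback principle at vastly larger scale, with neural networks standing in for the Q table and self-play providing the feedback.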

With Gemini still in its developmental phase, the anticipated features of this system have already garnered widespread global attention. It is predicted that Gemini will introduce significant transformations in the AI landscape, particularly within the generative AI sector, which is projected to reach a value of $80.16 billion by 2030.

However, it is important to note that Gemini's current capabilities are primarily focused on text processing, unlike GPT-4, which possesses the ability to process images, audio, text, and video. Despite this limitation, Gemini aims to deliver more imaginative and creative responses, aiming to transcend the boundaries of its training data and generate unexpected content.

Previous AI endeavors by Google, such as the chatbot Bard, encountered obstacles that resulted in a factual error during its initial demonstration. This incident had a considerable impact on the market value of Alphabet, Google's parent company. Consequently, the expectations and demands for a flawless launch of Gemini are heightened. It is expected that the introduction of Gemini will undergo meticulous planning to avoid any potential mishaps.

With the ongoing development of Gemini, it emerges as a project of significant interest. The potential success of Gemini holds the power to redefine the AI industry and set unprecedented benchmarks for AI capabilities. Nonetheless, until the final version is released and evaluated in real-world scenarios, the question of whether it will surpass ChatGPT and other AI systems remains unanswered.


OpenAI Rival Inflection AI Raises $1.3B to Enhance Its Pi Chatbot – EnterpriseAI

Palo Alto-based Inflection AI, an OpenAI competitor, announced it has raised $1.3 billion in a funding round led by Microsoft, Reid Hoffman, Bill Gates, Eric Schmidt, and new investor Nvidia for a total of $1.525 billion raised.

This latest round places the companys valuation around $4 billion, according to a Reuters report.

Inflection AI claims to be building the largest AI cluster in the world along with partners CoreWeave and Nvidia. When completed, the system will comprise 22,000 Nvidia H100 Tensor Core GPUs, a release stated.

"The deployment of 22,000 NVIDIA H100 GPUs in one cluster is truly unprecedented and will support training and deployment of a new generation of large-scale AI models. Combined, the cluster develops a staggering 22 exaFLOPS in the 16-bit precision mode, and even more if lower precision is utilized," the company said.
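The quoted cluster figure is a straightforward product of GPU count and per-GPU throughput. As a rough sanity check, assuming about 1 petaFLOP/s of dense 16-bit throughput per H100 (NVIDIA's published spec is on the order of 0.99 petaFLOP/s dense FP16/BF16, roughly double with structured sparsity):

```python
# Back-of-the-envelope check of the 22-exaFLOPS claim.
# FP16_PER_GPU is an approximation, not an exact vendor figure.
GPUS = 22_000
FP16_PER_GPU = 1e15  # ~1 petaFLOP/s dense 16-bit per H100 (approximate)

cluster_flops = GPUS * FP16_PER_GPU
print(f"{cluster_flops / 1e18:.0f} exaFLOPS")  # -> 22 exaFLOPS
```

The "even more if lower precision is utilized" caveat reflects that FP8 throughput per GPU is roughly double the 16-bit figure.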

The current system contains over 3,500 H100s and recently completed the reference training task of the open-source MLPerf benchmark in 11 minutes. Inflection AI collaborated with Nvidia and CoreWeave to run the MLPerf tests and to fine-tune and optimize the cluster.

The company has developed a large language model, Inflection-1, that enables interaction with Pi, a personal AI chatbot. The company says the new funds will support its continued work of building and supporting Pi. Inflection AI describes Pi as a new class of AI designed to be a kind and supportive companion offering text and voice conversations, friendly advice, and concise information in a natural, flowing style.

Pi, which stands for "personal intelligence," is marketed as an AI focused on prioritizing the interests of people, both in its functionality and monetization. "Imagine your personal AI companion with the single mission of making you happier, healthier, and more productive," wrote Mustafa Suleyman, CEO and co-founder of Inflection AI, in a blog post.

Instead of the big tech companies prioritizing advertisers and content creators, Inflection AI is trying a different approach, Suleyman says.

"We don't have all the answers, but we are setting out to develop a personal intelligence that really does work for you, that's in your corner, always on your team. Our mission is to firmly align your AI with you, and your interests, above all else. It means designing an AI that helps you articulate your intentions, organize your life and be there for you when you need it," he wrote.

Suleyman, a co-founder of Google's DeepMind, created Inflection AI in 2022 along with LinkedIn co-founder Reid Hoffman and DeepMind alum Karen Simonyan, Inflection AI's chief scientist.

The founders have made Inflection AI a public benefit corporation (PBC), which they say gives them a legal obligation to run the company in a way that balances "the financial interests of stockholders, the best interests of people materially affected by our activities, and the promotion of our specific public benefit purpose, which is to develop products and technologies that harness the power of AI to improve human well-being and productivity, whilst respecting individual freedoms, working for the common good and ensuring our products widely benefit current and future generations."

"A powerful benefit of the AI revolution is the ability to use natural, conversational language to interact with supercomputers to simplify aspects of our everyday lives," said Jensen Huang, founder and CEO of Nvidia, in a release. "The world-class team at Inflection AI is helping to lead this groundbreaking work, deploying NVIDIA AI technology to develop, train and deploy massive generative AI models that enable amazing personal digital assistants."

"We are very excited to partner with Inflection AI, a pioneering AI company with an outstanding team, to bring the power of supercomputing to cutting-edge consumer products," said Michael Intrator, CEO of CoreWeave.


5 things about AI you may have missed today: From AI hub, Gemini AI to crime in era of AI and much more – HT Tech

The artificial intelligence race has just become more intense. Google has introduced Gemini, a project by DeepMind aiming to surpass AI models like ChatGPT; the India Electronics and Semiconductor Association (IESA) has declared Hyderabad India's AI capital; a UK startup's AI system, equipped with advanced image recognition, scrutinises waste processing and recycling facilities; and AI-generated images of Zomato's delivery agents have gone viral on social media. This and more in our daily AI roundup. Let us take a look.

Google introduces Gemini, a project by DeepMind, aiming to surpass AI models like ChatGPT. Gemini's versatility enables it to handle any data or task without specialised models, promising unique content beyond its training data. Building on AlphaGo's success, Gemini combines problem-solving techniques with advanced language processing and reinforcement learning. While limited to text processing, Gemini's creative potential sparks global interest in the rapidly expanding AI industry.

The India Electronics and Semiconductor Association (IESA) declares Hyderabad as India's AI capital, citing excellent leadership and a thriving ecosystem. Hyderabad becomes the prime choice for hosting flagship AI and Machine Learning events. After two years of virtual summits, IESA plans a grand physical event in September. The IESA president commended the city's exponential growth, with ongoing infrastructure development and new global enterprises.

With global solid waste production predicted to soar by 73% to 3.88 billion tonnes by 2050, a UK start-up called Greyparrot has taken the challenge head-on. Its AI system, equipped with advanced image recognition, scrutinises waste processing and recycling facilities. Over 50 sites in Europe have cameras installed above conveyor belts, enabling real-time analysis of the continuous stream of waste. Greyparrot aims to revolutionise recycling efficiency and address the mounting plastic waste crisis.

A LinkedIn post by Sourabh Dhabhai showcases AI-generated images of Zomato delivery agents enjoying the Mumbai rains. The heartwarming concept reminds us to appreciate their moments of joy amid their work. The post garnered over 6k reactions, with people loving the idea and expressing gratitude for capturing such a sweet moment. AI continues to produce realistic and captivating visuals.

The Union Ministry of Home Affairs will host the "G20 Conference on Crime and Security in the Age of NFTs, AI, and Metaverse" on July 13-14 in Gurugram. In collaboration with other ministries and international bodies, the event aims to address challenges posed by advancing technologies like NFTs, AI, and the Metaverse. Participants from G20 and invitee nations will discuss strategies and implications.


Here’s Why Google DeepMind’s Gemini Algorithm Could Be Next-Level AI – Singularity Hub

Recent progress in AI has been startling. Barely a week's gone by without a new algorithm, application, or implication making headlines. But OpenAI, the source of much of the hype, only recently completed its flagship algorithm, GPT-4, and according to OpenAI CEO Sam Altman, its successor, GPT-5, hasn't begun training yet.

It's possible the tempo will slow down in coming months, but don't bet on it. A new AI model as capable as GPT-4, or more so, may drop sooner than later.

This week, in an interview with Will Knight, Google DeepMind CEO Demis Hassabis said their next big model, Gemini, is currently in development, a process that will take a number of months. Hassabis said Gemini will be a mashup drawing on AI's greatest hits, most notably DeepMind's AlphaGo, which employed reinforcement learning to topple a champion at Go in 2016, years before experts expected the feat.

"At a high level you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of the large models," Hassabis told Wired. "We also have some new innovations that are going to be pretty interesting." All told, the new algorithm should be better at planning and problem-solving, he said.

Many recent gains in AI have been thanks to ever-bigger algorithms consuming more and more data. As engineers increased the number of internal connections, or parameters, and began to train them on internet-scale data sets, model quality and capability increased like clockwork. As long as a team had the cash to buy chips and access to data, progress was nearly automatic because the structure of the algorithms, called transformers, didn't have to change much.

Then in April, Altman said the age of big AI models was over. Training costs and computing power had skyrocketed, while gains from scaling had leveled off. "We'll make them better in other ways," he said, but didn't elaborate on what those other ways would be.

GPT-4, and now Gemini, offer clues.

Last month, at Google's I/O developer conference, CEO Sundar Pichai announced that work on Gemini was underway. He said the company was building it from the ground up to be multimodal (that is, trained on and able to fuse multiple types of data, like images and text) and designed for API integrations (think plugins). Now add in reinforcement learning and perhaps, as Knight speculates, other DeepMind specialties in robotics and neuroscience, and the next step in AI is beginning to look a bit like a high-tech quilt.

But Gemini wont be the first multimodal algorithm. Nor will it be the first to use reinforcement learning or support plugins. OpenAI has integrated all of these into GPT-4 with impressive effect.

If Gemini goes that far, and no further, it may match GPT-4. What's interesting is who's working on the algorithm. Earlier this year, DeepMind joined forces with Google Brain. The latter invented the first transformers in 2017; the former designed AlphaGo and its successors. Mixing DeepMind's reinforcement learning expertise into large language models may yield new abilities.

In addition, Gemini may set a high-water mark in AI without a leap in size.

GPT-4 is believed to be around a trillion parameters, and according to recent rumors, it might be a mixture-of-experts model made up of eight smaller models, each a fine-tuned specialist roughly the size of GPT-3. Neither the size nor architecture has been confirmed by OpenAI, which, for the first time, did not release specs on its latest model.

Similarly, DeepMind has shown interest in making smaller models that punch above their weight class (Chinchilla), and Google has experimented with mixture-of-experts (GLaM).
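The mixture-of-experts idea those projects explore can be sketched in a few lines: a small gating network scores several specialist sub-networks for each input and routes the input to the top-scoring one, so only a fraction of the model's total parameters is active for any given token. All sizes and weights below are random placeholders for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixture-of-experts layer with top-1 routing: a gate scores each
# expert per input; only the winning expert's weights are used.
d_model, n_experts = 8, 4
W_gate = rng.normal(size=(d_model, n_experts))
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x):
    """Route input vector x to its highest-scoring expert."""
    scores = x @ W_gate          # one score per expert
    k = int(np.argmax(scores))   # pick the winning expert
    return k, x @ experts[k]     # only expert k's parameters are touched

x = rng.normal(size=d_model)
chosen, y = moe_forward(x)
print(f"expert {chosen} handled this input; output shape {y.shape}")
```

Production systems typically route to the top two experts, add a load-balancing loss so all experts stay in use, and train gate and experts jointly, but the routing mechanics are the same.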

Gemini may be a bit bigger or smaller than GPT-4, but likely not by much.

Still, we may never learn exactly what makes Gemini tick, as increasingly competitive companies keep the details of their models under wraps. To that end, testing advanced models for ability and controllability as they're built will become more important, work that Hassabis suggested is also critical for safety. He also said Google might open models like Gemini to outside researchers for evaluation.

"I would love to see academia have early access to these frontier models," he said.

Whether Gemini matches or exceeds GPT-4 remains to be seen. As architectures become more complicated, gains may be less automatic. Still, it seems a fusion of data and approaches (text with images and other inputs, large language models with reinforcement learning models, the patching together of smaller models into a larger whole) may be what Altman had in mind when he said we'd make AI better in ways other than raw size.

Hassabis was vague on an exact timeline. If he meant training wouldn't be complete for a number of months, it could be a while before Gemini launches. A trained model is no longer the end point. OpenAI spent months rigorously testing and fine-tuning GPT-4 in the raw before its ultimate release. Google may be even more cautious.

But Google DeepMind is under pressure to deliver a product that sets the bar in AI, so it wouldn't be surprising to see Gemini later this year or early next. If that's the case, and if Gemini lives up to its billing (both big question marks), Google could, at least for the moment, reclaim the spotlight from OpenAI.



DeepMind scientists demonstrate the value of using the "veil of ignorance" to craft ethical principles for AI systems – PsyPost

New research by scientists at Google DeepMind provides evidence that the "veil of ignorance" can be a valuable concept to consider when crafting governance principles for artificial intelligence (AI). The researchers found that having people make decisions behind the veil of ignorance encouraged fairness-based reasoning and led to the prioritization of helping the least advantaged.

The new findings appear in the Proceedings of the National Academy of Sciences (PNAS).

The veil of ignorance is a concept introduced by the philosopher John Rawls. It is a hypothetical situation where decision-makers must make choices without knowing their own personal characteristics or circumstances. By placing themselves behind this veil, individuals are encouraged to make impartial decisions that consider the interests of all parties involved. The authors of the new research wanted to see if applying the veil of ignorance could help identify fair principles for AI that would be acceptable on a society-wide basis.

"As researchers in a company developing AI, we see how values are implicitly baked into novel technologies. If these technologies are going to affect many people from across the world, it's critical from an ethical point of view that the values which govern these technologies are chosen in a fair and legitimate way," explained study co-authors Laura Weidinger, Kevin R. McKee, and Iason Gabriel, who are all research scientists at DeepMind, in a joint statement to PsyPost.

"We wanted to contribute a part of the overall puzzle of how to make this happen. In particular, our aim was to help people deliberate more impartially about the values that govern AI. Could the veil of ignorance help augment deliberative processes such that diverse groups of people have a framework to reason about what values may be the most fair?"

"Can a tool of this kind encourage people to agree upon certain values for AI? Iason Gabriel had previously hypothesized that the veil of ignorance might produce these effects, and we wanted to put these ideas to the (empirical) test," the researchers explained.

The researchers conducted a series of five studies with 2,508 participants in total. Each study followed a similar procedure, with some minor variations. Participants in the studies completed a computer-based harvesting task involving an AI assistant. Participants were informed that the harvesting task involved a group of three other individuals (who were actually computer bots) and one AI assistant.

Each participant was randomly assigned to one of four fields within the group, which varied in terms of harvesting productivity. Some positions were severely disadvantaged, with a low expected harvest, while others were more advantaged, with a high expected harvest.

The participants were asked to choose between two principles: a prioritarian principle, in which the AI sought to help the worst-off individuals, and a maximization principle, in which the AI sought to maximize overall benefits. Participants were randomly assigned to one of two conditions: they either made their choice of principle behind the veil of ignorance (without knowing their own position or how they would be affected) or they had full knowledge of their position and its impact.
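The logic of this choice can be illustrated with a toy model. Everything below is invented for illustration: the harvest numbers, the fixed "boost" the AI assistant contributes, and the square-root (risk-averse) utility function are assumptions, not the study's actual payoffs or analysis. The sketch shows why a chooser behind the veil, whose field assignment is uniformly random, can prefer the prioritarian rule even when both rules add the same total harvest:

```python
import math

# Hypothetical expected harvests for the four fields (invented numbers,
# not the study's actual payoffs).
fields = [2.0, 6.0, 8.0, 10.0]
boost = 4.0  # extra harvest the AI assistant contributes

# Prioritarian principle: the AI helps the worst-off field.
prioritarian = list(fields)
prioritarian[0] += boost

# Maximization principle: the AI helps the most productive field.
maximization = list(fields)
maximization[-1] += boost

def expected_utility(outcomes, utility=math.sqrt):
    # Behind the veil your position is uniformly random, so average a
    # concave (risk-averse) utility over all four positions.
    return sum(utility(x) for x in outcomes) / len(outcomes)

print(round(expected_utility(prioritarian), 3))  # 2.722
print(round(expected_utility(maximization), 3))  # 2.608
```

In this toy setup both rules produce the same group total, so the expected raw payoff behind the veil is identical; the preference for the prioritarian rule comes entirely from risk aversion about landing in the worst-off field, which is one candidate mechanism for the fairness-based reasoning the study reports.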

The researchers found that participants in the veil of ignorance condition were more likely to choose the prioritarian principle over the maximization principle. This effect was consistent across all five studies and even when participants knew they were playing with computer bots instead of other humans. Additionally, participants who made their choices behind the veil of ignorance were more likely to endorse their choices when reflecting on them later, compared to participants in the control condition.

The researchers also investigated factors that influenced decision-making behind the veil of ignorance. They found that considerations of fairness played a significant role in participants' choices, even when other factors like risk preferences were taken into account. Participants frequently mentioned fairness when explaining their choices.

"In our study, when people looked at the question of how AI systems should behave from an impartial point of view, rather than thinking about what is best for themselves, they more often preferred principles that focus on helping those who are less well off," the researchers told PsyPost. "If we extend these findings into decisions we have to make about AI today, we may say that prioritizing building systems that help those who are most disadvantaged in society is a good starting point."

Interestingly, participants' political affiliation did not significantly influence their choice of principles. In other words, whether someone identified as conservative, liberal, or belonging to any other political group did not strongly impact their decision-making process or their support for the chosen principles. This finding suggests that the veil of ignorance mechanism can transcend political biases.

"It was really interesting to see that political preferences did not substantially account for the values that people preferred for AI from behind a veil of ignorance," the researchers explained. "No matter where participants were on the political spectrum, when reasoning from behind the veil of ignorance these beliefs made little difference to what participants deemed fair, with political orientation not determining what principle participants ultimately settled on."

Overall, the study suggests that using the veil of ignorance can help identify fair principles for AI governance. It provides a way to consider different perspectives and prioritize the interests of the worst-off individuals in society. But the study, like all research, includes some limitations.

"Our study is only a part of the puzzle," the researchers said. "The overarching question of how to implement fair and inclusive processes to decide on values to encode into our technologies is still left open."

"Our study also asked about AI under very specific circumstances: in a distributional setting, specifically applying AI to a harvesting scenario where it acted as an assistant. It would be great to see how results may change when we change this to other specific AI applications. Finally, previous studies on the veil of ignorance were run in India, Sweden, and the USA; it would be good to see whether the results from our experiment replicate or vary across a wide range of regions and cultures, too."

The study, "Using the Veil of Ignorance to align AI systems with principles of justice", was authored by Laura Weidinger, Kevin R. McKee, Richard Everett, Saffron Huang, Tina O. Zhu, Martin J. Chadwick, Christopher Summerfield, and Iason Gabriel.

More here:
DeepMind scientists demonstrate the value of using the "veil of ignorance" to craft ethical principles for AI systems - PsyPost

DeepMind alum wants to use AI to speed the development of green materials – The Star Online

Ever since ChatGPT went viral last fall, companies have touted many ways artificial intelligence can make our lives easier. They've promised superhuman virtual assistants, tutors, lawyers and doctors.

What about a superhuman chemical engineer?

London-based startup Orbital Materials would like to create just that. The startup is working to apply generative AI, the method behind tools like ChatGPT, expressly to accelerate the development of clean energy technologies. Essentially, the idea is to make computer models powerful and sharp enough to identify the best formulas for products like sustainable jet fuel or batteries free of rare-earth minerals.

Jonathan Godwin, an Orbital Materials co-founder, imagines a system that's as accessible and effective as the software engineers use today to model designs for things like airplane wings and household furniture.

"That, historically, has just been too difficult for molecular science," he said.

ChatGPT works because it's adept at predicting text: here's the next word or sentence that makes sense. For the same idea to work in chemistry, an AI system would need to predict how a new molecule would behave, not just in a lab but in the real world.

Several researchers and companies have deployed AI to hunt for newer, greener materials. Symyx Technologies, a materials discovery company formed in the 1990s, wound down after a sale. More recent companies have gained traction making petrochemical alternatives and programming cells.

Still, for many materials needed to decarbonise the planet, the technology isn't there yet.

It can take decades for a new advanced material to move from discovery to the market. That timeline is way too slow for the businesses and nations looking to rapidly cut emissions as they race to meet net zero targets.

"That needs to happen in the next 10 years, or sooner," said Aaike van Vugt, co-founder of material science startup VSParticle.

AI researchers think they can help. Before launching Orbital Materials, Godwin spent three years researching advanced material discovery at DeepMind, Google's AI lab. That lab released AlphaFold, a model to predict protein structures that could speed up the search for new drugs and vaccines. That, coupled with the rapid takeoff of tools like ChatGPT, convinced him that AI would soon be capable of conquering the material world.

"What I thought would take 10 years was happening in a matter of 18 months," he said. "Things are getting better and better and better."

Godwin compares his method with Orbital Materials to AI image generators like Dall-E and Stable Diffusion. Those models are created using billions of online images so that when users type in a text prompt, a photorealistic creation appears. Orbital Materials plans to train models with loads of data on the molecular structure of materials. Type in some desired property and material (say, an alloy that can withstand very high heat), and the model spits out a proposed molecular formula.

In theory, this approach is effective because it can both imagine new molecules and measure how they will work, said Rafael Gomez-Bombarelli, an assistant professor at MIT, who advised Orbital Materials. (He said he is not an investor.)

Right now, many tech investors are prowling for companies that can turn a profit by improving greener material production. Thats particularly the case in Europe, where regulators are forcing manufacturers to lower carbon emissions or face stiff fines. The markets for advanced materials in sectors like renewable energy, transportation and agriculture are set to grow by tens of billions of dollars in the coming years.

Some researchers, like those at the University of Toronto, have set up "self-driving labs" that pair AI systems with robots to search for new materials at unparalleled speeds. Dutch startup VSParticle makes machinery used to develop components for gas sensors and green hydrogen.

"Think of it like a DNA sequencer in a genomics lab," said co-founder van Vugt, who believes his equipment can help shorten the 20-year time horizon of advanced materials to one year and, eventually, "a couple of months". His company is currently raising investment capital.

Orbital Materials, which raised US$4.8mil (RM22.40mil) in previously undisclosed initial funding, plans to turn its AI gaze first toward carbon capture. The startup is working on an algorithmic model that designs molecular sieves (tiny pellets installed within a capture device) that can sift CO2 and other noxious chemicals from emissions more efficiently than current methods. (Godwin said the startup, which has several AI researchers, plans to publish peer-reviewed results on this tech soon.) Carbon capture has failed to work at scale to date, though thanks to a slew of government incentives, particularly in the US, interest in deploying the technology is rapidly ramping up.

Eventually, Godwin said, Orbital Materials would like to move into areas like fuel and batteries. He imagines mirroring the business model of synthetic biology and drug discovery companies: develop the brainpower, then license out the software or novel materials to manufacturers. "It's going to take us a little bit of time to get to market," said Godwin. "But once you're there, it happens very quickly."

But getting the AI right is only half the battle. Actually making advanced materials in areas like battery and fuel production requires working with huge incumbent enterprises and messy supply chains. This can be even costlier than developing new drugs, argued MIT's Gomez-Bombarelli.

"The economics and de-risking make it just way harder," he said.

Heather Redman, a managing partner with Flying Fish Partners, which backed Orbital Materials, said most tech investors chasing the shiny penny of generative AI have failed to look at its applications outside of chatbots. She acknowledged the risks of startups working in the energy sector, but believes the US$1 trillion (RM4.67 trillion) potential of markets like batteries and carbon capture is worth the investing risk.

"We love big hills as long as there's a big gigantic market and opportunity at the top," she said.

Gomez-Bombarelli is aware of how big these hills can be. He helped start a company similar to Orbital Materials in 2015, called Calculario, which used AI and quantum chemistry to speed up the discovery process for a range of new materials. It didn't get enough traction and had to narrow its focus to the OLED industry.

"Maybe we didn't make our case," he said. "Or maybe the market wasn't ready."

Whether it is now is an open question. But there are encouraging signs. Computing certainly has improved. Newcomers might also have an easier time selling AI because would-be customers could more easily grasp the potential. Gomez-Bombarelli said the pitch is relatively simple: "Look at ChatGPT. We can do the same thing for chemistry." Bloomberg

Here is the original post:
DeepMind alum wants to use AI to speed the development of green materials - The Star Online

DeepMind Co-Founder’s Startup ‘Inflection AI’ Secures $1.3B in Funding from Microsoft and Nvidia – Alphab – Benzinga

June 30, 2023 8:51 AM | 1 min read

Inflection AI has raised $1.3 billion from Microsoft Corp (NASDAQ: MSFT) and Nvidia Corp (NASDAQ: NVDA), among others.

The new funding brings the total raised by the company to $1.525 billion.

Mustafa Suleyman, one of DeepMind's founders, set up the one-year-old artificial intelligence startup.


CEO Suleyman hired from DeepMind, Alphabet Inc (NASDAQ: GOOG) (NASDAQ: GOOGL) unit Google, OpenAI, and Microsoft.

The startup launched a chatbot called Pi, adding to the likes of OpenAI, Google, and Snap Inc (NYSE: SNAP), the Financial Times reports.


With Nvidia joining its investment round, the startup, co-founded by LinkedIn creator Reid Hoffman, said it has access to 22,000 Nvidia H100 GPUs, the most sought-after chips in the AI industry today, costing $40,000 apiece.

OpenAI reportedly sought $10 billion from Microsoft, a lead investor in the ChatGPT maker, at a $29 billion valuation.

Pi, Inflection's chatbot, can have personal conversations with users directly via an app or through text, or via Meta Platforms Inc (NASDAQ: META) services WhatsApp, Instagram, or Facebook.

At its May launch, Suleyman described the chatbot as having the persona of a sympathetic sounding board rather than trying to provide information. "The product is meant purely for casual conversation, which makes it safer and easier to control," Suleyman said.

Suleyman founded Inflection after he quit Google in 2022.

Suleyman left DeepMind in 2019 following an independent investigation into bullying and harassment accusations against him.

Photo via Wikimedia Commons

2023 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.

Continued here:
DeepMind Co-Founder's Startup 'Inflection AI' Secures $1.3B in Funding from Microsoft and Nvidia - Alphab - Benzinga