
Alhussein Fawzi – MIT Technology Review

Connor Coley, 29, developed open-source software that uses artificial intelligence to help discover and synthesize new molecules. The suite of tools, called ASKCOS, is used in production by more than a dozen pharmaceutical companies and tens of thousands of chemists to create new medicines, new materials, and more efficient industrial processes.

One of the largest bottlenecks in developing new molecules has long been identifying interesting candidates to test. This process has played out in more or less the same way for decades: make a small change to a known molecule, and then test the novel creation for its biological, chemical, or physical properties.

Coley's approach includes a form of generative AI for chemistry. A chemist flags which properties are of interest, and AI-driven algorithms suggest new molecules with the greatest potential to have those properties. The system does this by analyzing known molecules and their current properties, and then predicting how small structural changes are likely to result in new behaviors.
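As a rough illustration of this propose-and-score loop (a toy sketch, not Coley's actual method; ASKCOS works on molecular graphs with trained predictive models, and every name and scoring rule below is invented for the example):

```python
import random

def predict_property(molecule: str) -> float:
    """Toy stand-in for a learned property predictor: here, the score is
    just the fraction of oxygen atoms. A real model would be trained on
    known molecules and their measured properties."""
    return molecule.count("O") / len(molecule)

def mutate(molecule: str) -> str:
    """One small structural change: swap a random position for another
    atom symbol (a crude stand-in for a molecular-graph edit)."""
    i = random.randrange(len(molecule))
    return molecule[:i] + random.choice("CNOS") + molecule[i + 1:]

def propose_candidate(seed: str, n_rounds: int = 200) -> str:
    """Propose-and-score loop: keep a mutation only when the predicted
    property improves, so lab effort concentrates on likely winners."""
    best = seed
    for _ in range(n_rounds):
        candidate = mutate(best)
        if predict_property(candidate) > predict_property(best):
            best = candidate
    return best

random.seed(0)
hit = propose_candidate("CCCCCC")
print(hit, predict_property(hit))
```

The point of the sketch is the division of labor: the model does the cheap scoring, and only the surviving candidates go to the bench.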

As a result, chemists should spend less time testing candidates that never pan out. "The types of methods that we work on have led to factors of maybe two, three, maybe 10 [times] reduction in the number of different shots on goal you need to find something that works well," says Coley, who is now an assistant professor of chemical engineering and computer science at MIT.

Once it identifies the candidate molecules, Coley's software comes up with the best way to produce them. Even if chemists imagine or dream up a molecule, he says, figuring out how to synthesize it isn't trivial: "We still have to make it."

To that end, the system gives chemists a recipe of steps to follow that are likely to result in the highest yields. Coley's future work includes figuring out how to add laboratory robots to the mix, so that even more automated systems will be able to test and refine the proposed recipes by actually following them.
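A back-of-envelope calculation shows why ranking recipes by yield matters: the overall yield of a multi-step route is the product of its per-step yields, so a short route with solid steps can beat a longer route containing one weak step. The routes and numbers below are hypothetical, not ASKCOS output:

```python
# Each route is a list of (step_name, predicted_yield) pairs.
# End-to-end yield is the product of the per-step yields.

def overall_yield(route):
    """Multiply per-step yields to get the route's end-to-end yield."""
    total = 1.0
    for _, step_yield in route:
        total *= step_yield
    return total

routes = {
    "route A": [("nitration", 0.85), ("reduction", 0.90)],
    "route B": [("halogenation", 0.95), ("coupling", 0.70), ("deprotection", 0.92)],
}

best = max(routes, key=lambda name: overall_yield(routes[name]))
print(best, round(overall_yield(routes[best]), 3))  # route A 0.765
```

Route B's best step (0.95) cannot compensate for its weak coupling step (0.70), so the two-step route wins at roughly 76% versus 61%.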

Original post:
Alhussein Fawzi - MIT Technology Review

Read More..

From Synopsys to Google, New EDA Tools Apply Advanced AI to IC … – All About Circuits

For years, EDA companies have claimed artificial intelligence features in their IC design tools. In the past year, however, generative AI has undergone a dramatic evolution with platforms like ChatGPT, causing some designers to question whether previous EDA features still count as AI by today's standards.

Synopsys aims to keep pace with this accelerating field by unveiling a new extension to its Synopsys.ai EDA suite. The announcement follows news that Google's DeepMind is using AI to accelerate the design of Google's in-house chips. Both announcements indicate how advanced machine-learning algorithms are shaping IC development and how designers might put them to use.

Synopsys describes its new extension as an AI-driven analytics tool designed to span the entire integrated circuit development process, from initial design to manufacturing and testing. To this end, the Synopsys EDA Data Analytics solution offers several features that set it apart.

First, it provides comprehensive data aggregation capabilities, pulling in data from various stages of IC design, testing, and manufacturing. This gives designers a holistic view of the entire chip development lifecycle. The tool incorporates intelligence-guided debugging and optimization, which not only speeds up design closure but also minimizes project risks. This is particularly crucial in an industry where time to market can be a make-or-break factor.

Another standout feature of the extension is its focus on fabrication yield. This tool is designed to improve fab yield for faster ramp-up and more efficient high-volume manufacturing. Additionally, the tool can uncover silicon data outliers across the semiconductor supply chain, thereby improving chip quality, yield, and throughput.

Synopsys says the new tools can also uncover new opportunities in power, performance, and area (PPA). By leveraging advanced AI algorithms, the tool can analyze large volumes of heterogeneous, multi-domain data to accelerate root-cause analysis.

The news from Synopsys comes on the heels of a similar announcement from Google's parent company, Alphabet.

Recently, the group announced that it would be leveraging Google's DeepMind for AI-assisted chip design for use in its data centers. DeepMind uses a concept known as circuit neural networks to treat a circuit as if it were a neural network, with wires as edges and logic gates as nodes.

Then, using classical AI techniques like simulated annealing, DeepMind searches for the most efficient configurations, looking many steps into the future to improve circuit design. Utilizing advanced AI models like AlphaZero and MuZero, which are based on reinforcement learning, DeepMind has achieved "superhuman performance" in various circuit-design tasks.
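For intuition, simulated annealing accepts a worse configuration with a temperature-dependent probability, which lets the search escape local minima as the temperature cools. The sketch below is a generic toy (a 1-D gate ordering with a made-up wire-length cost), not DeepMind's system, which couples such search with learned models like AlphaZero and MuZero:

```python
import math
import random

def wire_cost(order):
    """Toy objective: squared distance between consecutively connected
    gates in a 1-D placement; lower means shorter wires."""
    return sum((order[i] - order[i + 1]) ** 2 for i in range(len(order) - 1))

def simulated_annealing(order, steps=5000, t0=10.0):
    """Swap two gate positions per step; accept worse layouts with
    probability exp(-delta / T) so the search can escape local minima."""
    cur = list(order)
    best = list(order)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9  # linear cooling schedule
        nxt = list(cur)
        i, j = random.sample(range(len(nxt)), 2)
        nxt[i], nxt[j] = nxt[j], nxt[i]
        delta = wire_cost(nxt) - wire_cost(cur)
        if delta < 0 or random.random() < math.exp(-delta / t):
            cur = nxt
            if wire_cost(cur) < wire_cost(best):
                best = list(cur)
    return best

random.seed(1)
start = [5, 1, 4, 0, 3, 2]
layout = simulated_annealing(start)
print(layout, wire_cost(layout))
```

Early on, high temperature makes almost any swap acceptable; as T falls, the search becomes increasingly greedy, settling into a low-cost ordering.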

While both Synopsys and Google's DeepMind are leveraging artificial intelligence to revolutionize chip design, their approaches and focus areas are distinct.

Synopsys' newly announced solution is part of its broader Synopsys.ai EDA suite, which aims to provide designers with an end-to-end, comprehensive toolset for the entire IC chip development lifecycle. These tools aggregate and analyze data across multiple domains to enable intelligent decision-making, speed up design closure, and improve fabrication yield.

DeepMind, on the other hand, takes a more specialized approach. It employs advanced AI models to tackle specific optimization problems within chip design. While highly effective, this approach is narrower in scope, focusing on individual aspects of the chip design process rather than offering a comprehensive, full-stack solution. Unlike Synopsys' tool, DeepMind's AI is used only for the internal optimization of Google's hardware.

Featured image (modified) used courtesy of Synopsys.

Read more here:
From Synopsys to Google, New EDA Tools Apply Advanced AI to IC ... - All About Circuits

Read More..

This is Why and How Google Will Kill its Business Model – Medium

The Rara Avis Reason Behind Their Intentions

If you can't beat it, join it.

That's how the saying goes, and that's precisely what our dear friend Google is doing regarding AI.

But, weirdly enough, Google is taking it to the extreme, purposely contributing to the demise of its ad-based revenue model, one of the most successful businesses in the history of capitalism and the cash cow behind its trillion-dollar business.

However, you shouldn't fear for Google's integrity; this was the plan all along.

This article was originally published days ago in my free weekly newsletter, TheTechOasis.

If you want to be up-to-date with the frenetic world of AI while also feeling inspired to take action or, at the very least, to be well-prepared for the future ahead of us, this is for you.

Subscribe below to become an AI leader among your peers and receive content not present on any other platform, including Medium:

Anyone who follows the AI industry will probably agree that we're approaching the death of Internet search as we know it.

And Google knows it too.

Undeniably, using ChatGPT or Claude is much quicker and more convenient than doing link-based searches.

Thus, Google had two options:

As Google's execs aren't dumb, they've naturally gone for the second option.

In fact, there's no company in the world right now more heavily focused on disrupting search with AI than, ironically, Google.

See the original post here:
This is Why and How Google Will Kill its Business Model - Medium

Read More..

The Coming Wave by Mustafa Suleyman review a tech tsunami – The Guardian

Science and nature books

The co-founder of DeepMind issues a terrifying warning about AI and synthetic biology but how seriously should we take it?

Scott Shapiro

On 22 February 1946, George Kennan, an American diplomat stationed in Moscow, dictated a 5,000-word cable to Washington. In this famous telegram, Kennan warned that the Soviet Union's commitment to communism meant that it was inherently expansionist, and urged the US government to resist any attempts by the Soviets to increase their influence. This strategy quickly became known as containment and defined American foreign policy for the next 40 years.

The Coming Wave is Suleyman's book-length warning about technological expansionism: in close to 300 pages, he sets out to persuade readers that artificial intelligence (AI) and synthetic biology (SB) threaten our very existence, and that we have only a narrow window within which to contain them before it's too late. Unlike communism during the cold war, however, AI and SB are not being forced on us. We willingly adopt them because they promise not only unprecedented wealth, but solutions to our most intractable problems: climate change, cancer, possibly even mortality. Suleyman sees the appeal, of course, claiming that these technologies will usher in a new dawn for humanity.

An entrepreneur and AI researcher who co-founded DeepMind in 2010, before it was acquired by Google in 2014, Suleyman is at his most compelling when illustrating the promises and perils of this new world. In breezy and sometimes breathless prose, he describes how human beings have finally managed to exert power over intelligence and life itself.

Take the AI revolution. Language models such as ChatGPT are just the beginning. Soon, Suleyman predicts, AI will discover miracle drugs, diagnose rare diseases, run warehouses, optimise traffic, and design sustainable cities. We will be able to tell a computer program to "make $1 million on Amazon in a few months" and it will carry out our instructions.

The problem is that the same technologies that allow us to cure a disease could be used to cause one which brings us to the truly terrifying parts of the book. Suleyman notes that the price of genetic sequencing has plummeted, while the ability to edit DNA with technologies such as Crispr has vastly improved. Soon, anyone will be able to set up a genetics lab in their garage. The temptation to manipulate the human genome, he predicts, will be immense.

Human mutants, however, are not the only horrors awaiting us. Suleyman envisions AI and SB joining forces to enable malicious actors to concoct novel pathogens. With a 4% transmissibility rate (lower than chickenpox) and 50% case fatality rate (about the same as Ebola), an AI-designed and SB-engineered virus could cause more than a billion deaths in a matter of months.

Despite these risks, Suleyman doubts any nation will make the effort to contain these technologies. States are too dependent on their economic benefits. This is the basic dilemma: we cannot afford not to build the very technology that might cause our extinction. Sound familiar?

The Coming Wave is not about the existential threat posed by superintelligent AIs. Suleyman thinks that merely smart AIs will wreak havoc precisely because they will vastly increase human agency in a very short period. Whether via AI-generated cyber-attacks, homebrewed pathogens, the loss of jobs due to technological change, or misinformation aggravating political instability, our institutions are not ready for this tsunami of tech.

He repeatedly tells us that "the wave is coming", that "the coming wave is coming", even that "the coming wave really is coming". I suppose living through the past 15 years of AI research, and becoming a multimillionaire in the process, would turn anyone into a believer. But if the past is anything to go by, AI is also known for its winters, when initial promise stalled and funding dried up for long periods. Suleyman disregards the real possibility that this will happen again, thereby giving us more time to adapt to and even stem the tide of social change.

But even if progress continues its frenetic pace, it is unlikely that societies will tolerate the ethical abuses Suleyman fears most. When a Chinese scientist revealed in 2018 that he had edited the genes of twin girls, he was sentenced to three years in prison, universally condemned, and there have been no similar reports since. The EU is set to prohibit certain forms of AI such as facial recognition in public spaces in its forthcoming AI Act. Normal legal and cultural pushback will probably slow the proliferation of the most disruptive and disturbing practices.

Despite claiming that the containment problem is the defining challenge of our era, Suleyman does not support a tech moratorium (he did just start a new AI company). Instead he sets out a series of proposals at the end of the book. They are unfortunately not reassuring.

For example, Suleyman suggests that AI companies spend 20% of R&D funds on safety research, but does not say why companies would divert capital away from rushing their new products to market. He advocates banning AI in political ads, but doing so would violate the first amendment to the US constitution. He proposes an international anti-proliferation treaty, but does not give us any indication of how it might be enforced. At one point, Suleyman hints that the US may need to coerce other countries to comply: "Some measure of anti-proliferation is necessary. And, yes, let's not shy away from the facts; that means real censorship, possibly beyond national borders." I don't know exactly what he means here, but I don't like the way it sounds.

Suleyman pushes these costly proposals despite conceding that his catastrophic scenarios are tail risks. Yes, the probability of doomsday is low, but the consequences would be so catastrophic that we must treat the possibility as a clear and present danger. One very large elephant in the room is climate change. Unlike the AI apocalypse that may happen in the future, a climate emergency is happening right now. This July was the hottest on record. Containing carbon, not AI, is the defining challenge of our era. Yet here, Suleyman is strikingly and conveniently optimistic. He believes that AI will solve the climate emergency. That is a happy thought, but if AI will solve the climate problem, why can't it solve the containment problem too?

If the book's predictions about AI are accurate, we can safely ignore its proposals. Wait a few years and we can just ask ChatGPT-5, -6, or -7 how to handle the coming wave.

Scott Shapiro is professor of law and philosophy at Yale and author of Fancy Bear Goes Phishing (Allen Lane). The Coming Wave by Mustafa Suleyman and Michael Bhaskar is published by Bodley Head (£25). To support the Guardian and Observer order your copy at guardianbookshop.com. Delivery charges may apply.


Go here to read the rest:
The Coming Wave by Mustafa Suleyman review a tech tsunami - The Guardian

Read More..

AI Can Be an Extraordinary Force for Goodif It’s Contained – WIRED

In a quaint Regency-era office overlooking London's Russell Square, I cofounded a company called DeepMind with two friends, Demis Hassabis and Shane Legg, in the summer of 2010. Our goal, one that still feels as ambitious and crazy and hopeful as it did back then, was to replicate the very thing that makes us unique as a species: our intelligence.

To achieve this, we would need to create a system that could imitate and then eventually outperform all human cognitive abilities, from vision and speech to planning and imagination, and ultimately empathy and creativity. Since such a system would benefit from the massively parallel processing of supercomputers and the explosion of vast new sources of data from across the open web, we knew that even modest progress toward this goal would have profound societal implications.

It certainly felt pretty far-out at the time.

But AI has been climbing the ladder of cognitive abilities for decades, and it now looks set to reach human-level performance across a very wide range of tasks within the next three years. That is a big claim, but if I'm even close to right, the implications are truly profound.

Further progress in one area accelerates the others in a chaotic and cross-catalyzing process beyond anyone's direct control. It was clear that if we or others were successful in replicating human intelligence, this wasn't just profitable business as usual but a seismic shift for humanity, inaugurating an era when unprecedented opportunities would be matched by unprecedented risks. Now, alongside a host of technologies including synthetic biology, robotics, and quantum computing, a wave of fast-developing and extremely capable AI is starting to break. What had, when we founded DeepMind, felt quixotic has become not just plausible but seemingly inevitable.

As a builder of these technologies, I believe they can deliver an extraordinary amount of good. But without what I call containment, every other aspect of a technology, every discussion of its ethical shortcomings, or the benefits it could bring, is inconsequential. I see containment as an interlocking set of technical, social, and legal mechanisms constraining and controlling technology, working at every possible level: a means, in theory, of evading the dilemma of how we can keep control of the most powerful technologies in history. We urgently need watertight answers for how the coming wave can be controlled and contained, how the safeguards and affordances of the democratic nation-state, critical to managing these technologies and yet threatened by them, can be maintained. Right now no one has such a plan. This indicates a future that none of us want, but it's one I fear is increasingly likely.

Facing immense ingrained incentives driving technology forward, containment is not, on the face of it, possible. And yet for all our sakes, containment must be possible.

It would seem that the key to containment is deft regulation on national and supranational levels, balancing the need to make progress alongside sensible safety constraints, spanning everything from tech giants and militaries to small university research groups and startups, tied up in a comprehensive, enforceable framework. We've done it before, so the argument goes; look at cars, planes, and medicines. Isn't this how we manage and contain the coming wave?

If only it were that simple. Regulation is essential. But regulation alone is not enough. Governments should, on the face of it, be better primed for managing novel risks and technologies than ever before. National budgets for such things are generally at record levels. Truth is, though, novel threats are just exceptionally difficult for any government to navigate. That's not a flaw with the idea of government; it's an assessment of the scale of the challenge before us. Governments fight the last war, the last pandemic, regulate the last wave. Regulators regulate for things they can anticipate.

See the article here:
AI Can Be an Extraordinary Force for Goodif It's Contained - WIRED

Read More..

Google to require disclosure of AI use in political ads – POLITICO

While the Federal Election Commission hasn't set rules on using AI in political campaign ads, in August it voted to seek public comment on whether to update its misinformation policy to cover deceptive AI ads.

The Google policy change also comes as Congress works on comprehensive legislation to set guardrails on AI; next week lawmakers are meeting with leaders in the generative AI space, including Sundar Pichai, CEO of Google, which owns AI subsidiary DeepMind.

The specifics: Google's latest rule update, which also applies to YouTube video ads, requires all verified advertisers to prominently disclose whether their ads contain synthetic content that "inauthentically depicts real or realistic-looking people or events." The company mandates that the disclosure be "clear and conspicuous" on the video, image or audio content. Such disclosure language could be "this video content was synthetically generated" or "this audio was computer generated," the company said.

A disclosure wouldn't be required if AI tools were used for editing techniques, like resizing or cropping, or for background edits that don't create realistic depictions of actual events.

Political ads that don't have disclosures will be blocked from running, or removed later if they evade initial detection, a Google spokesperson said; advertisers can appeal, or resubmit their ads with disclosures.

Elections worldwide: Google's policy also updates its existing election-ads rules in regions outside the U.S., including Europe, India and Brazil, all of which hold elections in 2024. It will also apply to advertisements using deepfakes (videos or images synthetically created to mislead), which are banned under the company's existing misrepresentation policy.

Facebook currently doesn't require disclosure of synthetic or AI-generated content in its ads policies. It does have a policy banning manipulated media in videos that are not advertisements, and bans the use of deepfakes.

Link:
Google to require disclosure of AI use in political ads - POLITICO

Read More..

Sharon Li – MIT Technology Review


Original post:
Sharon Li - MIT Technology Review

Read More..

Disinformation wars: The fight against fake news in the age of AI – New Scientist

IN OCTOBER 2021, Phil Howard, an internet researcher at the University of Oxford, was alerted to a preposterous story on social media. It alleged that the covid-19 pandemic was started by a shipment of Maine lobsters that arrived in Wuhan, China, days before the first outbreak. He and his colleagues spent months trying to track down the source and didn't get to the bottom of it, except that it probably originated in China, possibly through the state-owned TV channel CGTN.

"I felt my career had hit a new low," says Howard. "What was so ridiculous was the enormous effort that we needed to expose a ridiculous attempt to manipulate public opinion. I realised that I didn't want to do that work myself, so I decided to try and come up with an initiative that would do something about the problem in a systematic way."

Today, Howard is chair of a new organisation called the International Panel on the Information Environment, one of many initiatives pushing back against the pollution of the information ecosystem. Regulators, too, are finally lacing up their own boots after spending years sitting on their hands.

The stakes couldn't be higher, with the recent rise of generative artificial intelligence and its capacity to produce persuasive disinformation on an industrial scale. Many researchers are saying that the next two years are make or break in the information wars, as deep-pocketed bad actors escalate their disinformation campaigns while the good guys fight back. Which side prevails will determine the information environment and everything it shapes, from people's beliefs about vaccines to the outcomes of elections.

Read the original here:
Disinformation wars: The fight against fake news in the age of AI - New Scientist

Read More..

Thalassophobia (Fear Of The Ocean): Symptoms And Treatment – Forbes

Thalassophobia is an intense fear of deep or large bodies of water, like the ocean or the sea. It falls under a category of specific phobia, which is defined as an intense, irrational fear of something that poses little or no actual danger, and affects 9.1% of U.S. adults, according to the National Institute of Mental Health.

"A person would know they have thalassophobia because when they are exposed to deep open water or the ocean, whether in person or even just by thinking about it or seeing pictures, they would experience a number of possible physical and psychological symptoms," says Deborah Courtney, Ph.D., a licensed psychotherapist with a private practice in New York and member of the Forbes Health Advisory Board.

Thalassophobia is categorized under phobias of the natural environment, such as water or heights. Other specific phobia categories as defined by the Diagnostic and Statistical Manual of Mental Disorders (DSM-5-TR), the text U.S. mental health practitioners refer to for making mental health diagnoses, include:

Often thalassophobia (and most phobias) can be traced back to an early life trauma, which could be processed through psychodynamic therapy, an approach that "explores how early life experiences create templates that constrict how we live in the future," explains Sabrina Romanoff, Psy.D., a Harvard-trained clinical psychologist, professor and researcher based in New York and a member of the Forbes Health Advisory Board. "With therapy, people can learn to take risks by challenging those templates and revise them to be more aligned with their current reality and values," she says.

Continued here:
Thalassophobia (Fear Of The Ocean): Symptoms And Treatment - Forbes

Read More..

Pranav Rajpurkar – MIT Technology Review


View post:
Pranav Rajpurkar - MIT Technology Review

Read More..