Category Archives: Deep Mind
Deep Calls to Deep: Eight Years of Soul-Nurturing for Preachers – Virginia Theological Seminary
Deep Calls to Deep, a program to nourish working preachers, began in 2015, funded by a generous grant from the Lilly Endowment. Now, as the grant funding comes to an end this fall, it is a good opportunity to look back on what the program has accomplished.
The fundamental principle of Deep Calls to Deep is that preaching is soul-work, and that rejuvenating preachers depends on feeding the whole person: body, mind, and spirit. We offered this nourishment by focusing on four themes:
the spirituality of preaching (the relationship with God which is the foundation of preaching); nurturing the preaching imagination (cultivating the ability to encounter texts and the world creatively); embodiment (bringing the preacher's whole self to proclamation); and community (conveying the conviction that preaching is not a solitary activity but is nourished by relationships).
Since the program started there have been over 150 participants, who have come to VTS for on-campus residencies that offered time for sabbath, study, engagement with the four themes of the program, worship together, and fellowship. The participants also met for a year in small peer groups to preach for each other and receive feedback on their preaching. I have been so moved to see the diligence and passion that fuels these preachers in their vital work, and to see how the communities formed in Deep Calls to Deep have revitalized the participants.
Though the grant funding for this program comes to an end this fall, we are exploring ways for the work and principles of Deep Calls to Deep to continue. VTS is a seminary that values preaching and preachers; after all, the words on our chapel wall are "Go ye into all the world and preach the Gospel," and we will keep finding ways to nourish and support working preachers in the challenges and joys of proclamation. Stay tuned!
The Rev. Ruthanna Hooke, Ph.D., Director, Deep Calls to Deep
Link:
Deep Calls to Deep: Eight Years of Soul-Nurturing for Preachers - Virginia Theological Seminary
Multi-AI collaboration helps reasoning and factual accuracy in large … – MIT News
An age-old adage, often introduced to us during our formative years, is designed to nudge us beyond our self-centered, nascent minds: "Two heads are better than one." This proverb encourages collaborative thinking and highlights the potency of shared intellect.
Fast forward to 2023, and we find that this wisdom holds true even in the realm of artificial intelligence: Multiple language models, working in harmony, are better than one.
Recently, a team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) embodied this ancient wisdom within the frontier of modern technology. They introduced a strategy that leverages multiple AI systems to discuss and argue with each other to converge on a best-possible answer to a given question. This method empowers these expansive language models to heighten their adherence to factual data and refine their decision-making.
The crux of the problem with large language models (LLMs) lies in the inconsistency of their generated responses, leading to potential inaccuracies and flawed reasoning. This new approach lets each agent actively assess every other agent's responses, and uses this collective feedback to refine its own answer. In technical terms, the process consists of multiple rounds of response generation and critique. Each language model generates an answer to the given question, and then incorporates the feedback from all other agents to update its own response. This iterative cycle culminates in a final output from a majority vote across the models' solutions. It somewhat mirrors the dynamics of a group discussion where individuals contribute to reach a unified and well-reasoned conclusion.
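As a rough, hedged sketch of this round-based critique loop, the following Python outline shows the shape of the process. The query_model helper is a placeholder for whatever LLM API is being used (the method only needs text in, text out), and the prompt wording, round count, and plain majority vote are illustrative assumptions, not the paper's exact protocol.

from collections import Counter

def query_model(model, prompt: str) -> str:
    # Placeholder: plug in any text-in/text-out LLM call here.
    raise NotImplementedError

def multi_agent_debate(models, question: str, rounds: int = 2) -> str:
    # Round 0: each agent answers the question independently.
    answers = [query_model(m, f"Question: {question}\nGive your answer.") for m in models]
    # Debate rounds: each agent reads the others' answers and revises its own.
    for _ in range(rounds):
        revised = []
        for i, m in enumerate(models):
            others = "\n".join(a for j, a in enumerate(answers) if j != i)
            prompt = (
                f"Question: {question}\n"
                f"Answers from other agents:\n{others}\n"
                f"Your previous answer: {answers[i]}\n"
                "Critique the other answers and give an updated answer."
            )
            revised.append(query_model(m, prompt))
        answers = revised
    # Final output: majority vote over the agents' last answers
    # (for math problems, one would extract and vote on the final number).
    return Counter(answers).most_common(1)[0][0]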
One real strength of the approach lies in its seamless application to existing black-box models. As the methodology revolves around generating text, it can also be implemented across various LLMs without needing access to their internal workings. This simplicity, the team says, could help researchers and developers use the tool to improve the consistency and factual accuracy of language model outputs across the board.
"Employing a novel approach, we don't simply rely on a single AI model for answers. Instead, our process enlists a multitude of AI models, each bringing unique insights to tackle a question. Although their initial responses may seem truncated or may contain errors, these models can sharpen and improve their own answers by scrutinizing the responses offered by their counterparts," says Yilun Du, an MIT PhD student in electrical engineering and computer science, affiliate of MIT CSAIL, and lead author on a new paper about the work. "As these AI models engage in discourse and deliberation, they're better equipped to recognize and rectify issues, enhance their problem-solving abilities, and better verify the precision of their responses. Essentially, we're cultivating an environment that compels them to delve deeper into the crux of a problem. This stands in contrast to a single, solitary AI model, which often parrots content found on the internet. Our method, however, actively stimulates the AI models to craft more accurate and comprehensive solutions."
The research looked at mathematical problem-solving, including grade school and middle/high school math problems, and saw a significant boost in performance through the multi-agent debate process. The language models also demonstrated enhanced abilities to generate accurate arithmetic evaluations, illustrating potential across different domains.
The method can also help address the issue of "hallucinations" that often plague language models. By designing an environment where agents critique each other's responses, the approach incentivizes them to avoid spitting out random information and to prioritize factual accuracy.
Beyond its application to language models, the approach could also be used to integrate diverse models with specialized capabilities. By establishing a decentralized system where multiple agents interact and debate, these comprehensive and efficient problem-solving abilities could potentially be applied across various modalities like speech, video, or text.
While the methodology yielded encouraging results, the researchers say that existing language models may face challenges with processing very long contexts, and that the critique abilities may not be as refined as desired. Furthermore, the multi-agent debate format, inspired by human group interaction, has yet to incorporate the more complex forms of discussion that contribute to intelligent collective decision-making, a crucial area for future exploration, the team says. Advancing the technique could involve a deeper understanding of the computational foundations behind human debates and discussions, and using those models to enhance or complement existing LLMs.
"Not only does this approach offer a pathway to elevate the performance of existing language models, but it also presents an automatic means of self-improvement. By utilizing the debate process as supervised data, language models can enhance their factuality and reasoning autonomously, reducing reliance on human feedback and offering a scalable approach to self-improvement," says Du. "As researchers continue to refine and explore this approach, we can get closer to a future where language models not only mimic human-like language but also exhibit more systematic and reliable thinking, forging a new era of language understanding and application."
"It makes so much sense to use a deliberative process to improve the model's overall output, and it's a big step forward from chain-of-thought prompting," says Anca Dragan, associate professor at the University of California at Berkeleys Department of Electrical Engineering and Computer Sciences, who was not involved in the work. "I'm excited about where this can go next. Can people better judge the answers coming out of LLMs when they see the deliberation, whether or not it converges? Can people arrive at better answers by themselves deliberating with an LLM? Can a similar idea be used to help a user probe a LLM's answer in order to arrive at a better one?"
Du wrote the paper with three CSAIL affiliates: Shuang Li SM '20, PhD '23; MIT professor of electrical engineering and computer science Antonio Torralba; and MIT professor of computational cognitive science and Center for Brains, Minds, and Machines member Joshua Tenenbaum. Google DeepMind researcher Igor Mordatch was also a co-author.
The rest is here:
Multi-AI collaboration helps reasoning and factual accuracy in large ... - MIT News
You Don’t Have to Pick a Winner in Artificial Intelligence (AI). Here’s … – The Motley Fool
There's no shortage of hype over artificial intelligence (AI) this year.
The launch of OpenAI's ChatGPT in late 2022 made it clear to tech CEOs and individual investors alike how powerful and potentially transformative the new generative AI technology is.
Nearly every tech company seems to be talking up the potential of AI, and AI stocks have consequently skyrocketed this year. In some ways, the impact of the new wave of demand for AI is already being felt. Nvidia's revenue nearly doubled in its most recent quarter on soaring demand for AI chips, and the company posted another round of blowout guidance for its fiscal third quarter. Oracle, meanwhile, is seeing strong growth in its cloud infrastructure division after spending billions on chips to power its superclusters.
However, if you're thinking of dumping your cash into Nvidia or another AI stock, you should think again. This is still a brand-new, emerging industry, and most of the companies touting the potential of AI have yet to see a significant financial benefit.
As AI stocks soar, investors should also be mindful of the dot-com bubble, when the introduction of the World Wide Web unleashed a similarly transformative technology on the stock market. Many of those tech stocks ultimately went bust, while only a small number went on to be big winners.
Rather than trying to pick a winner in AI, there are better ways to approach the emerging technology.
One of the best ways to get diversification in a particular sector or a class of stocks is through a basket approach, which means buying several stocks so that you're not overly exposed to one particular company. If you'd like to invest a substantial percentage of your holdings in AI, this is a more balanced approach versus buying a single stock.
AI is a broad category, so there are a lot of different ways you could put together a basket. One way to do it might be by taking a few stocks from each of the subsectors that are exposed to AI.
For example, you'll want to invest in semiconductor stocks. Nvidia is an easy one, and you could consider another one or two like Advanced Micro Devices, Broadcom, or even Taiwan Semiconductor to get exposure to semiconductor manufacturing.
Big tech is another subsector that's worth including. Here, Microsoft and Alphabet are obvious choices given Microsoft's partnership with OpenAI and Alphabet's launch of Bard and its other AI investments, including Google DeepMind, its AI research lab.
Finally, you may want to consider adding stocks that have put AI at the center of their business models, such as Upstart in consumer loans, Lemonade in insurance, or C3.ai in application software.
If you'd like to have the work of managing a basket of AI stocks done for you, the easiest way to do that is by buying an AI ETF. The largest AI ETF on the market is the Global X Robotics & Artificial Intelligence ETF (BOTZ -0.55%), with net assets of $2.2 billion.
BOTZ's biggest holding is currently Nvidia with 14.1% of the ETF's assets. Other top holdings include medical device maker Intuitive Surgical (9.7%); ABB, a Swiss company that creates automation and robotics products used in utilities and infrastructure (8.2%); and Keyence, a Japanese company that builds factory automation products like sensors and scanners (6.9%).
It's easy to be blinded by the opportunity in AI, but you shouldn't invest in the space without considering the risks. Keep valuations and realistic prospects in mind as you decide which strategy best suits you in the sector, and remember the lessons of the dot-com bubble and other more recent bubbles in the stock market.
While some AI stocks could be big winners, others will almost certainly be busts.
Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Jeremy Bowman has positions in Lemonade and Upstart. The Motley Fool has positions in and recommends ABB, Advanced Micro Devices, Alphabet, Intuitive Surgical, Lemonade, Microsoft, Nvidia, Oracle, Taiwan Semiconductor Manufacturing, and Upstart. The Motley Fool recommends Broadcom and C3.ai. The Motley Fool has a disclosure policy.
Continue reading here:
You Don't Have to Pick a Winner in Artificial Intelligence (AI). Here's ... - The Motley Fool
Why generative AI is ‘alchemy,’ not science – VentureBeat
A New York Times article this morning, titled "How to Tell if Your AI Is Conscious," says that in a new report, scientists offer a list of measurable qualities based on a brand-new science of consciousness.
The article immediately jumped out at me, as it was published just a few days after I had a long chat with Thomas Krendl Gilbert, a machine ethicist who, among other things, has long studied the intersection of science and politics. Gilbert recently launched a new podcast, called The Retort, along with Hugging Face researcher Nathan Lambert, with an inaugural episode that pushes back on the idea of today's AI as a truly scientific endeavor.
Gilbert maintains that much of today's AI research cannot reasonably be called science at all. Instead, it can be viewed as a new form of alchemy, that is, the medieval forerunner of chemistry, which can also be defined as a seemingly magical process of transformation.
Many critics of deep learning and of large language models, including those who built them, sometimes refer to AI as a form of alchemy, Gilbert told me on a video call. What they mean by that, he explained, is that it's not scientific, in the sense that it's not rigorous or experimental. But he added that he actually means something more literal when he says that AI is alchemy.
The people building it actually think that what they're doing is magical, he said. And that's rooted in a lot of metaphors, ideas that have now filtered into public discourse over the past several months, like AGI and superintelligence. The prevailing idea, he explained, is that intelligence itself is scalar, depending only on the amount of data thrown at a model and the computational limits of the model itself.
But, he emphasized, like alchemy, much of today's AI research is not necessarily trying to be what we know as science, either. The practice of alchemy historically had no peer review or public sharing of results, for example. Much of today's closed AI research does not, either.
It was very secretive, and frankly, that's how AI works right now, he said. It's largely a matter of assuming magical properties about the amount of intelligence that is implicit in the structure of the internet, and then building computation and structuring it such that you can distill that web of knowledge that we've all been building for decades now, and then seeing what comes out.
I was particularly interested in Gilbert's thoughts on alchemy given the current AI discourse, which seems to me to include some doozies of cognitive dissonance: There was the Senate's closed-door AI Insight Forum, where Elon Musk called for AI regulators to serve as a referee to keep AI safe, while actively working on using AI to put microchips in human brains and making humans a multiplanetary species. There was the EU parliament saying that AI extinction risk should be a global priority, while at the same time, OpenAI CEO Sam Altman said hallucinations can be seen as a positive part of the magic of generative AI and that superintelligence is simply an engineering problem.
And there was DeepMind co-founder Mustafa Suleyman, who would not explain to MIT Technology Review how his company Inflection's Pi manages to refrain from toxic output ("I'm not going to go into too many details because it's sensitive," he said) while calling on governments to regulate AI and appoint cabinet-level tech ministers.
It's enough to make my head spin, but Gilbert's take on AI as alchemy put these seemingly opposing ideas into perspective.
Gilbert clarified that he isn't saying that the notion of AI as alchemy is wrong, but that its lack of scientific rigor needs to be called what it really is.
They're building systems that are arbitrarily intelligent, not intelligent in the way that humans are (whatever that means) but just arbitrarily intelligent, he explained. That's not a well-framed problem, because it's assuming something about intelligence that we have very little or no evidence of; that is an inherently mystical or supernatural claim.
AI builders, he continued, don't need to know what the mechanisms are that make the technology work, but they are interested enough and motivated enough, and frankly, also have the resources to just play with it.
The magic of generative AI, he added, doesn't come from the model. The magic comes from the way the model is matched to the interface. The magic people like so much is that I feel like I'm talking to a machine when I play with ChatGPT. That's not a property of the model; that's a property of ChatGPT, of the interface.
In support of this idea, researchers at Alphabet's AI division DeepMind recently published work showing that AI can optimize its own prompts and performs better when prompted to "take a deep breath and work on this problem step-by-step," though the researchers are unclear exactly why this incantation works as well as it does (especially given the fact that an AI model does not actually breathe at all).
One of the major consequences of the alchemy of AI comes when it intersects with politics, as it does now with discussions around AI regulation in the US and the EU, said Gilbert.
In politics, what we're trying to do is articulate a notion of what is good to do, to establish the grounds for consensus; that is fundamentally what's at stake in the hearings right now, he said. We have a very rarefied world of AI builders and engineers, who are engaged in the stance of articulating what they're doing and why it matters to the people that we have elected to represent our political interests.
The problem is that we can only guess at the work of Big Tech AI builders, he said. We're living in a weird moment, he explained, where the metaphors that compare AI to human intelligence are still being used, but the mechanisms are not remotely well understood.
In AI, we don't really know what the mechanisms are for these models, but we still talk about them like they're intelligent. We still talk about them like there's some kind of anthropological ground that is being uncovered, and there's truly no basis for that.
But while there is no rigorous scientific evidence backing many of the claims of existential risk from AI, that doesn't mean they aren't worthy of investigation, he cautioned. In fact, I would argue that they're highly worthy of investigation scientifically, [but] when those things start to be framed as a political project or a political priority, that's a different realm of significance.
Meanwhile, the open source generative AI movement, led by the likes of Meta Platforms with its Llama models, along with other smaller startups such as Anyscale and Deci, is offering researchers, technologists, policymakers and prospective customers a clearer window onto the inner workings of the technology. But translating the research into non-technical terminology that laypeople, including lawmakers, can understand remains a significant challenge.
That is the key problem with the fact that AI, as alchemy and not science, has become a political project, Gilbert explained.
It's a laxity of public rigor, combined with a certain kind of willingness to keep your cards close to your chest, but then say whatever you want about your cards in public, with no robust interface for interrelating the two, he said.
Ultimately, he said, the current alchemy of AI can be seen as tragic.
There is a kind of brilliance in the prognostication, but it's not clearly matched to a regime of accountability, he said. And without accountability, you get neither good politics nor good science.
Read more here:
Why generative AI is 'alchemy,' not science - VentureBeat
Q&A with AI expert and DeepMind co-founder Mustafa Suleyman: ‘Things are about to be very different’ – Yahoo Finance
In a world filled with newly minted AI experts, Mustafa Suleyman is one of the OGs.
In 2010, Suleyman co-founded AI startup DeepMind, which we now know as Google (GOOG, GOOGL) DeepMind. Adjusted for inflation, Google's 2014 acquisition of DeepMind reportedly would have been worth more than half a billion dollars today.
And Suleyman has kept going. In March 2022, after years at Google and VC firm Greylock Partners, Suleyman teamed up with Reid Hoffman (LinkedIn co-founder, former COO at PayPal, and Silicon Valley legend) to found Inflection AI.
Already, Inflection AI has gained attention from investors, this summer raising $1.3 billion from big names such as Microsoft (MSFT) and Nvidia (NVDA).
Suleyman also found the time to write a book, called "The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma."
Yahoo Finance spoke with Suleyman on Friday in the midst of his book tour about AI's present and future. Inflection AI's chatbot, Pi, also made an appearance.
The following conversation has been edited for length and clarity.
Mustafa Suleyman, co-founder and CEO of Inflection AI, in Toronto, Canada. (Piaras Mdheach/Sportsfile for Collision via Getty Images)
You've been working in and thinking about AI for decades. Why write this book now?
In the last two or three years, we've been able to see the impact of multiple compounding exponentials. ... We're at an inflection point, and that's why I wrote the book. Because it's just quite obvious that things are about to be very different in the next five to 10 years.
Hundreds of millions of people will get access to intelligence, which is going to become a commodity. It's going to be cheap and easy to have an aid in your pocket. It'll be a friendly companion, but it'll also be a teacher. It'll be a coach. It'll be a scheduler, an organizer, a therapist, and an adviser. That's going to change everything.
I want to start by looking at the big picture. This AI wave is coming whether we like it or not, so how should we think about it?
We've always faced new technologies that, at first, seem really daunting and as though they're going to upend everything in a bad way. When airplanes first arrived, people thought they were completely insane, and that they'd always be really dangerous and unreliable. It took many years to get widespread adoption, for them to become safe enough that people feel comfortable on them. We're really just getting adjusted to what these new technologies can do, how they can help, what their risks are, and managing a new type of risk.
A spectator takes photos of a humanoid robot at the 2023 World Robot Conference in Beijing, China, Aug. 17, 2023. (Costfoto/NurPhoto via Getty Images)
So, in essence, what's a good outcome for AI?
A good outcome is one in which we manage the downsides and that we unleash this technology to deliver radical abundance. Food production, energy production, materials production, transportation, healthcare, and education are going to get radically cheaper in the next 30 years.
Unfold that trajectory over the next 20 to 30 years, and there's very good reason to be optimistic on all fronts. ... The real challenge for us is going to be how do we manage abundance? How do we handle the distribution and govern this new power and make sure it remains accountable? But the upside is unbelievable.
What does the negative outcome for AI look like, where the risks run away from us?
The risk is one of the proliferation of power. We give up power to the nation-state, and in return, we ask the nation-state to provide for us. The challenge is we're now putting the power to have an impact in the hands of hundreds of millions of people.
As in the last wave, the last 20 years, we've put the power to broadcast in the hands of millions of people. Anyone can have a Twitter account or Instagram or TikTok. Anyone can have a podcast, have a blog. That's been an amazing achievement of civilization, having the freedom to speak without having access to traditional news institutions.
Now, that same trajectory is going to take place for the ability to act in the world. ... People are going to have more agency, more power, and more influence with less capital investment. That's the nature of our globalized world, but this is an additional fragility amplifier on top of that.
To that end, tell me about your take on AI regulation, which is a key part of this book, particularly the idea of "containment."
Containment is just a simple idea that says that we should always have control over the things that we create, and they should always operate within pre-defined boundaries in repeatable, predictable ways. It should always be accountable to owners, creators, users, and ultimately democratic governments.
It's kind of just restating the obvious fact that technology shouldn't get out of our control. The whole effort here is to place guardrails and permanent constraints around technology so society collectively makes decisions about how it impacts our world. ... If we just leave this to the market, it's going to produce forces beyond our control, and that's the thing that needs to shift.
I'm both really compelled by the idea of containment and skeptical of it. Can you speak to why you believe it's possible to contain this evolution?
It's extremely tough. We haven't really done it before. When there's a technology that's useful and massively impactful in our world, it has always proliferated, it has always spread. It would be an unprecedented challenge, but if you look back through history, there are countless moments when we have confronted seemingly unprecedented moments very successfully.
Roads are a great example; they're actually incredibly regulated. Every inch of that infrastructure is regulated, including the intensity of the lumens of the light bulb and the width of every lane, everything from driver training to airbags to the car itself to emissions. You can't drive through any field or sidewalk, right? It sounds obvious, but that's what we mean by containing technology in a way.
President Joe Biden speaks about artificial intelligence in the Roosevelt Room of the White House in Washington, D.C., on July 21, 2023, with leading AI experts, including Inflection AI's Mustafa Suleyman. (ANDREW CABALLERO-REYNOLDS/AFP via Getty Images)
You've identified a new Turing Test. Explain.
The new Turing Test is trying to measure not just what AI can say but what it can do. The modern Turing Test I've proposed is to say that an AI, with a $100,000 investment, can go off and make one million dollars in three months.
The idea isn't to make money, but the goal is really to use money as a measure of progress towards a more powerful system. ... AI would have to go and invent a new product, use an image generator to produce a bunch of examples, produce a blueprint for its design, identify a manufacturer, call the manufacturer to negotiate, email back and forth on the contract, get the product shipped to customers, and on and on. That's a pretty challenging set of tasks.
Do you think AI is a job killer or a job creator?
It's going to change the way that we work, so fundamentally that's going to get rid of some roles. But it's also likely to create a lot of brand-new roles.
That's what we have to use taxation for. We have to manage the transition, so people will have time to find other work, train, learn new skills, and not get cut out of employment entirely. ... If you want to speed something up, then reduce the taxes. If you want to slow something down, then increase the taxes. So we do have a lever for containment, and that's adding friction with new taxes.
The final question, I gave to Inflection AI's chatbot, Pi: Should humans be afraid of AI?
Pi: That's a really common question, and it's understandable why people might be worried about AI. After all, we've seen movies like Terminator or Ex Machina where AI goes rogue and harms humans. But I think it's important to keep in mind that these are just works of fiction. In reality, AI is developed by humans and is designed to be helpful, not harmful. Sure, there are risks associated with AI, but I think they can be mitigated with proper regulations and oversight. Ultimately, I think AI can be a force for good if we develop and use it responsibly.
An excerpt from Allie Garfinkle's conversation with Inflection AI's Pi chatbot, Sept. 9, 2023.
Allie Garfinkle is a Senior Tech Reporter at Yahoo Finance. Follow her on Twitter at @agarfinks and on LinkedIn.
Click here for the latest technology news that will impact the stock market.
Read the latest financial and business news from Yahoo Finance
Continued here:
Q&A with AI expert and DeepMind co-founder Mustafa Suleyman: 'Things are about to be very different' - Yahoo Finance
Enhanced Robotic Control with DeepMind RT-2 – RTInsights
RT-2 utilizes chain-of-thought reasoning, allowing it to make multi-stage decisions, such as selecting alternative tools or beverages based on specific situations.
Google DeepMind unveiled Robotic Transformer 2 (RT-2), a vision-language-action (VLA) model designed to enhance robotic control through plain language instructions. Harnessing data from the Internet, RT-2 aims to foster robots that can adeptly navigate human environments, akin to well-known fictional robot companions from science fiction.
RT-2, drawing inspiration from how humans learn by reading and observing, relies on a vast language model akin to ChatGPT, which is trained using online text and images. This allows RT-2 to achieve the feat of generalization, enabling it to recognize patterns and perform untrained tasks.
Google showcased RT-2's proficiency by demonstrating its ability to identify and discard trash without prior training. This includes recognizing potentially ambiguous items like food packaging as trash. A separate test had a robot powered by RT-2 successfully pinpoint a dinosaur figurine when instructed to "pick up the extinct animal." These capabilities are transformative as, traditionally, robotic training has been labor-intensive, relying on extensive manual data acquisition.
RT-2's prowess can be attributed to Google DeepMind's adoption of transformer AI models, celebrated for their generalization capabilities. The technology is built on Google's prior AI innovations, such as the Pathways Language and Image model (PaLI-X) and the Pathways Language model Embodied (PaLM-E). Moreover, RT-2 was co-trained using data from its precursor, RT-1, gathered over 17 months.
The RT-2 framework refines a pre-trained VLM model with robotics and web data, leading to a model that processes camera images from robots and predicts subsequent actions. Interestingly, actions are represented as tokens, akin to word fragments, aiding in the robot's control. This method, applied to RT-1, was also employed for RT-2, converting actions into symbolic string representations to facilitate new skill acquisition.
Additionally, RT-2 utilizes chain-of-thought reasoning, allowing it to make multi-stage decisions, such as selecting alternative tools or beverages based on specific situations. Comparative tests revealed RT-2's stellar performance in new situations, recording a 62% success rate against RT-1's 32%.
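To make the "actions as tokens" idea concrete, here is a small, hedged Python sketch of turning a continuous robot action into a symbolic string. The field names and the 256-bin discretization are illustrative assumptions, not RT-2's exact scheme.

def discretize(value, low, high, bins=256):
    # Map a continuous value in [low, high] to an integer token in [0, bins - 1].
    value = max(low, min(high, value))
    return int(round((value - low) / (high - low) * (bins - 1)))

def action_to_token_string(action):
    # Encode an action dict as a space-separated string of integer tokens,
    # which a language model can emit and a controller can decode back.
    tokens = [
        discretize(action["dx"], -0.1, 0.1),      # end-effector translation (meters)
        discretize(action["dy"], -0.1, 0.1),
        discretize(action["dz"], -0.1, 0.1),
        discretize(action["gripper"], 0.0, 1.0),  # gripper open/close
    ]
    return " ".join(str(t) for t in tokens)

print(action_to_token_string({"dx": 0.02, "dy": -0.05, "dz": 0.0, "gripper": 1.0}))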
However, the model has its limitations. Although web data enhances generalization over concepts, it cannot bestow the robot with new physical skills it hasn't practiced. Google acknowledges these constraints and the considerable research journey ahead but remains optimistic, viewing RT-2 as a significant stride towards achieving general-purpose robots.
View original post here:
Enhanced Robotic Control with DeepMind RT-2 - RTInsights
From Synopsys to Google, New EDA Tools Apply Advanced AI to IC … – All About Circuits
For years, EDA companies have claimed artificial intelligence features in their IC design tools. In the past year, however, generative AI has undergone a dramatic evolution with platforms like ChatGPT, causing some designers to question whether previous EDA features still count as AI by today's standards.
Synopsys aims to keep pace with this accelerating field by unveiling a new extension to its Synopsys.ai EDA suite. This announcement follows news from Google's DeepMind, which is using AI to accelerate its in-house chip designs. Both of these announcements indicate how advanced machine learning algorithms are shaping IC development and how they might be used as a tool for designers in such fields.
Synopsys describes its new extension as an AI-driven analytics tool designed to span the entire integrated circuit development process, from initial design to manufacturing and testing. To this end, the Synopsys EDA Data Analytics solution offers several features that set it apart.
First, it provides comprehensive data aggregation capabilities, pulling in data from various stages of IC design, testing, and manufacturing. This gives designers a holistic view of the entire chip development lifecycle. The tool incorporates intelligence-guided debugging and optimization, which not only speeds up design closure but also minimizes project risks. This is particularly crucial in an industry where time to market can be a make-or-break factor.
Another standout feature of the extension is its focus on fabrication yield. This tool is designed to improve fab yield for faster ramp-up and more efficient high-volume manufacturing. Additionally, the tool can uncover silicon data outliers across the semiconductor supply chain, thereby improving chip quality, yield, and throughput.
Synopsys says the new tools can also uncover new opportunities in power, performance, and area (PPA). By leveraging advanced AI algorithms, the tool can analyze large volumes of heterogeneous, multi-domain data to accelerate root-cause analysis.
The news from Synopsys comes on the heels of a similar announcement from Google's parent company, Alphabet.
Recently, the group announced that it would be leveraging Google's DeepMind for AI-assisted chip design for use in its data centers. DeepMind uses a concept known as circuit neural networks to treat a circuit as if it were a neural network, turning edges into wires and nodes into logic gates.
Then, using classical AI techniques like simulated annealing, DeepMind searches for the most efficient configurations, looking many steps into the future to improve circuit design. Utilizing advanced AI models like AlphaZero and MuZero, which are based on reinforcement learning, DeepMind has achieved "superhuman performance" in various circuit-design tasks.
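For readers unfamiliar with simulated annealing, here is a generic, hedged Python sketch of the classical technique mentioned above, applied to a toy circuit configuration. The cost function and mutation move are placeholders; they are not DeepMind's actual models or objectives.

import math
import random

def anneal(initial, cost, mutate, steps=10_000, t_start=1.0, t_end=1e-3):
    # Generic simulated annealing: explore configurations while gradually
    # lowering the temperature, accepting worse moves with decreasing probability.
    current, best = initial, initial
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # exponential cooling
        candidate = mutate(current)
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
    return best

def flip_one_gate(config):
    # Toy move: flip one bit in a list that stands in for gate on/off choices.
    i = random.randrange(len(config))
    new = list(config)
    new[i] ^= 1
    return new

# Toy usage: minimize the number of active "gates" in a 32-element configuration.
start = [random.randint(0, 1) for _ in range(32)]
best = anneal(start, cost=sum, mutate=flip_one_gate)
print(sum(best))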
While both Synopsys and Google's DeepMind are leveraging artificial intelligence to revolutionize chip design, their approaches and focus areas are distinct.
Synopsys' newly announced solution is part of its broader Synopsys.ai EDA suite, which aims to provide designers with an end-to-end, comprehensive toolset for the entire IC chip development lifecycle. These tools aggregate and analyze data across multiple domains to enable intelligent decision-making, speed up design closure, and improve fabrication yield.
DeepMind, on the other hand, takes a more specialized approach. It employs advanced AI models to tackle specific optimization problems within chip design. While highly effective, this approach is more narrow in scope, focusing on individual aspects of the chip design process rather than offering a comprehensive, full-stack solution. Unlike Synopsys' tool, DeepMind's AI is only used for the internal optimization of Google's hardware.
Featured image (modified) used courtesy of Synopsys.
Read more here:
From Synopsys to Google, New EDA Tools Apply Advanced AI to IC ... - All About Circuits
Alhussein Fawzi – MIT Technology Review
Connor Coley, 29, developed open-source software that uses artificial intelligence to help discover and synthesize new molecules. The suite of tools, called ASKCOS, is used in production by over a dozen pharmaceutical companies, and tens of thousands of chemists, to create new medicines, new materials, and more efficient industrial processes.
One of the largest bottlenecks in developing new molecules has long been identifying interesting candidates to test. This process has played out in more or less the same way for decades: make a small change to a known molecule, and then test the novel creation for its biological, chemical, or physical properties.
Coley's approach includes a form of generative AI for chemistry. A chemist flags which properties are of interest, and AI-driven algorithms suggest new molecules with the greatest potential to have those properties. The system does this by analyzing known molecules and their current properties, and then predicting how small structural changes are likely to result in new behaviors.
As a result, chemists should spend less time testing candidates that never pan out. "The types of methods that we work on have led to factors of maybe two, three, maybe 10 [times] reduction in the number of different shots on goal you need to find something that works well," says Coley, who is now an assistant professor of chemical engineering and computer science at MIT.
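As a rough illustration of that property-guided loop, here is a minimal Python sketch. The predict_property scorer and the character-level "mutation" are hypothetical stand-ins for the trained models and chemistry-aware edits a system like ASKCOS would use; none of the names here come from the actual software.

import random

def predict_property(molecule: str) -> float:
    # Hypothetical property predictor; a real system would use a trained model.
    return (sum(ord(ch) for ch in molecule) % 100) / 100.0

def mutate(molecule: str) -> str:
    # Make a small "structural" change; a real system would apply
    # chemically valid edits to a molecular graph, not character swaps.
    i = random.randrange(len(molecule))
    return molecule[:i] + random.choice("CNOS") + molecule[i + 1:]

def suggest_candidates(seed: str, n_rounds: int = 50, keep: int = 5):
    # Propose variants of a known molecule and rank them by predicted property.
    candidates = {seed}
    for _ in range(n_rounds):
        candidates.add(mutate(random.choice(sorted(candidates))))
    return sorted(candidates, key=predict_property, reverse=True)[:keep]

print(suggest_candidates("CCOCCN"))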
Once it identifies the candidate molecules, Coley's software comes up with the best way to produce them. Even if chemists imagine or dream up a molecule, he says, figuring out how to synthesize something isn't trivial: "We still have to make it."
To that end, the system gives chemists a recipe of steps to follow that are likely to result in the highest yields. Coley's future work includes figuring out how to add laboratory robots to the mix, so that even more automated systems will be able to test and refine the proposed recipes by actually following them.
Original post:
Alhussein Fawzi - MIT Technology Review
Sundar Pichai on Google’s AI, Microsoft’s AI, OpenAI, and Did We … – WIRED
We're talking about AI in a very nuts-and-bolts way, but a lot of the discussion centers on whether it will ultimately be a utopian boon or the end of humanity. What's your stance on those long-term questions?
AI is one of the most profound technologies we will ever work on. There are short-term risks, midterm risks, and long-term risks. It's important to take all those concerns seriously, but you have to balance where you put your resources depending on the stage you're in. In the near term, state-of-the-art LLMs have hallucination problems: they can make up things. There are areas where that's appropriate, like creatively imagining names for your dog, but not "what's the right medicine dosage for a 3-year-old?" So right now, responsibility is about testing it for safety and ensuring it doesn't harm privacy and introduce bias. In the medium term, I worry about whether AI displaces or augments the labor market. There will be areas where it will be a disruptive force. And there are long-term risks around developing powerful intelligent agents. How do we make sure they are aligned to human values? How do we stay in control of them? To me, they are all valid things.
Have you seen the movie Oppenheimer?
I'm actually reading the book. I'm a big fan of reading the book before watching the movie.
I ask because you are one of the people with the most influence on a powerful and potentially dangerous technology. Does the Oppenheimer story touch you in that way?
All of us who are in one shape or another working on a powerful technology, not just AI but genetics like CRISPR, have to be responsible. You have to make sure you're an important part of the debate over these things. You want to learn from history where you can, obviously.
Google is an enormous company. Current and former employees complain that the bureaucracy and caution have slowed them down. All eight authors of the influential Transformers paper, which you cite in your letter, have left the company, with some saying Google moves too slow. Can you mitigate that and make Google more like a startup again?
Anytime you're scaling up a company, you have to make sure you're working to cut down bureaucracy and staying as lean and nimble as possible. There are many, many areas where we move very fast. Our growth in Cloud wouldn't have happened if we didn't scale up fast. I look at what the YouTube Shorts team has done, I look at what the Pixel team has done, I look at how much the search team has evolved with AI. There are many, many areas where we move fast.
Yet we hear those complaints, including from people who loved the company but left.
Obviously, when you're running a big company, there are times you look around and say, in some areas, maybe you didn't move as fast, and you work hard to fix it. [Pichai raises his voice.] Do I recruit candidates who come and join us because they feel like they've been in some other large company, which is very, very bureaucratic, and they haven't been able to make change as fast? Absolutely. Are we attracting some of the best talent in the world every week? Yes. It's equally important to remember we have an open culture; people speak a lot about the company. Yes, we lost some people. But we're also retaining people better than we have in a long, long time. Did OpenAI lose some people from the original team that worked on GPT? The answer is yes. You know, I've actually felt the company move faster in pockets than even what I remember 10 years ago.
Go here to read the rest:
Sundar Pichai on Google's AI, Microsoft's AI, OpenAI, and Did We ... - WIRED
This is Why and How Google Will Kill its Business Model – Medium
The Rara Avis Reason Behind Their Intentions
If you can't beat it, join it.
That's how the saying goes, and that's precisely what our dear friend Google is doing regarding AI.
But, weirdly enough, Google is taking it to the extreme, purposely contributing to the demise of its ad-based revenue model, one of the most successful businesses in the history of capitalism and the cash cow behind its trillion-dollar business.
However, you shouldn't fear for Google's integrity; this has been the plan all along.
Anyone who follows the AI industry will probably agree on the fact that we're approaching the death of Internet search as we know it.
And Google knows it too.
Undeniably, using ChatGPT or Claude is much quicker and more convenient than doing link-based searches.
Thus, Google had two options:
As Google's execs aren't dumb, they've naturally gone for the second option.
In fact, there's no company in the world right now more heavily focused on disrupting search with AI than, ironically, Google.
See the original post here:
This is Why and How Google Will Kill its Business Model - Medium