Google DeepMind Researcher Discusses Emergence of Reasoning in AI at Harvard ML Foundations Talk | News – Harvard Crimson

Google researcher Denny Zhou discussed the emergence of reasoning in large language models at a Harvard Machine Learning Foundations talk Friday afternoon at the Science and Engineering Complex.

Zhou, the founder and lead of the reasoning team at Google DeepMind, outlined how he trains large language models to shrink the gap between machine and human intelligence.

The lecture, titled "Teach Language Models to Reason," is one of several seminars hosted by the Harvard Machine Learning Foundations Group, a community of faculty, graduate students, and postdoctoral fellows at the University who research machine learning.

Zhou discussed his approach to investigating reasoning in AI, work he began five years ago.

"The first thing I tried was to combine deep-learning models with, first of all, a lot of neurological machines," he said.

The approach comprises four elements: chain-of-thought, adding intermediate reasoning steps before a final answer; self-consistency, sampling multiple answers and selecting the most frequent one; least-to-most prompting, breaking a problem into subproblems and solving them in sequence; and instruction finetuning, tuning a model to follow instructions so it can handle new problems without task-specific training.
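Of the four techniques, self-consistency is the simplest to illustrate: sample several answers from the model and keep the majority vote. The sketch below uses a hypothetical `sample_answer` callable as a stand-in for a stochastic model call; it is an illustration of the general idea, not Zhou's implementation.

```python
from collections import Counter

def self_consistency(sample_answer, n_samples=5):
    # Draw several independent answers and return the most frequent one.
    answers = [sample_answer() for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

# Deterministic stand-in for repeated sampling from a model:
# three of five sampled answers agree on "12".
fake_samples = iter(["12", "12", "11", "12", "13"])
print(self_consistency(lambda: next(fake_samples), n_samples=5))  # → 12
```

The intuition is that an incorrect chain of thought can land on many different wrong answers, while correct chains tend to converge on the same one, so the mode of the samples is more reliable than any single sample.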

Though these ideas deliver "really amazing performance," Zhou said AI still has a long way to go in comparison to human thinking.

Zhou said he is unconvinced that AI's integration into modern society will live up to its expectations. Though some might say that super intelligence will emerge in five or 10 years, Zhou said, "I just want to see a self-driving car coming in 10 years, and I cannot imagine that in this moment."

In a post-talk interview, Zhou elaborated that the AI technology behind self-driving cars would be very difficult to scale up because the data needed to train driving models is specific to each city, so models would need data collected from many different cities.

Human intelligence, Zhou said, still surpasses AI capabilities.

"Humans are humans. If you know how to drive cars in one city, you have no problem to drive cars in other cities," he said. "That is very different from the kinds of techniques used to do self-driving cars."

Zhou shared his hopes for the development of LLMs with reasoning capabilities and their contributions to human society.

"I expect lots of AI models will greatly improve our experience of using different softwares," he said.

He cited ChatGPT's ability to write better text and larger models' capacity to help write code.

"Larger models will make our world more productive," Zhou said.

Staff writer Camilla J. Martinez can be reached at camilla.martinez@thecrimson.com.

Staff writer Tiffani A. Mezitis can be reached at tiffani.mezitis@thecrimson.com.
