Microsoft’s Reasoning Algorithm Could Make AI Smarter

Mimicking the way humans think could make computers smarter, new research suggests.

Microsoft researchers have proposed a novel AI training technique named "Algorithm of Thoughts" (AoT), aimed at enhancing the efficiency and human-like reasoning capabilities of large language models (LLMs) such as ChatGPT. It's one of several ways researchers are trying to give artificial intelligence (AI) more human-like reasoning.

"Machine learning algorithms are very strong at identifying correlation, but not necessarily causation," Alex Raymond, the head of AI at Doppl.ai, a generative AI company, told Lifewire in an email interview. "AI cannot explain its reasoning in the way that a human would. Humans have a more grounded and symbolic understanding of the world derived from both axiomatic wisdom and empirical learning."


Microsoft researchers assert in their paper that the new algorithmic method could be revolutionary because it "directs the language model towards a more efficient problem-solving trajectory." The technique employs "in-context learning," allowing the model to examine various solutions in a structured way.
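To make the idea concrete, here is a minimal sketch of what an in-context-learning prompt in the spirit of AoT could look like: the model sees one worked example that explores candidate steps and backtracks, then is asked to run the same structured search on a new problem in a single query. The example problem, the prompt wording, and the `llm` callable are illustrative assumptions, not the templates from Microsoft's paper.

```python
# Illustrative sketch only: an in-context-learning prompt in the spirit of AoT.
# The wording is an assumption, not the prompt format from the Microsoft paper;
# `llm` stands in for whatever model client is being used.

AOT_STYLE_PROMPT = """\
Example: use each of 4, 6, 8, 2 exactly once to reach 24.
Exploration:
  try 4 * 6 = 24 -> remaining 8, 2 cannot keep the total at 24 -> backtrack
  try 6 - 2 = 4 -> remaining 4, 8 -> 4 * 4 = 16 -> 16 + 8 = 24 -> success
Answer: (6 - 2) * 4 + 8 = 24

Now solve the same way, exploring and backtracking as needed:
use each of 3, 5, 7, 9 exactly once to reach 24.
Exploration:"""

def solve_with_aot_prompt(llm):
    """Single query: the structured search happens inside one model response."""
    return llm(AOT_STYLE_PROMPT)
```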

The Algorithm of Thoughts technique gives LLMs the ability to efficiently search through the reasoning steps of solving a problem, Raymond said. It can allow models to imitate the behavior of classic programming algorithms by backtracking to a previously computed step and resuming from there. For example, imagine you ask an LLM to provide a route between two points on a map, he suggested.

"A simple LLM query can have poor performance if the model hallucinates and tells you to go through a road it made up, or even if it starts to lose coherence after many steps," he said. "With AoT, the LLM can be trained to go through the problem-solving steps like a traditional pathfinding algorithm would, taking just the necessary backtracking steps to arrive at the destination efficiently. Think of it as a computer science student who is learning algorithms, writing out the steps by hand, and solving multiple examples."


Through chain-of-thought prompting, humans decompose a problem into a chain of simple questions to help LLMs perform intermediate reasoning, Hong Zhou, the director of the Intelligent Services Group & AI R&D at Wiley, said via email.

"Since each sub-problem has multiple possible directions to explore, the tree of thoughts provides decision tree modalities to help LLMs explore a problem comprehensively," he added. "However, tree of thoughts requires multiple queries, while AoT only requires a single query to generate the entire thought process."


Despite their power, LLMs like ChatGPT still have a long way to go, Raymond noted. He said that more developments like AoT could come in the form of explainable AI.

"When AIs can explain their reasoning like a human would, they will allow us to learn and grow with them," he added. As these models grow in capacity, we will get to a point where hallucinations and errors will no longer be obvious to us if they don't expose their reasoning."

New algorithmic approaches such as AoT can improve the quality and production of LLMs, Evan Macmillan, the CEO of Gridspace, predicted via email.

"LLM builders have already improved their models immensely with small amounts of human feedback," he added. "If LLMs can learn from more complex feedback and on-the-job work, we can expect even more impressive and efficient AI systems."

The new Microsoft approach follows suggestions that AI is developing the ability to reason like humans. In March, Microsoft's research team released a paper named "Sparks of Artificial General Intelligence: Early Experiments with GPT-4." The researchers assert in the document that GPT-4 displays indications of what is commonly termed "artificial general intelligence," or AGI.

Microsoft's AoT is "a step in the right direction," but human-like intelligence is still a long way away, Raghu Ravinutala, the CEO of the AI company yellow.ai, said in an email.

"Getting there will require significant progress in various areas of AI research to bridge the gap between current AI capabilities and human-level reasoning," he added. "A better way to describe the current state of human-like reasoning in LLMs is 'complex understanding."

Update 09/06/2023 - Corrected the attribution in paragraphs 3 & 5.
