In Adelaide they’re trying to build a deep learning machine that can reason – Cosmos

Large Language Models burst onto the scene a little over a year ago and transformed everything, yet the field is already facing a fork in the road: more of the same, or a venture into what is being called deep learning?

Professor Simon Lucey, the Director of the Adelaide-based Australian Institute for Machine Learning, believes that path will lead to augmented reasoning.

It's a new and emerging field of AI that combines the ability of computers to recognise patterns through traditional machine learning with the ability to reason and learn from prior information and human interaction.

Machines are great at sorting. Machines are great at deciding. They're just bad at putting the two together.

Part of the problem lies in teaching a machine something we don't fully understand ourselves: Intelligence.

What is it?

Is it a vast library of knowledge?

Is it extracting clues and patterns from the clutter?

Is it common sense, or cold, hard rationality?


The Australian Institute for Machine Learning's Professor Simon Lucey says it's all these things and much more. And that's why artificial intelligence (AI) desperately needs the ability to reason out what best applies where, when, why and how.

"Some people regard modern machine learning as glorified lookup tables, right? It's essentially a process of 'if I've got this, then that'."
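The "glorified lookup table" jab can be made concrete with a toy sketch (illustrative only, not anything AIML has built): a literal if-I've-got-this-then-that table answers perfectly on inputs it has memorised and fails on anything unseen, while a model that has captured the underlying rule generalises.

```python
# A literal "if I've got this, then that" lookup table: it can only
# answer questions it has seen verbatim.
memorised = {
    (2, 3): 5,
    (4, 1): 5,
    (7, 7): 14,
}

def lookup_sum(a, b):
    """Return the memorised answer, or admit defeat on anything unseen."""
    return memorised.get((a, b))

print(lookup_sum(2, 3))   # seen before -> 5
print(lookup_sum(10, 2))  # never seen -> None

# Something that has captured the underlying rule generalises instead:
def reasoned_sum(a, b):
    return a + b

print(reasoned_sum(10, 2))  # -> 12
```

The gap between the two functions is exactly the gap the article is pointing at: memorisation covers the cases you stored, reasoning covers the cases you didn't.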

The amazing thing, Lucey adds, is that raw processing power and big-data deep learning have managed to scale up to the level needed to mimic some types of intelligent behaviour.

"It's proven this can actually work for a lot of problems, and work really well."

But not all problems.

"We're seeing the emergence of a huge amount of low-risk AI and computer vision," Lucey says. "But high-risk AI (say, looking for rare cancers, driving on a city street, flying a combat drone) isn't yet up to scratch."

Existing big data and big computing techniques rely on finding the closest possible related example. But gaps in those examples represent a trap.

"There's all these scenarios where we are coming up against issues where rote memorisation doesn't equate to reasoning," Lucey explains.

The human brain has been called an average machine. Or an expectation generator.

That's why we make so many mistakes while generally muddling our way through life.

But it's a byproduct of the way the networks of neurons in our brains configure themselves in paths based on experience and learning.

This produces mental shortcuts. Expectation biases. And these help balance effectiveness with efficiency in our brains.

"Intelligence isn't only about getting the right answer," says Lucey. "It's getting the right answer in a timely fashion."

For example, humans are genetically programmed to respond reflexively to the sight of a lion, bear or spider.


"You aren't going to think and reason," he explains. "You're going to react. You're going to get the hell out of there!"

But evolution can lead to these mental shortcuts working too well.

We can find ourselves jumping at shadows.

"Which is fine, right?" says Lucey. "Because if I make a mistake, it's okay, I just end up feeling a bit silly. But if I'm right, I'll stay alive! Act quick, think slow."

Machine intelligence is very good at doing quick things like detecting a face.

But it's that broader reasoning task, realising if you were right or wrong, where there's still a lot of work that needs to be done.

"Biological entities like humans don't need nearly as much data as AI to learn from," says Lucey. "They are much more data-efficient learners."

This is why a new approach is needed for machine learning.

"People decades ago realised that some tasks can be programmed into machines step by step, like when humans bake a cake," says Lucey. "But there are other tasks that require experience. If I'm going to teach my son how to catch and throw a ball, I'm not going to hand him an instruction book!"

Machines, however, can memorise enormous instruction books. And they can also bundle many sets of experiences into an algorithm. Machine learning enables computers to program themselves by example instead of relying on direct coding by humans.
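"Programming by example" can be sketched in a few lines (a toy of my own devising, with assumed fruit-weight data, not AIML's method): rather than hand-coding a rule like "ripe if heavier than 150 g", we recover the decision boundary from labelled examples alone.

```python
# Labelled examples: (weight in grams, ripe? 1/0). The rule is never
# written down by a human; it is extracted from the data.
examples = [(120, 0), (135, 0), (160, 1), (180, 1), (200, 1)]

def learn_threshold(data):
    """Learn a cut-off: midpoint between the heaviest unripe example
    and the lightest ripe one."""
    heaviest_no = max(w for w, label in data if label == 0)
    lightest_yes = min(w for w, label in data if label == 1)
    return (heaviest_no + lightest_yes) / 2

threshold = learn_threshold(examples)  # 147.5 for this data

def predict(weight):
    """Classify a new, unseen fruit using the learned rule."""
    return 1 if weight > threshold else 0
```

The point is the workflow, not the algorithm: the programmer supplies experiences, and the rule falls out of them, which is the trade the article describes. The flip side, as Lucey notes next, is that the learned rule is only as good as the examples it came from.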


But it's an outcome still limited by rigid programmed thinking.

"These classical if-this-then-that rule sets can be very brittle," says Lucey. "So how do I produce the rules behind an experience? How can I train AI to cope with the unexpected?"

This needs context.

For example, research has shown babies figure out the concept of object permanence, that something still exists when it moves out of sight, between four and seven months of age.

And that helps the baby to move on to extrapolate cause and effect.

"With machines, every time the ball moves or bounces in a way not covered by its set of rules, it breaks down," says Lucey. "But my kid can adapt and learn."

It's a problem facing autonomous cars.

Can we push every possible experience of driving through a city into an algorithm to teach it what to expect? Or can it instead learn relevant rules of behaviour, and rationalise which applies when?

Albert Einstein said: "True education is about teaching how to think, not what to think."

Lucey equates this with the need for reasoning.

"What I'm talking about when it comes to reasoning, I guess, is that we all have these knee-jerk reactions over what should or should not happen. And this feeds up to a higher level of the brain for a decision."

"We don't know how to do that for machines at the moment."


It's about turning experience into knowledge. And being aware of that knowledge.

"The problem with current machine learning is it's only as good as the experiences it's been exposed to," he says. "And we have to keep shoving more and more experiences at it for it to identify something new."

An autonomous car is very good at its various sub-tasks. It can instantly categorise objects in video feeds. It can calculate distances and trajectories from sensors like LiDAR. And it can match these extremely quickly with its bible of programmed experiences.

"It's working out how to connect these different senses to produce a generalisation beyond the moment that AI still struggles with," Lucey explains.

The AIML is exploring potential solutions through simulating neural networks, the interconnected patterns of cells found in our brains.

In the world of AI, that's called Deep Learning.

Neural networks don't follow a set of rigid "if this, then that" instructions.

Instead, the process balances the weight of what it perceives to guide it through what is essentially a wiring diagram. Experience wears trails into this diagram. But it also adds potential alternative paths.

"These pieces are all connected but have their own implicit bias," says Lucey. "They give the machine a suite of solutions, and the ability to prefer one solution over another."
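That "wiring diagram" of weighted paths can be shown in miniature (hand-picked weights, purely illustrative, far smaller than any real network): the output is not produced by explicit rules but by balancing weighted evidence flowing along every connection at once.

```python
import math

# Toy two-layer neural network. Each number is a connection weight:
# the "trail worn into the wiring diagram" by training.
W1 = [[0.8, -0.4],   # input -> first hidden unit
      [0.3, 0.9]]    # input -> second hidden unit
W2 = [1.2, -0.7]     # hidden units -> output

def sigmoid(z):
    """Squash a weighted sum into a preference between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    """Pass a perception through the wiring: no if/then rules,
    just weighted sums balanced against each other."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)))

score = forward([1.0, 0.5])  # a graded preference, not a hard rule
```

Changing any one weight shifts the output for every input, which is why the "pieces" Lucey describes carry an implicit bias rather than a discrete rule that can be pointed to.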

It's still early days. We've still got a lot to learn about deep learning.

"Neural network algorithms are great for quick reflex actions, like recognising a face," he adds. "But it's the broader reasoning task, like does that reflex fit the context of everything else going on around it, where there's still a lot of work that needs to be done."

The AIML has a Centre for Augmented Reasoning.


"I think the big opportunities in AI over the next couple of decades are around creating data-efficient learning for a system that can reason," Lucey explains.

And the various AIML research teams are already chalking up wins.

"We've successfully applied that approach to the autonomous car industry. We've also had a lot of success in other areas, such as recognising the geometry, shape and properties of new objects."

That is helping give machines a sense of object permanence. And that, in turn, is leading to solutions like AI-generated motion video that looks real.

The motive behind it all is to give AI the ability to extrapolate cause and effect.

"The reasoning we're trying to explore is the ability for a machine to go beyond what it's been trained upon," says Lucey. "That's something very special to humans that machines still struggle with."
