
AI on steroids: Much bigger neural nets to come with new hardware, say Bengio, Hinton, and LeCun – ZDNet

Geoffrey Hinton, center, talks about what future deep learning neural nets may look like, flanked by Yann LeCun of Facebook, right, and Yoshua Bengio of Montreal's MILA institute for AI, during a press conference at the 34th annual AAAI conference on artificial intelligence.

The rise of dedicated chips and systems for artificial intelligence will "make possible a lot of stuff that's not possible now," said Geoffrey Hinton, the University of Toronto professor who is one of the godfathers of the "deep learning" school of artificial intelligence, during a press conference on Monday.

Hinton joined his compatriots, Yann LeCun of Facebook and Yoshua Bengio of Canada's MILA institute, fellow deep learning pioneers, in an upstairs meeting room of the Hilton Hotel on the sidelines of the 34th annual conference on AI by the Association for the Advancement of Artificial Intelligence. They spoke for 45 minutes to a small group of reporters on a variety of topics, including AI ethics and what "common sense" might mean in AI. The night before, all three had presented their latest research directions.

Regarding hardware, Hinton went into an extended explanation of the technical aspects that constrain today's neural networks. The weights of a neural network, for example, have to be used hundreds of times, he pointed out, making frequent, temporary updates to the weights. He said the fact that graphics processing units (GPUs) have limited on-chip memory for weights, and have to constantly store and retrieve them from external DRAM, is a limiting factor.
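
A back-of-envelope calculation makes the mismatch concrete. The figures below are illustrative assumptions (a ten-billion-weight net, roughly what Hinton later calls "really big," against a few tens of megabytes of on-chip SRAM), not numbers from the press conference:

```python
# Rough arithmetic on why weights spill out of on-chip memory into DRAM.
# All sizes are illustrative assumptions, not measured figures.
params = 10_000_000_000        # ~10 billion weights, a "really big" net
bytes_per_weight = 4           # 32-bit floating point
weight_bytes = params * bytes_per_weight

on_chip_bytes = 40 * 2**20     # assume ~40 MB of on-chip SRAM on the accelerator

print(f"weights need : {weight_bytes / 2**30:,.0f} GiB")
print(f"on-chip holds: {on_chip_bytes / 2**20:,.0f} MiB")
print(f"shortfall    : {weight_bytes / on_chip_bytes:,.0f}x, so weights live in DRAM")
```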

Much larger on-chip memory capacity "will help with things like Transformer, for soft attention," said Hinton, referring to the wildly popular autoregressive neural network developed at Google in 2017. Transformers, which use "key/value" pairs to store and retrieve from memory, could be much larger with a chip that has substantial embedded memory, he said.
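
For readers unfamiliar with the mechanism, here is a minimal NumPy sketch of the scaled dot-product "soft attention" at the heart of the Transformer; the array sizes are arbitrary and this is an illustration, not Google's implementation. The key and value matrices are exactly the kind of state that benefits from staying in fast on-chip memory:

```python
import numpy as np

def soft_attention(queries, keys, values):
    """Scaled dot-product attention: each query is scored against every key,
    and the softmaxed scores select a weighted mixture of the values."""
    d_k = keys.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)        # (n_q, n_kv) similarities
    scores -= scores.max(axis=-1, keepdims=True)    # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ values                         # (n_q, d_v) mixtures

# Toy usage: 4 queries attending over 6 key/value pairs of width 8.
rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=s) for s in [(4, 8), (6, 8), (6, 8)])
print(soft_attention(q, k, v).shape)                # (4, 8)
```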

Also: Deep learning godfathers Bengio, Hinton, and LeCun say the field can fix its flaws

LeCun and Bengio agreed, with LeCun noting that GPUs "force us to do batching," where data samples are combined in groups as they pass through a neural network, "which isn't efficient." Another problem is that GPUs assume neural networks are built out of matrix products, which forces constraints on the kind of transformations scientists can build into such networks.
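
LeCun's batching point is easy to see in miniature: a GPU's matrix units only stay busy when many samples are stacked into a single matrix product. A NumPy sketch with made-up layer sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 128))              # weights of one linear layer
samples = [rng.normal(size=128) for _ in range(32)]

# Unbatched: 32 separate matrix-vector products, each too small to
# saturate wide matrix hardware.
outs_single = np.stack([W @ x for x in samples])

# Batched: stack the samples so one matrix-matrix product covers all 32.
batch = np.stack(samples)                    # shape (32, 128)
outs_batched = batch @ W.T                   # shape (32, 256)

assert np.allclose(outs_single, outs_batched)
```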

"Also sparse computation, which isn't convenient to run on GPUs ...," said Bengio, referring to instances where most of the data, such as pixel values, may be empty, with only a few significant bits to work on.

LeCun predicted the new hardware would lead to "much bigger neural nets with sparse activations," and he and Bengio both emphasized there is an interest in doing the same amount of work with less energy. LeCun defended AI against claims it is an energy hog, however. "This idea that AI is eating the atmosphere, it's just wrong," he said. "I mean, just compare it to something like raising cows," he continued. "The energy consumed by Facebook annually for each Facebook user is 1,500 watt-hours," he said. Not a lot, in his view, compared to other energy-hogging technologies.

The biggest problem with hardware, mused LeCun, is that on the training side of things, it is a duopoly between Nvidia, for GPUs, and Google's Tensor Processing Unit (TPU), repeating a point he had made last year at the International Solid-State Circuits Conference.

Even more interesting than hardware for training, LeCun said, is hardware design for inference. "You now want to run on an augmented reality device, say, and you need a chip that consumes milliwatts of power and runs for an entire day on a battery." LeCun reiterated a statement made a year ago that Facebook is working on various internal hardware projects for AI, including for inference, but he declined to go into details.

Also: Facebook's Yann LeCun says 'internal activity' proceeds on AI chips

Today's neural networks are tiny, Hinton noted, with really big ones having perhaps just ten billion parameters. Progress on hardware might advance AI just by making much bigger nets with an order of magnitude more weights. "There are one trillion synapses in a cubic centimeter of the brain," he noted. "If there is such a thing as General AI, it would probably require one trillion synapses."

As for what "common sense" might look like in a machine, nobody really knows, Bengio maintained. Hinton complained that people keep moving the goalposts, such as with natural language models. "We finally did it, and then they said it's not really understanding, and can you figure out the pronoun references in the Winograd Schema Challenge," a question-answering task used as a benchmark of language understanding. "Now we are doing pretty well at that, and they want to find something else" to judge machine learning, he said. "It's like trying to argue with a religious person, there's no way you can win."

But, one reporter asked, what's concerning to the public is not so much the lack of evidence of human understanding, but evidence that machines are operating in alien ways, such as "adversarial examples." Hinton replied that adversarial examples show the behavior of classifiers is not quite right yet. "Although we are able to classify things correctly, the networks are doing it absolutely for the wrong reasons," he said. "Adversarial examples show us that machines are doing things in ways that are different from us."
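
Hinton's "wrong reasons" point is easiest to see with the classic fast gradient sign method of Goodfellow et al., shown here on a toy linear classifier (a hedged NumPy sketch, not the systems discussed at the conference): a barely visible per-dimension nudge, aligned against the gradient, accumulates across thousands of dimensions into a large change in the score.

```python
import numpy as np

# Toy linear classifier: predict class 1 when the score w . x is positive.
rng = np.random.default_rng(0)
w = rng.normal(size=1000)     # classifier weights
x = rng.normal(size=1000)     # an input

# For a linear model, the gradient of the score with respect to x is just w.
# Fast gradient sign method: move every dimension a tiny epsilon against the
# current prediction; the tiny steps add up across the 1,000 dimensions.
epsilon = 0.1
step = -np.sign(w) if w @ x > 0 else np.sign(w)
x_adv = x + epsilon * step

print(f"original score:    {w @ x: .1f}")
print(f"adversarial score: {w @ x_adv: .1f}")   # shifted by epsilon * sum(|w|), ~80
```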

LeCun pointed out animals can also be fooled just like machines. "You can design a test so it would be right for a human, but it wouldn't work for this other creature," he mused. Hinton concurred, observing "house cats have this same limitation."

Also: LeCun, Hinton, Bengio: AI conspirators awarded prestigious Turing prize

"You have a cat lying on a staircase, and if you bounce a soccer ball down the stairs toward a care, the cat will just sort of watch the ball bounce until it hits the cat in the face."

Another thing that could prove a giant advance for AI, all three agreed, is robotics. "We are at the beginning of a revolution," said Hinton. "It's going to be a big deal" for many applications such as vision. Rather than analyzing the entire contents of a static image or video frame, a robot creates a new "model of perception," he said.

"You're going to look somewhere, and then look somewhere else, so it now becomes a sequential process that involves acts of attention," he explained.

Hinton suggested last year's work by OpenAI in manipulating a Rubik's cube was a watershed moment for robotics, or, rather, an "AlphaGo moment," as he put it, referring to DeepMind's Go-playing program.

LeCun concurred, saying that Facebook is running robotics projects not because Facebook has an extreme interest in robotics, per se, but because robotics is seen as an "important substrate for advances in AI research."

It wasn't all gee-whiz; the three scientists offered skepticism on some points. While most research in deep learning that matters is done out in the open, some companies boast of AI while keeping the details a secret.

"It's hidden because it's making it seem important," said Bengio, when in fact, a lot of work in the depths of companies may not be groundbreaking. "Sometimes companies make it look a lot more sophisticated than it is."

Bengio continued his role as the most outspoken of the three on societal issues of AI, such as building ethical systems.

When LeCun was asked about the use of facial recognition algorithms, he noted technology can be used for good and bad purposes, and that a lot depends on the democratic institutions of society. But Bengio pushed back slightly, saying, "What Yann is saying is clearly true, but prominent scientists have a responsibility to speak out." LeCun mused that it's not the job of science to "decide for society," prompting Bengio to respond, "I'm not saying decide, I'm saying we should weigh in because governments in some countries are open to that involvement."

Hinton, who frequently punctuates things with a humorous aside, noted toward the end of the gathering his biggest mistake with respect to Nvidia. "I made a big mistake back in 2009 with Nvidia," he said. "I told an audience of 1,000 grad students they should go and buy Nvidia GPUs to speed up their neural nets. I called Nvidia and said I just recommended your GPUs to 1,000 researchers, can you give me a free one, and they said no.

"What I should have done, if I was really smart, was take all my savings and put it into Nvidia stock. The stock was at $20 then, now it's, like, $250."

Read more here:
AI on steroids: Much bigger neural nets to come with new hardware, say Bengio, Hinton, and LeCun - ZDNet

So Is an AI Winter Really Coming This Time? – Walter Bradley Center for Natural and Artificial Intelligence

AI has fallen from glorious summers into dismal winters before. The temptation to predict another such tumble recurs naturally. So that is the question the BBC posed to AI researchers: Are we on the cusp of an AI winter?

The '10s were arguably the hottest AI summer on record with tech giants repeatedly touting AI's abilities.

AI pioneer Yoshua Bengio, sometimes called one of the godfathers of AI, told the BBC that AI's abilities were somewhat overhyped in the '10s by certain companies with an interest in doing so.

There are signs, however, that the hype might be about to start cooling off.

I keep up with this kind of thing. The answer is: Yes, and no. AI did surge past milestones during the 2010s that it had not been expected to cross for many more years:

2011 IBM's Watson wins at Jeopardy!: IBM Watson: The inside story of how the Jeopardy-winning supercomputer was born, and what it wants to do next (Tech Republic, September 9, 2013)

2012 Google unveils a deep learning system that recognized images of cats

2015 Image recognition systems outperformed humans in the ImageNet challenge

2016 AlphaGo defeats world Go champion Lee Sedol: In Two Moves, AlphaGo and Lee Sedol Redefined the Future (Wired, March 16, 2016)

2018 Self-driving cars hit the road as Google's Waymo launched (a very limited) self-driving taxi service in Phoenix, Arizona

But other headlines during the period have been less heeded:

Despite High Hopes, Self-Driving Cars Are Way in the Future (2019)

The Next Hot Job: Pretending to Be a Robot (2019)

Boeing's Sidelined Fuselage Robots: What Went Wrong? (2019)

Self-driving cars: Hype-filled decade ends on sobering note (2019)

Tesla driver killed in crash with Autopilot active, NHTSA investigating (2016)

Don't fall for these 3 myths about AI, machine learning (2018)

A Sobering Message About the Future at AI's Biggest Party (2019)

And so on.

So which is it? AI Winter or Robot Overlords? I suggest neither. And so do active researchers.

Gary Marcus, an AI researcher at New York University, said: "By the end of the decade there was a growing realisation that current techniques can only carry us so far."

He thinks the industry needs some real innovation to go further.

"There is a general feeling of plateau," said Verena Rieser, a professor in conversational AI at Edinburgh's Heriot-Watt University.

One AI researcher who wishes to remain anonymous said we're entering a period where we are especially sceptical about AGI.

Recent AI developments, notably those lumped under the rubric of Deep Learning, have advanced the state of the art in machine learning. Let's not forget that prior efforts, such as the poorly named Expert Systems, had faded because, well, they weren't expert at all. Deep Learning systems, as highly flexible pattern matchers, will endure.

What is not coming is the long-predicted AI Overlord, or anything that is even close to surpassing human intelligence. Like any other tool we build, AI has its place when it amplifies and augments our abilities.

Just as tractors and diggers have not led to legions of people who no longer use their arms, the latest advances in AI will not lead to human serfs cowering beneath an all-intelligent machine. If anything, AI will require more from us, not less, because how we choose to use these tools will make an increasingly stark difference between benefit and ruin.

As Samin Winiger, a former AI researcher at Google, says, "What we called AI or machine learning during the past 10-20 years will be seen as just yet another form of computation."

Machines are tools in the toolbox, not a replacement for minds. An AI winter would only be coming if we forgot that.

Here are some of Brendan Dixon's earlier musings on the concept of an AI Winter:

Just a light frost? Or an AI winter? It's nice to be right once in a while: check out the evidence for yourself

and

AI Winter Is Coming: Roughly every decade since the late 1960s has experienced a promising wave of AI that later crashed on real-world problems, leading to collapses in research funding.

Follow this link:
So Is an AI Winter Really Coming This Time? - Walter Bradley Center for Natural and Artificial Intelligence