AI Could Save the World, If It Doesn't Ruin the Environment First

As AI usage grows, its energy consumption and carbon emissions are becoming an environmental concern. Here's why, and how we can find solutions.

By Ben Dickson

When Mohammad Haft-Javaherian, a student at the Massachusetts Institute of Technology, attended MIT's Green AI Hackathon in January, it was out of curiosity to learn about the capabilities of a new supercomputer cluster being showcased at the event. But what he had planned as a one-hour exploration of a cool new server drew him into a three-day competition to create energy-efficient artificial-intelligence programs.

The experience was a revelation for Haft-Javaherian, who researches the use of AI in healthcare. "The clusters I use every day to build models with the goal of improving healthcare have carbon footprints," he says.

The processors used in the development of artificial intelligence algorithms consume a lot of electricity. And in the past few years, as AI usage has grown, its energy consumption and carbon emissions have become an environmental concern.

"I changed my plan and stayed for the whole hackathon to work on my project with a different objective: to improve my models in terms of energy consumption and efficiency," says Haft-Javaherian, who walked away with a $1,000 prize. He now considers carbon emissions an important factor when developing new AI systems.

But unlike Haft-Javaherian, many developers and researchers overlook or remain oblivious to the environmental costs of their AI projects. In the age of cloud-computing services, developers can rent online servers with dozens of CPUs and powerful graphics processing units (GPUs) in a matter of minutes and quickly develop powerful artificial-intelligence models. And as their computational needs rise, they can add more processors and GPUs with a few clicks (as long as they can foot the bill), not knowing that with every added processor, they're contributing to the pollution of our green planet.

The recent surge in AI's power consumption is largely caused by the rise in popularity of deep learning, a branch of artificial-intelligence algorithms that depends on processing vast amounts of data. "Modern machine-learning algorithms use deep neural networks, which are very large mathematical models with hundreds of millions, or even billions, of parameters," says Kate Saenko, associate professor in the Department of Computer Science at Boston University and director of the Computer Vision and Learning Group.

These many parameters enable neural networks to solve complicated problems such as classifying images, recognizing faces and voices, and generating coherent and convincing text. But before they can perform these tasks with optimal accuracy, neural networks need to undergo training, which involves tuning their parameters by performing complicated calculations on huge numbers of examples.

"To make matters worse, the network does not learn immediately after seeing the training examples once; it must be shown examples many times before its parameters become good enough to achieve optimal accuracy," Saenko says.
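To make that concrete, here is a minimal training-loop sketch in PyTorch (an assumed framework; the article names none). The tiny model, random stand-in data, and epoch count are purely illustrative, not taken from any system mentioned above.

```python
# Minimal PyTorch training loop illustrating the repeated passes ("epochs")
# Saenko describes. Every number here is an illustrative assumption.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Stand-in training set: 1,000 random examples, 100 features, 10 classes.
inputs = torch.randn(1000, 100)
labels = torch.randint(0, 10, (1000,))

for epoch in range(20):            # the network sees every example 20 times
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()                # compute gradients for every parameter
    optimizer.step()               # nudge all parameters toward lower loss
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```

Real systems repeat this over millions or billions of examples, often for days or weeks on racks of GPUs, which is where the electricity goes.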

All this computation requires a lot of electricity. According to a study by researchers at the University of Massachusetts, Amherst, the electricity consumed during the training of a transformer, a type of deep-learning algorithm, can emit more than 626,000 pounds of carbon dioxide, nearly five times the lifetime emissions of an average American car. Another study found that AlphaZero, Google's Go- and chess-playing AI system, generated 192,000 pounds of CO2 during training.
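For a sense of where such figures come from, the underlying arithmetic is roughly: hardware power draw, times training time, times the carbon intensity of the local grid. Here is a back-of-the-envelope sketch; every number is an assumption for illustration, not a figure from the studies cited above.

```python
# Rough CO2 estimate: energy drawn by the hardware times the carbon
# intensity of the grid. All values below are illustrative assumptions.
gpu_power_kw = 0.3          # assumed average draw of one GPU, in kilowatts
num_gpus = 8                # assumed size of the training rig
hours = 24 * 7              # assumed one week of continuous training
grid_kg_co2_per_kwh = 0.4   # assumed grid carbon intensity (varies by region)

energy_kwh = gpu_power_kw * num_gpus * hours
co2_kg = energy_kwh * grid_kg_co2_per_kwh
print(f"{energy_kwh:.0f} kWh -> {co2_kg:.0f} kg CO2 ({co2_kg * 2.2:.0f} lb)")
```

Scale the rig up to hundreds of GPUs and the run up to weeks, as large language-model training does, and the totals climb into the ranges the studies report.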

To be fair, not all AI systems are this costly. Transformers are used in a fraction of deep-learning models, mostly in advanced natural-language processing systems such as OpenAI's GPT-2 and BERT, which was recently integrated into Google's search engine. And few AI labs have the financial resources to develop and train expensive AI models such as AlphaZero.

Also, after a deep-learning model is trained, using it requires much less power. "For a trained network to make predictions, it needs to look at the input data only once, and it is only one example rather than a whole large database. So inference is much cheaper to do computationally," Saenko says.
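A crude way to see that gap is to time one training step (forward pass, backward pass, parameter update) against one inference pass on the same model. A sketch with a toy model follows; absolute times depend entirely on hardware, and the point is only the relative difference.

```python
# Time one training step versus one inference pass on the same toy model.
# The model and batch sizes are illustrative assumptions.
import time
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
batch = torch.randn(256, 512)
targets = torch.randint(0, 10, (256,))

start = time.perf_counter()
optimizer.zero_grad()
loss = loss_fn(model(batch), targets)    # forward pass
loss.backward()                          # backward pass
optimizer.step()                         # parameter update
train_ms = (time.perf_counter() - start) * 1e3

model.eval()
with torch.no_grad():                    # inference: forward pass only
    start = time.perf_counter()
    model(batch)
    infer_ms = (time.perf_counter() - start) * 1e3

print(f"training step: {train_ms:.2f} ms, inference pass: {infer_ms:.2f} ms")
```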

Many deep-learning models can be deployed on smaller devices after being trained on large servers. Such "edge AI" applications now run on mobile phones, drones, laptops, and Internet of Things (IoT) hardware. But even small deep-learning models consume a lot of energy compared with other software. And given the expansion of deep-learning applications, the cumulative cost of the compute resources allocated to training neural networks is developing into a problem.
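One common way to fit a trained model onto a small device is post-training quantization, which stores weights at lower precision. Below is a sketch using PyTorch's dynamic-quantization API, one technique among several; the untrained toy model stands in for a real trained network.

```python
# Shrink a model for edge deployment with post-training dynamic quantization,
# which stores Linear-layer weights as 8-bit integers instead of 32-bit floats.
# The toy model here is an illustrative stand-in for a real trained network.
import io
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """Serialized size of a model's weights, in megabytes."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes / 1e6

print(f"fp32 model: {size_mb(model):.2f} MB")
print(f"int8 model: {size_mb(quantized):.2f} MB")
```

Smaller weights mean less memory traffic per prediction, which is a large share of the energy an inference workload burns on a phone or IoT device.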

"We're only starting to appreciate how energy-intensive current AI techniques are. If you consider how rapidly AI is growing, you can see that we're heading in an unsustainable direction," says John Cohn, a research scientist with IBM who co-led the Green AI hackathon at MIT.

According to one estimate, by 2030 more than 6 percent of the world's energy may be consumed by data centers. "I don't think it will come to that, though I do think exercises like our hackathon show how creative developers can be when given feedback about the choices they're making. Their solutions will be far more efficient," Cohn says.
