The voracious appetite for energy of the world's computing and communications technologies presents a clear threat to the planet's climate. That was the direct assessment of presenters at the intensive two-day workshop on the climate implications of computing and communications held March 3-4, organized by the MIT Climate and Sustainability Consortium (MCSC), the MIT-IBM Watson AI Lab, and the MIT Schwarzman College of Computing.
The virtual event featured rich discussions and highlighted opportunities for collaboration among an interdisciplinary group of MIT faculty and researchers and industry leaders across multiple sectors, underscoring what academia and industry can accomplish together.
“If we continue on the current trajectory of compute energy, by 2040 we are supposed to hit the limit of global power-generation capacity. Compute energy demand has been growing at a much faster pace than global power-generation capacity,” said Bilge Yildiz, the Breene M. Kerr Professor in the MIT departments of Nuclear Science and Engineering and Materials Science and Engineering, one of the workshop’s 18 presenters. This computing energy projection draws from the Semiconductor Research Corporation’s decadal report.
To cite just one example: information and communications technologies already account for more than 2 percent of global energy demand, on par with the aviation industry’s fuel-related emissions.
“We are at the very beginning of this data-driven world. We really need to start thinking about this and act now,” said presenter Evgeni Gousev, senior director at Qualcomm.
Innovative energy efficiency options
To that end, the workshop presentations explored a host of energy-efficiency options, including specialized chip design, data center architecture, better algorithms, hardware modifications, and changes in consumer behavior. Industry leaders from AMD, Ericsson, Google, IBM, iRobot, NVIDIA, Qualcomm, Tertill, Texas Instruments, and Verizon outlined their companies’ energy-saving programs, while experts from across MIT provided insight into current research that could yield more efficient computing.
Panel topics ranged from “Custom Hardware for Efficient Computing” to “Hardware for New Architectures” to “Algorithms for Efficient Computing”, among others.
Visual representation of the conversation during the workshop session entitled “Energy Efficient Systems”.
Image: Haley McDevitt
The goal, Yildiz said, is to improve energy efficiency associated with computing by more than a million times.
“I think part of the answer to how we make computing much more sustainable has to do with specialized architectures that have very high levels of utilization,” said Darío Gil, IBM senior vice president and director of research, who stressed that solutions should be as “elegant” as possible.
For example, Gil illustrated an innovative chip design that uses vertical stacking to shorten the distance data must travel, and thus reduce energy consumption. Surprisingly, more effective use of tape, a traditional medium for archival data storage, combined with specialized hard disk drives (HDDs), can yield significant savings in carbon dioxide emissions.
Gil and presenters Bill Dally, chief scientist and senior vice president of research at NVIDIA; Ahmad Bahai, chief technology officer of Texas Instruments; and others zeroed in on storage. Gil compared data to a floating iceberg: the small visible tip is “hot data” that needs quick access, while the large underwater mass is “cold data” that can tolerate higher latency. Think digital photo storage, Gil said. “Honestly, do you really retrieve all those photographs on an ongoing basis?” Based on data access patterns, storage systems should offer an optimized mix of HDDs for hot data and tape for cold data.
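The hot/cold split Gil describes can be pictured as a simple tiering policy. The sketch below is purely illustrative (the 90-day threshold and function names are assumptions, not anything described at the workshop): objects accessed recently stay on HDD, while long-untouched objects migrate to tape.

```python
from datetime import datetime, timedelta

# Hypothetical threshold separating "hot" from "cold" data; real systems
# tune this from observed access patterns.
COLD_THRESHOLD = timedelta(days=90)

def choose_tier(last_access: datetime, now: datetime) -> str:
    """Return the storage tier for an object based on access recency:
    recently touched data goes to HDD, stale data to low-power tape."""
    return "tape" if now - last_access > COLD_THRESHOLD else "hdd"

now = datetime(2022, 3, 4)
recent_photo = choose_tier(datetime(2022, 3, 1), now)   # accessed this week
old_photo = choose_tier(datetime(2021, 1, 1), now)      # untouched for a year
```

Here `recent_photo` resolves to `"hdd"` and `old_photo` to `"tape"`: the rarely viewed bulk of a photo library can sit on tape, which draws essentially no power at rest.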
Bahai highlighted the significant power savings gained from segmenting standby and full processing. “We need to learn how to do nothing better,” he said. Dally spoke of mimicking the way our brain wakes up from a deep sleep: “We can wake [computers] up much faster, so we don’t need to keep them running at full speed.”
Several workshop presenters spoke of emphasizing “sparsity,” a property of matrices in which most elements are zero, as a way to improve the efficiency of neural networks. Or, as Dally put it, “Never put off till tomorrow, where you could put off forever,” explaining that efficiency is not “getting the most information with the fewest bits. It’s doing the most with the least energy.”
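The energy-saving mechanism behind sparsity is easy to see in a toy matrix-vector product: every zero weight is a multiply-accumulate the hardware never has to perform. The example below is a minimal sketch (the matrix values and operation counter are invented for illustration), not a description of any presenter's system.

```python
def spmv(matrix, x):
    """Multiply a matrix by vector x, skipping zero entries.
    Returns the result and the number of multiply-accumulates performed."""
    ops = 0
    y = [0.0] * len(matrix)
    for i, row in enumerate(matrix):
        for j, w in enumerate(row):
            if w != 0.0:          # zero weights cost no arithmetic or data movement
                y[i] += w * x[j]
                ops += 1
    return y, ops

# A 90%-sparse 4x5 weight matrix: only 2 nonzeros out of 20 entries.
W = [[0, 0, 2.0, 0, 0],
     [0, 0, 0,   0, 0],
     [0, 3.0, 0, 0, 0],
     [0, 0, 0,   0, 0]]
y, ops = spmv(W, [1.0, 1.0, 1.0, 1.0, 1.0])
```

A dense implementation would do 20 multiply-accumulates here; the sparse version does 2, and hardware that exploits sparsity saves energy in roughly that proportion.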
Holistic and multidisciplinary approaches
“We need both efficient algorithms and efficient hardware, and sometimes we need to co-design both the algorithm and the hardware for efficient computation,” said Song Han, panel moderator and assistant professor at the Department of Electrical Engineering and Computer Science (EECS) at MIT.
Some presenters were optimistic about the innovations already underway. According to Ericsson research, up to 15 percent of global carbon emissions can be reduced through the use of existing solutions, noted Mats Pellbäck Scharp, head of sustainability at Ericsson. For example, GPUs are more efficient than CPUs for AI, and the progression from 3G to 5G networks boosts energy savings.
“5G is the most power-efficient standard out there,” Scharp said. “We can build 5G without increasing energy consumption.”
Companies such as Google are optimizing the energy use of their data centers through improved design, technology, and renewable energy. “Five of our data centers around the world are operating near or above 90 percent carbon-free energy,” said Jeff Dean, Google senior fellow and senior vice president of Google Research.
Still, pointing to the possible slowing of the doubling of transistors on an integrated circuit (Moore’s Law), “we need new approaches to meet this computational demand,” said Sam Naffziger, AMD senior vice president, corporate fellow, and product technology architect. Naffziger spoke of addressing performance “overkill.” For instance, “we’re finding in the gaming and machine learning space we can make use of lower-precision math to deliver an image that looks just as good with 16-bit computations as with 32-bit computations, and instead of legacy 32-bit math to train AI networks, we can use lower-energy 8- or 16-bit computations.”
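The savings Naffziger cites come largely from moving fewer bytes. A rough sketch in NumPy (the array sizes are arbitrary, and bytes moved is only a proxy for energy) shows the storage side of the trade-off: halving the bit width halves the data that memory and interconnect must carry, while the values stay close enough for many graphics and AI workloads.

```python
import numpy as np

rng = np.random.default_rng(0)
x32 = rng.random((1024, 1024)).astype(np.float32)  # conventional 32-bit values
x16 = x32.astype(np.float16)                       # 16-bit: half the bytes per element
x8 = (x32 * 255).astype(np.uint8)                  # a crude 8-bit quantization

print(x32.nbytes, x16.nbytes, x8.nbytes)  # 16-bit halves, 8-bit quarters, the traffic
# For values in [0, 1), float16 rounding error stays below ~5e-4, which is
# often invisible in a rendered image or a trained network's accuracy.
max_err = float(np.max(np.abs(x32 - x16.astype(np.float32))))
```

Actual hardware gains are larger still, because narrower multipliers also cost less silicon and switching energy per operation, not just less data movement.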
Visual representation of the conversation during the workshop session titled “Wireless, Networked, and Distributed Systems”.
Image: Haley McDevitt
Other presenters identified edge computing as a big power consumer.
“We also need to change the devices that are in our customers’ hands,” said Heidi Hemmer, senior vice president of engineering at Verizon. When thinking about how we use energy, it is common to jump to data centers, but it really starts with the device itself and the energy the devices use. Then we can think about home web routers, distributed networks, the data centers, and the hubs. “The devices are actually the least energy-efficient out of that,” Hemmer concluded.
Some presenters held different points of view. Many called for developing dedicated silicon chipsets for greater efficiency. However, panel moderator Muriel Medard, the Cecil H. Green Professor in EECS, described research at MIT, Boston University, and Maynooth University on the GRAND (Guessing Random Additive Noise Decoding) chip, saying, “rather than having obsolescence of chips as the new codes come in and in different standards, you can use one chip for all codes.”
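GRAND's code-agnostic trick is to guess the noise rather than decode the code: try noise patterns from most to least likely, and stop at the first guess that turns the received word into a valid codeword. The toy below is a heavily simplified sketch of that idea (the tiny parity-check matrix and weight cutoff are invented for illustration, and bear no relation to the actual chip), using guesses ordered by Hamming weight as on a binary symmetric channel.

```python
from itertools import combinations

# Toy parity-check matrix over GF(2) for a 4-bit code; any membership test
# would do -- that is what makes the guessing loop code-agnostic.
H = [[1, 1, 0, 1],
     [0, 1, 1, 1]]

def is_codeword(word):
    """Check H @ word == 0 (mod 2)."""
    return all(sum(h * b for h, b in zip(row, word)) % 2 == 0 for row in H)

def grand_decode(received, max_weight=2):
    """Guess noise patterns in order of increasing Hamming weight
    (i.e., decreasing likelihood) and return the first valid codeword."""
    n = len(received)
    for w in range(max_weight + 1):          # weight 0 first: "no noise" is most likely
        for flips in combinations(range(n), w):
            guess = list(received)
            for i in flips:
                guess[i] ^= 1                # strip the guessed noise
            if is_codeword(guess):
                return guess
    return None                              # declare an erasure past the cutoff

# [1, 0, 1, 1] satisfies both checks; flip one bit and GRAND recovers it.
decoded = grand_decode([1, 1, 1, 1])
```

Because only `is_codeword` depends on the code, the same guessing loop serves any code or standard, which is the universality Medard describes.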
Whatever the chip or new algorithm, Helen Greiner, CEO of Tertill (a weeding robot) and co-founder of iRobot, emphasized that to get products to market, “We have to learn to stop wanting to have the best and latest, most advanced processor, which is usually the most expensive.” She added, “I like to say robot demos are a dime a dozen, but robot products are very infrequent.”
Greiner pointed out that consumers can play a role in promoting more energy-efficient products, just as drivers have begun to demand electric cars.
Dean also sees an environmental role for the end user.
“We’ve made it possible for our cloud customers to select the cloud region in which they want to run their computation, and they can decide how important it is that they have a low carbon footprint,” he said, also citing other interfaces that might let consumers decide which airline flights are more efficient or what the impact of installing a solar panel on their home would be.
However, Scharp said, “Extending the life of your smartphone or tablet is really the best climate action you can take if you want to reduce your digital carbon footprint.”
Facing growing demands
Despite their optimism, the presenters acknowledged that the world is facing a growing demand for computation from machine learning, AI, games and especially blockchain. Panel moderator Vivienne Sze, associate professor at EECS, noted the conundrum.
“We can do a great job in making computing and communications really efficient. But there is this tendency that once things become very efficient, people use more of them, and that might result in an overall increase in the use of these technologies, which will then increase our total carbon footprint,” Sze said.
Presenters saw great potential in university/industry partnerships, particularly in research efforts on the university side. “By combining these two forces together, you can really amplify the impact,” Gousev concluded.
Presenters at the Climate Implications of Computing and Communications workshop also included: Joel Emer, professor of the practice in EECS at MIT; David Perreault, the Joseph F. and Nancy P. Keithley Professor of EECS at MIT; Jesús del Alamo, the Donner Professor and professor of electrical engineering in EECS at MIT; Heike Riel, IBM Fellow and head of science and technology at IBM; and Takashi Ando, senior research staff member at IBM Research. Recorded workshop sessions are available on YouTube.