The Center for Responsible Machine Learning (CRML) and the Institute for Energy Efficiency (IEE) at UC Santa Barbara are advancing the state of the art in artificial intelligence, machine learning, natural language processing, and computer vision while addressing the important societal impacts of AI, in particular by driving improved energy efficiency in both AI training and inference.
The first step toward improving energy efficiency in AI is to benchmark current state-of-the-art AI models. To tackle this challenge, Prof. William Wang of CRML and his students proposed the HULK platform for energy-efficiency benchmarking in natural language processing. A typical NLP pipeline includes pretraining, fine-tuning, and inference phases, and the platform compares end-to-end training and inference efficiency in terms of both time and cost. Unlike previous leaderboards, and in order to compare the general training and inference efficiency of pretrained models across tasks, the platform reports a combined score over three classic NLP tasks: natural language inference, sentiment analysis, and named entity recognition. HULK offers a practical reference for efficient model and hardware selection, especially for enterprises training their own AI models on private data.
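As a rough illustration of the kind of time-and-cost measurement involved (a minimal sketch, not HULK's actual code; the stand-in predictor, workload, and instance price are assumptions), a model's prediction function can be timed and the elapsed wall-clock time converted into an estimated cloud cost:

```python
# Illustrative benchmarking sketch only; HULK's own implementation is not shown.
# Times a prediction function over a workload and converts wall-clock time into
# an estimated cloud cost. The price and the stand-in "model" are assumptions.
import time

HOURLY_PRICE_USD = 3.0  # assumed hourly price of a hypothetical GPU instance


def benchmark_inference(predict_fn, examples, price_per_hour=HOURLY_PRICE_USD):
    """Return elapsed time, per-example latency, and estimated cost for predict_fn."""
    start = time.perf_counter()
    for example in examples:
        predict_fn(example)
    elapsed_s = time.perf_counter() - start
    return {
        "seconds": elapsed_s,
        "seconds_per_example": elapsed_s / max(len(examples), 1),
        "estimated_cost_usd": elapsed_s / 3600.0 * price_per_hour,
    }


if __name__ == "__main__":
    # Stand-in predictor; in practice this would be a pretrained NLP model's
    # prediction function for a task such as sentiment analysis.
    fake_sentiment = lambda text: "positive" if len(text) % 2 == 0 else "negative"
    sentences = ["the movie was great"] * 1000
    print(benchmark_inference(fake_sentiment, sentences))
```

The same timing harness can wrap a fine-tuning loop instead of an inference call, which is how end-to-end training cost comparisons of the kind described above are typically framed.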
AI is becoming widely available on all manner of devices, and AI applications on those devices are demanding ever more computing resources and data storage. Even slight improvements in AI efficiency can therefore have a huge impact as these solutions scale worldwide. Green AI research at IEE and CRML will contribute to far more energy-efficient and practical AI solutions on millions of devices worldwide.
Grand Challenge
1000X more efficient AI/machine learning training and processing.
Training AI/machine learning models, which involves processing vast amounts of data, is energy intensive. One recent estimate in the literature suggested that training a single large AI/ML model creates a carbon dioxide (CO2) footprint of 626,000 pounds, roughly five times the lifetime emissions of the average American car. The same paper asserted that using a state-of-the-art language model for natural language processing produces CO2 emissions comparable to those of one person over 30 years. Both findings provide a jarring quantification of AI's environmental impact.
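Estimates of this kind are generally built from a run's electricity use and the carbon intensity of the power that supplied it. The sketch below shows that general form only; the constants and workload are illustrative assumptions, not the inputs behind the figures cited above.

```python
# Sketch of the general form such footprint estimates take: average power draw
# x training time x datacenter overhead (PUE) x grid carbon intensity.
# All numbers here are illustrative assumptions, not the inputs behind the
# 626,000-pound figure cited above.


def training_co2_lbs(avg_power_kw, hours, pue=1.58, lbs_co2_per_kwh=0.954):
    """Estimate the CO2 footprint (in pounds) of one training run."""
    energy_kwh = avg_power_kw * hours * pue  # wall-plug energy incl. overhead
    return energy_kwh * lbs_co2_per_kwh


if __name__ == "__main__":
    # Example: eight accelerators drawing about 0.3 kW each for two weeks.
    print(round(training_co2_lbs(avg_power_kw=8 * 0.3, hours=14 * 24), 1), "lbs CO2")
```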
IEE Machine Learning researchers have demonstrated significant early results in improving the energy efficiency of AI training through a combined algorithmic and systems approach.
IEE’s researchers are also designing and fabricating brain-inspired hardware to solve some of the hardest AI and optimization problems. This effort employs mixed-signal neuromorphic circuits with integrated metal-oxide memristors. Such circuits enable very dense, fast, and energy-efficient implementation of the most common operations in bio-inspired optimization algorithms. Preliminary results from this group indicate that the proposed hardware implementation is estimated to be 70 times faster and 20,000 times more energy efficient than the most efficient conventional approach.
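To make the core operation concrete, the sketch below numerically mimics how a memristor crossbar performs a vector-matrix multiply in a single analog step, with weights stored as device conductances. The array size, voltage range, and conductance range are assumptions for illustration, not the group's actual design parameters.

```python
# Numerical sketch (assumed, not the group's circuit design): a memristor
# crossbar stores a weight matrix as device conductances G, and applying input
# voltages V to the rows yields column currents I = G^T V in a single analog
# step (Ohm's law plus Kirchhoff's current law). This vector-matrix product is
# the kind of operation such circuits implement densely and efficiently.
import numpy as np

rng = np.random.default_rng(0)

# Map logical weights in [-1, 1] onto an assumed programmable conductance range.
G_MIN, G_MAX = 1e-6, 1e-4  # siemens
weights = rng.uniform(-1.0, 1.0, size=(4, 3))
conductances = G_MIN + (weights + 1.0) / 2.0 * (G_MAX - G_MIN)

input_voltages = rng.uniform(0.0, 0.2, size=4)  # volts applied to the rows

# One "analog" step: each column current is the conductance-weighted sum of
# the row voltages, i.e. a full vector-matrix multiply in constant time.
output_currents = conductances.T @ input_voltages  # amperes, one per column
print(output_currents)
```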
Research Highlights
Green AI and the UCSB Center for Responsible Machine Learning (CRML), Director William Wang - The Center for Responsible Machine Learning ties cutting-edge research in AI to its important societal impacts. In their IEE-related research, Professor Wang and colleagues in CRML are applying AI and machine learning to help solve societal energy-efficiency problems, and are using an algorithmic and systems approach to make AI/ML training and inference dramatically more energy efficient.