Artificial intelligence (AI) models with large numbers of parameters can be highly accurate, but they also consume enormous amounts of energy. Fortunately, an approach called analogue in-memory computing, or analogue AI, can be far more energy efficient by performing calculations directly on memory tiles. Until now, however, analogue AI had not shown it can match the accuracy of conventional software on models that need many tiles and fast communication between them.
Now, in a paper published in Nature, IBM has showcased a notable achievement: analogue AI chips designed to tackle natural-language AI tasks while delivering a roughly 14-fold improvement in energy efficiency.
Analogue AI Chips for Enhanced Energy Efficiency
Artificial Intelligence, or AI, has already started reshaping how we live and work, heralding a transformative era. However, a significant challenge remains: AI technology’s insatiable appetite for energy. Astonishingly, large AI models can produce more emissions over their lifetimes than an average American car. With climate change concerns looming large, strides in AI energy efficiency are indispensable to counterbalance the expanding carbon footprint of Artificial Intelligence.

In a paper published in Nature, researchers from IBM Research described prototype analogue AI chips engineered for energy-efficient speech recognition and transcription. These analogue chips not only matched the accuracy of their all-digital counterparts but also proved markedly faster and more energy efficient.
While the notion of analogue chips for AI inference has been explored for some time, IBM’s recent accomplishment shows how these chips can be harnessed for the large AI models prevalent in today’s landscape. Notably, each chip can hold 36 million phase-change memory devices, enough to encode models with up to 17 million parameters.
What Is Analogue AI?
Analogue AI is a promising and innovative solution capable of significantly cutting down the time and energy spent on AI calculations. It is a computing approach that uses analogue circuits instead of digital ones to perform computations.

Analogue circuits encode information using continuous values such as voltage or current, rather than discrete binary digits (0s and 1s). This allows computation to be carried out directly within memory devices, eliminating the need to shuttle data between memory and the processing unit, which is a major contributor to the energy consumption of conventional digital computers.
This technology is perfectly suited for executing computations within neural networks, which form the cornerstone of numerous AI applications. These networks consist of layers of artificial neurons that handle inputs and generate outputs. Each neuron carries a weight that determines its impact on the output. In the realm of digital computers, these weights are stored in memory and must be transmitted to the processing unit for each computation.
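As a rough illustration (not IBM’s actual hardware design), the sketch below simulates how a single analogue memory column could compute a neuron’s weighted sum: weights become conductances, inputs become voltages, and the currents flowing through each device add up on a shared wire (Ohm’s law plus Kirchhoff’s current law). All values are made up for the example.

```python
# Illustrative sketch only: simulating an analogue in-memory weighted sum.
# Weights are encoded as conductances, inputs as voltages; the currents through
# each memory device add up naturally on a shared output wire.

weights = [0.8, -0.3, 0.5]        # neural-network weights (hypothetical values)
inputs  = [0.2,  0.9, 0.4]        # neuron inputs (hypothetical values)

conductances = weights            # G_i, mapped 1:1 for simplicity
voltages     = inputs             # V_i

# Ohm's law per device (I = G * V); Kirchhoff's current law on the wire (currents add).
currents = [g * v for g, v in zip(conductances, voltages)]
output_current = sum(currents)    # this single analogue quantity *is* the weighted sum

print(output_current)             # 0.8*0.2 + (-0.3)*0.9 + 0.5*0.4 ≈ 0.09
```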
Efficiency Through Innovative MAC Operations
IBM adopted a strategic approach, optimising the Multiply-Accumulate (MAC) operations that underlie deep learning computations. By harnessing resistive Non-Volatile Memory (NVM) devices, the team performed MAC operations within memory itself, sidestepping the need to transfer data between memory and compute units. This innovation not only minimises energy waste but also accelerates computation through parallel processing, further magnifying the energy efficiency gains.
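To see what is being saved, here is a minimal, purely illustrative comparison of the same matrix-vector multiply performed as a loop of digital MAC operations versus treated as a single in-place operation on a simulated weight tile. The sizes, values, and variable names are assumptions for the example, not IBM’s implementation.

```python
import numpy as np

# Hypothetical weight "tile" and input vector (values are illustrative only).
W = np.array([[0.1, -0.2, 0.4],
              [0.3,  0.5, -0.1]])   # weights stored on the tile
x = np.array([1.0, 0.5, 2.0])       # input activations

# Digital-style computation: every weight is fetched and one MAC runs per element.
y_digital = np.zeros(W.shape[0])
mac_count = 0
for i in range(W.shape[0]):
    acc = 0.0
    for j in range(W.shape[1]):
        acc += W[i, j] * x[j]        # one multiply-accumulate, one weight fetch
        mac_count += 1
    y_digital[i] = acc

# Analogue in-memory idea: the whole matrix-vector product happens "in place",
# with all rows computed in parallel and no weight movement (modelled here as one call).
y_analogue = W @ x

print(mac_count)                            # 6 sequential MACs in the digital loop
print(np.allclose(y_digital, y_analogue))   # True: same result, different execution model
```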

IBM researchers subjected their designs to rigorous testing through two notable experiments. The first experiment focused on keyword utterance detection, akin to how smart speakers recognise activation phrases. The analogue chip matched the accuracy of conventional software-based systems while completing the task significantly faster, thanks to its on-chip non-volatile memory.
The second experiment envisioned a future where generative AI systems leveraging analogue chips could replace digital ones. The team stitched five chips together to process a complex speech-to-text model, achieving transcriptions comparable to digital hardware setups while showcasing significant scalability.
IBM and Its AI Journey
IBM has been working on Artificial Intelligence for decades now, starting with its pioneering research in cognitive computing, machine learning, computer vision, and natural language processing. It has been part of groundbreaking innovations in AI integration in domains such as security, healthcare, education, and finance. Some of the initiatives taken by IBM in the field of AI include the IBM Research AI organisation, the IBM Watson OpenScale, and the IBM Trustworthy AI Framework.

This is not the first time IBM has explored designing analogue chips for AI inference. In fact, researchers have been contemplating the idea for years. Back in 2021, a team at IBM developed chips that used phase-change memory to encode the weights of a neural network directly onto the physical chip. However, previous work in the field had not definitively answered whether such chips could handle the massive AI models dominating the landscape today.
Conclusion
IBM’s breakthrough opens a new avenue for analogue AI, which has potential beyond natural-language tasks. Their exploration extends to other areas, such as an energy-efficient mixed-signal architecture for computer vision recognition. This marks a crucial step towards sustainable AI and paves the way for further innovations in energy-efficient hardware.
Frequently Asked Questions
What Are Some of the Notable AI Projects That IBM Has Worked On?
Over the years, IBM has engaged in a number of artificial intelligence-driven projects, such as Project Debater, the IBM Analogue Hardware Acceleration Kit, Deep Blue, Watson, and the IBM Cloud Pak for Data with AutoSQL.
What Is a MAC Operation?
In digital signal processing, a MAC operation is a fundamental computing step in which the product of two numbers is computed and added to an accumulator, a register that holds intermediate arithmetic results.
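For example, a single MAC step looks like this in code (a trivial sketch with made-up operands):

```python
# One multiply-accumulate (MAC) step: multiply two operands, add the product to the accumulator.
accumulator = 0.0
a, b = 3.0, 4.0           # example operands
accumulator += a * b      # accumulator is now 12.0
```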
What Are Some of the Challenges and Limitations of Analogue AI?
Analogue AI faces several challenges, including precision, variability, and noise. In this context, noise refers to unwanted fluctuations or disturbances in the data or signals, precision refers to the resolution with which values and computations can be represented, and variability refers to differences in behaviour from one device to another. All of these factors can affect the quality and robustness of analogue AI systems.
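As a hedged illustration of why noise matters, the short simulation below perturbs a set of stored weights with random fluctuations and shows how the computed output drifts from the ideal value. The noise level and weights are arbitrary assumptions, not measurements from real devices.

```python
import numpy as np

rng = np.random.default_rng(0)

weights = np.array([0.8, -0.3, 0.5])     # ideal stored weights (illustrative)
inputs  = np.array([0.2,  0.9, 0.4])     # inputs (illustrative)

ideal = weights @ inputs                  # exact result with perfect devices

# Model device noise as small random perturbations of the stored weights.
noise = rng.normal(loc=0.0, scale=0.05, size=weights.shape)
noisy = (weights + noise) @ inputs        # result an imperfect analogue array might produce

print(ideal, noisy, abs(ideal - noisy))   # the error grows as the noise level increases
```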