How is this team at IISc building next-generation analog chipsets for AI applications?


Deep neural networks (DNNs) have grown in size and complexity, making it difficult for traditional digital processors to deliver the required performance at low power and within available memory. For these reasons, analog data processing is proving increasingly attractive: analog computing techniques achieve higher computational density and energy efficiency than an equivalent digital implementation.

Against this backdrop, researchers at IISc Bangalore published a paper describing a novel design framework for building next-generation analog computer chipsets that are faster and consume less power than the digital chips found in most electronic devices.

“We have developed a novel analog computing paradigm called Shape-based Analog Computing that achieves the desired functional shape using the inherent device physics of transistors together with universal conservation principles. Using this framework, end users can create a modular analog architecture just like in digital design, while maintaining the area and power efficiency of analog,” said Pratik Kumar, a PhD student at IISc Bangalore and one of the study’s authors. The design framework was developed as part of Kumar’s PhD work.

The research team built a prototype analog chipset called ARYABHAT-1 (Analog Reconfigurable Technology And Bias-Scalable Hardware for AI Tasks) using the framework. The chipset can be used for AI-based applications such as object or speech recognition, or those that require massive parallel computing operations at high speeds.

Photo credits: NeuRonICS Lab, DESE, IISc

The research was carried out by Dr. Chetan Singh Thakur, Assistant Professor at the Department of Electronic Systems Engineering (DESE) and head of IISc Bangalore’s NeuRonICS Lab, in collaboration with Shantanu Chakrabartty, Professor at the McKelvey School of Engineering at Washington University in St. Louis. Ankita Nandi, a Prime Minister’s Research Fellow working with Dr. Thakur, was also involved in the research.

In an email conversation with Analytics India Magazine, Pratik Kumar spoke about the team’s work, inspiration and future prospects.

Inspired by the power of the human brain

The research work began in 2019. The researchers say they were fascinated by how powerful and energy-efficient the human brain is: with around 86 billion computing units (neurons) and a power consumption of only around 25 watts, it can surpass even the world’s most powerful supercomputers in computing power, efficiency, and energy consumption. “As engineers, we see the human brain as a mixed-signal processor. While replicating the human brain was not an ideal way forward, it was clear that digital could be augmented with analog to move in a similar direction as the human brain,” said Kumar.

The researchers generalized the margin-propagation design framework using a multi-spline approach to design a basic prototype function that is robust to bias, process-node, and temperature variations. They then used this prototype function to synthesize shape-based analog computing (S-AC) circuits that implement various approximation functions commonly used in ML architectures.
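For context, margin propagation (MP) is an approximation primitive from Chakrabartty’s earlier work: given inputs x_1, …, x_n and a hyperparameter gamma, the MP output is the value z that solves sum_i max(x_i − z, 0) = gamma, a piecewise-linear operation that behaves like a soft maximum and maps naturally onto current-conservation constraints in transistors. The sketch below computes only this basic MP function in NumPy; the multi-spline generalization and the circuit-level S-AC synthesis from the paper are not reproduced here, and the reverse-water-filling solver is just one convenient software route to z.

```python
import numpy as np

def margin_propagation(x, gamma):
    """Solve sum_i max(x_i - z, 0) = gamma for z (the basic MP function).

    MP acts as a piecewise-linear, soft-maximum-like primitive; the S-AC
    framework generalizes this idea with multiple splines. Solved here by
    "reverse water-filling" over the sorted inputs.
    """
    x = np.sort(np.asarray(x, dtype=float))[::-1]   # largest first
    csum = np.cumsum(x)
    for k in range(1, len(x) + 1):
        z = (csum[k - 1] - gamma) / k               # assume k inputs are "active"
        # The guess is consistent if exactly the k largest inputs exceed z.
        if x[k - 1] >= z and (k == len(x) or x[k] <= z):
            return z
    return (csum[-1] - gamma) / len(x)              # fallback: all inputs active

# MP behaves like a soft maximum: z -> max(x) as gamma -> 0.
x = [2.0, 1.0, 0.5]
for g in (0.01, 0.5, 2.0):
    print(g, margin_propagation(x, g))
```

Because the operation only involves thresholding and summation, it can be realized directly by device physics rather than by explicit multipliers, which is what makes it attractive as an analog building block.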

Credits: NeuRonICS Lab, DESE, IISc

According to the authors, this research paves the way for the development of high-performance analog computing systems for machine learning (ML) and artificial intelligence (AI) tasks that are robust across transistor operating regimes, modular like digital designs, and at the same time technology-scalable. This gives their design the modularity and scalability of digital design along with the energy and area efficiency of the analog world.

Basic Challenges

With Moore’s Law coming to an end and Dennard scaling having already hit a wall, the industry has shifted its focus to digital accelerators (such as GPUs, TPUs, and IPUs), which are now proving insufficient to run demanding workloads efficiently. “We’ve reached a limit where we can’t squeeze more performance per watt out of a low technology node; hence several things like dark silicon come into the picture. This challenge is further exacerbated by the exponentially increasing size of ML algorithms, which now require billions of computations. All of this has created a fundamental but serious hardware bottleneck in the design of digital AI accelerators. First, we can’t do more calculations due to fundamental physical limitations, and second is the energy consumption,” Kumar said.

“To date, the power density and performance advantages of analog designs are unmatched by their digital counterparts. However, the popularity of analog designs has long been hampered by the lack of robust modular architectures that can be scaled and synthesized across process technology,” he added.

Future Impact

Regarding the future implications of the research, Kumar explained that the work focuses on solving some of these fundamental challenges. The designed architectures are technology- and bias-scalable, meaning the same architecture can be used both in server applications, where power matters less and speed is critical, and in edge applications such as wearables, where energy efficiency is the priority.

The focus of this work was the design of S-AC circuits for machine learning processors, but the approach can be generalized to other analog processors as well. In fact, the researchers successfully demonstrated the approach on a three-layer neural network, as sketched below. The approach can prove useful for synthesizing large-scale analog deep neural networks and reconfigurable machine learning processors.
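As a toy illustration of such a composition, the sketch below stacks the basic MP primitive from the earlier snippet as the aggregation step in a small three-layer feed-forward network. The layer widths, the random weights, and the per-neuron use of MP are illustrative assumptions; real MP-based designs also handle signed quantities with differential encoding, which this toy skips, and the paper’s actual S-AC network is a circuit-level design, not this software stand-in.

```python
import numpy as np

def margin_propagation(x, gamma):
    """Basic MP primitive: solve sum_i max(x_i - z, 0) = gamma for z."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]
    csum = np.cumsum(x)
    for k in range(1, len(x) + 1):
        z = (csum[k - 1] - gamma) / k
        if x[k - 1] >= z and (k == len(x) or x[k] <= z):
            return z
    return (csum[-1] - gamma) / len(x)

def mp_layer(x, W, gamma=0.5):
    """One layer: each neuron aggregates its weighted inputs through MP
    instead of a plain sum followed by a separate nonlinearity."""
    return np.array([margin_propagation(w * x, gamma) for w in W])

rng = np.random.default_rng(0)
x = rng.normal(size=4)              # toy input vector
W1 = rng.normal(size=(8, 4))        # illustrative random weights
W2 = rng.normal(size=(8, 8))
W3 = rng.normal(size=(2, 8))

h1 = mp_layer(x, W1)                # layer 1
h2 = mp_layer(h1, W2)               # layer 2
y = mp_layer(h2, W3)                # layer 3 (outputs)
print(y)
```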

“We are delighted with the very positive response from the community. This feedback encourages us to continue our work in this area. In fact, a few more related papers are in the pipeline and will appear by the end of the year, enriching the proposed methodology of shape-based analog computing,” Kumar said.
