The human brain processes complex information at dizzying speed while consuming remarkably little energy. This has drawn growing attention to the brain as a model for developing computer technologies. Scientists see neuromorphic computing as the next big step in computational technology, one that could significantly change how we process information and solve complex tasks across many domains.

What is neuromorphic computing, how does it work, what does it have in common with the human brain, and what challenges does it still need to overcome? Read on to find out.

What is Neuromorphic Computing?

Neuromorphic computing is an interdisciplinary field that combines principles from neuroscience, computer science, and electrical engineering. The term “neuromorphic” means “brain-like”: neuromorphic computers are designed and built to mimic the structure and functions of the human brain.

The primary goal of neuromorphic computing is to create hardware that operates similarly to neural networks in biological organisms, allowing for more efficient computing with lower energy consumption and higher speed than the ordinary computers we use today.

This field is still relatively new. It has very few real-world use cases, apart from research conducted by universities, governments, and large tech companies like IBM and Intel Labs.

History of Neuromorphic Computing

The concept of neuromorphic computing emerged in the 1980s. The term was first used by American scientist Carver Mead in his work on very-large-scale integration (VLSI) systems. Mead believed that the computational abilities of the brain far exceeded those of traditional computing systems, especially in tasks related to image recognition and sensory data processing.

The first generation of neuromorphic systems used analog circuits to mimic the behavior of neurons and synapses in the brain. Their advantage was the ability to perform real-time computations, which was especially useful in robotics and sensory data processing applications. However, these systems were prone to variability and noise, limiting their scalability and reliability.

The second generation appeared in the early 2000s. These systems used digital circuits to simulate the spiking behavior of neurons, offering greater precision and scalability than their analog counterparts. However, these systems also faced challenges, particularly in terms of energy efficiency and the complexity of implementing learning algorithms.

The modern third generation is characterized by the integration of memory and processing in a single device, often built on “memristive” technologies. These devices, including phase-change memory and resistive random-access memory, can store and process information in the same location, much like neurons and synapses in the brain. This significantly reduces energy consumption and increases computational efficiency.

How neuromorphic computing works

Neuromorphic architecture is often modeled on the cerebrum’s neocortex, which is considered the seat of “higher-level” cognitive functions such as sensory perception, motor commands, spatial reasoning, and language. The neocortex’s layered structure and intricate connections are crucial to its capacity to process sophisticated information, and they underpin human cognition.

The neocortex consists of neurons and synapses that relay signals almost instantly. This is why you can react the moment you accidentally step on a nail.

Neuromorphic computers try to mimic this efficiency by forming so-called spiking neural networks. These arise when artificial neurons that store data behave like biological ones, connecting to one another through artificial synaptic devices that transmit electrical signals.

It is all built on neuromorphic chips dedicated to emulating the neurons and synapses of a biological brain. These chips are composed of numerous neuron-like threshold switches interconnected in a dense network. Each switch in the network can send, receive, and process signals, much like a biological neuron. The strength of the connections between switches plays the role of synaptic weights in a biological brain, and it can be adjusted, which allows the system to gradually learn and adapt.
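To make the idea concrete, here is a minimal sketch of one such threshold switch, modeled as a leaky integrate-and-fire neuron, a standard abstraction in spiking-network research. Every name and parameter value below is illustrative rather than taken from any real chip:

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron: a common abstraction for
# the neuron-like threshold switches described above. Parameter values
# are illustrative, not taken from any particular neuromorphic chip.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold  # potential needed to fire a spike
        self.leak = leak            # fraction of potential kept each step
        self.potential = 0.0

    def step(self, weighted_input):
        """Accumulate input; emit a spike (1) when the threshold is crossed."""
        self.potential = self.potential * self.leak + weighted_input
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return 1
        return 0

# Two inputs connected to one neuron through adjustable synaptic weights.
weights = np.array([0.4, 0.3])
neuron = LIFNeuron()
for t, spikes in enumerate([[1, 0], [1, 1], [0, 1], [1, 1]]):
    print(f"t={t} inputs={spikes} spike={neuron.step(np.dot(weights, spikes))}")
```

Raising or lowering a weight changes how readily the neuron fires, and that is the knob a learning process turns.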

How neuromorphic computing differs from conventional computing

The architecture of neuromorphic computing follows a different principle than traditional computing, known as the von Neumann architecture.

Von Neumann computers process information in binary format, meaning everything is either a one or a zero. By design, they are sequential, with a clear separation between data processing (CPUs) and memory storage (RAM).

In contrast, neuromorphic computers can have millions of artificial neurons and synapses that simultaneously process different pieces of information. This gives the system far greater computational capabilities compared to von Neumann computers. Neuromorphic computers also more closely integrate memory and processing, which speeds up tasks that involve large data volumes.
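As a toy illustration of that parallelism, the sketch below updates a million artificial neuron potentials in a single simultaneous step instead of visiting them one at a time; vectorized arithmetic stands in here for the physically parallel circuits of real neuromorphic hardware:

```python
import numpy as np

# Illustrative analogy only: a vectorized update stands in for the
# physically parallel neuron circuits of a neuromorphic chip.
n_neurons = 1_000_000
potentials = np.zeros(n_neurons)
weights = np.random.rand(n_neurons)        # one input synapse per neuron, for simplicity
inputs = np.random.rand(n_neurons) > 0.99  # ~1% of neurons receive a spike this timestep

# All one million neurons are updated at once, instead of a sequential
# fetch-process-store loop over each neuron's state in separate memory.
potentials += weights * inputs
print(f"{(potentials >= 0.9).sum()} neurons crossed an illustrative firing threshold")
```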

Von Neumann computers have been the standard for decades and are used for a wide range of tasks, from word processing to scientific modeling. However, they are not energy-efficient and often face data transfer bottlenecks that slow down performance. Over time, von Neumann architectures will struggle to deliver the necessary computational power increases, leading researchers to explore alternative architectures like neuromorphic and quantum computing.

Key components of neuromorphic systems

  • Neurons and synapses. Artificial neurons and synapses in neuromorphic chips replicate the behavior of biological neurons and synapses. They exchange information through spikes or impulses, similar to how the brain processes information.
  • Event-driven processing. Unlike traditional systems, which process data continuously, these systems are event-driven: they consume energy and process information only when an event occurs, which yields significant energy savings (see the sketch after this list).
  • Parallel processing. Neuromorphic systems can handle multiple processes simultaneously, much like the human brain. This parallelism allows for faster and more efficient computations, especially in tasks related to pattern recognition and sensory data processing.
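Here is a small sketch of the event-driven idea from the list above, with made-up timestamps and spike magnitudes: computation happens only on the timesteps where a spike actually arrives, and silent timesteps cost essentially nothing:

```python
from collections import deque

events = deque([(3, 0.8), (7, 0.2), (9, 1.1)])  # (timestep, spike magnitude)

processed = 0
for t in range(10):
    # Event-driven: all computation is skipped on silent timesteps.
    if events and events[0][0] == t:
        _, magnitude = events.popleft()
        processed += 1
        print(f"t={t}: processing spike of magnitude {magnitude}")

print(f"Worked on {processed} of 10 timesteps; a clocked system works on all 10.")
```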

Where neuromorphic computing is used

Machine learning and deep learning. Algorithms created to imitate the way the human brain learns can take advantage of the system’s parallel processing, which makes them faster and more energy-efficient than they would be on traditional computing systems.

Robotics. The parallel computing of neuromorphic systems can make robotic systems more efficient. For example, neuromorphic vision systems can enhance robots’ visual processing, allowing them to navigate better. These systems emulate the human eye’s capacity to focus on important objects and ignore minor detail, which improves both computing efficiency and the interpretation of visual data.
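One way to picture this selective attention is the toy event-based vision scheme below (a hypothetical helper function, not any vendor’s API): instead of reprocessing whole frames, it emits events only for pixels whose brightness changed enough to matter:

```python
import numpy as np

def frame_to_events(prev, curr, threshold=10):
    """Return (row, col, polarity) events for pixels that changed enough."""
    diff = curr.astype(int) - prev.astype(int)
    rows, cols = np.where(np.abs(diff) >= threshold)
    return [(int(r), int(c), int(np.sign(diff[r, c]))) for r, c in zip(rows, cols)]

prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1, 2] = 50                     # a single bright spot appears

print(frame_to_events(prev, curr))  # only 1 of 16 pixels generates work: [(1, 2, 1)]
```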

Data analytics. Neuromorphic systems can analyze large volumes of data in real time to find correlations and anomalies that point to potential problems or opportunities.

Healthcare. Parallel processing can enable faster and more precise analysis of medical images, potentially leading to much earlier disease detection. Moreover, the ability of neuromorphic systems to learn and adapt can be used to personalize medical treatment.

The role of neuromorphic computing in artificial intelligence

The main advantage of neuromorphic computing is its potential for low energy consumption. Traditional AI systems, especially those based on deep learning, require a lot of computing capacity and energy. By contrast, neuromorphic systems, which imitate the energy-efficient processing of the human brain, can perform complex computations far more economically. For instance, IBM’s TrueNorth, a chip with a million programmable neurons and 256 million programmable synapses, consumes only about 70 milliwatts of power.

Neuromorphic computing also has potential for real-time processing. In AI systems, information processing is usually divided into separate training and inference stages. In systems like a biological brain, however, these processes can run simultaneously. This stems from the parallelism inherent in neuromorphic architecture and from its flexibility, which allows it to react in real time to changes in input data.
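One way learning and inference can share a single pass is sketched below with a simplified spike-timing-dependent plasticity (STDP) rule, a common learning mechanism in spiking networks; the constants here are illustrative:

```python
import math

def stdp_update(weight, dt, lr=0.05, tau=20.0):
    """Strengthen the synapse when the pre-synaptic spike precedes the
    post-synaptic one (dt > 0); weaken it when the order is reversed."""
    if dt > 0:
        return weight + lr * math.exp(-dt / tau)
    return weight - lr * math.exp(dt / tau)

# The weight is nudged on every spike pair as data flows through, so
# "training" happens during normal operation rather than in a separate phase.
w = 0.5
for dt in [5.0, 12.0, -3.0, 8.0]:   # spike-time differences in milliseconds
    w = stdp_update(w, dt)
    print(f"dt={dt:+.0f} ms -> weight={w:.3f}")
```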

Another important feature is the ability to work with uncertainty. Conventional AI systems typically struggle with incomplete or contradictory data, while neuromorphic systems can process it more efficiently. A biological brain is remarkably stable and adaptive; it keeps functioning well despite ambiguity and change.

The leading companies in neuromorphic computing

Intel, a prominent technology company, has achieved strong results in this domain with its research chip Loihi. The chip, named after a submarine volcano, is meant to accelerate the development of neuromorphic algorithms and systems. It uses a digital architecture inspired by the brain’s neurons and synapses, allowing it to learn and make decisions based on patterns and associations.

IBM has developed TrueNorth, a CMOS chip that emulates the brain’s neurons and synapses while consuming very little power. It contains 5.4 billion transistors and 4,096 neurosynaptic cores, united into a network of a million programmable neurons and 256 million programmable synapses.

The technology startup BrainChip has created Akida, a neuromorphic system-on-chip that, the company claims, takes AI to a level unattainable with any other existing technology. The Akida chip is designed to provide a complete edge AI network with ultra-low power consumption for vision, audio, and olfactory applications and innovative sensors.

Challenges of neuromorphic computing

The complexity of the human brain is one of the biggest challenges facing neuromorphic computing today. Yet by trying to re-create this gray matter in electronics, researchers are also learning more about how the brain works internally, and the more scientists learn about the human brain, the more new possibilities they discover.

Another issue is energy efficiency. While the human brain gets by on about 20 W, modern neuromorphic systems remain far less efficient.

The third challenge is the need for a universal programming language. Unlike traditional computing, which has a set of standardized programming languages, neuromorphic computing has no default one. This complicates the exchange of experience between researchers and developers and slows down progress in the field.

The fourth issue is integrating neuromorphic systems with traditional computing. Although neuromorphic systems have succeeded in image recognition and sensor-data processing, they are less efficient at tasks that require precise calculations. A hybrid system combining traditional and neuromorphic computing would therefore be ideal for many tasks, but integrating the two is a complex problem that has yet to be fully solved.

And the fifth issue is hardware limitations. Modern systems are silicon-based, which limits their speed and energy efficiency. Despite constant efforts to develop new materials and technologies, such as memristors and phase-change materials, these remain at an early stage of development.

These challenges aside, the potential advantages of neuromorphic computing, such as increased performance on AI/ML-related tasks, make it a promising research domain. However, there is still a long way to go before these limitations are overcome and that potential is fully realized.