Neuromorphic supercomputers are on the rise.
SpiNNcloud Systems, a spin-off of Technische Universität Dresden in Germany, is developing a neuromorphic supercomputer with up to 10 billion neurons.
The SpiNNaker2 system is the result of a decade of work within a €1 billion European project, based on the research of Professor Steve Furber, one of the original designers of the Arm processor.
The SpiNNaker2 neuromorphic supercomputer is designed to process event-driven neural networks, deep neural networks, and AI based on symbolic rules. This differs from Intel's Hala Point system launched last month, which features 1 billion neurons.
"We are launching the largest commercial hybrid supercomputer, combining event-based properties with distributed deep neural networks and rule-based engines, as you can flexibly exchange information in real-time on ARM cores," said Hector Gonzalez, co-CEO of SpiNNcloud, to eeNews Europe.
The initial half-size implementation, built in Dresden, Germany, will have up to 5 billion neurons. It uses custom chips, each with 152 ARM M4F microcontroller cores, built on GlobalFoundries' 22nm FDX fully depleted silicon-on-insulator (FD-SOI) process and using Adaptive Body Bias (ABB) to reduce power consumption.
Each chip also has neural network accelerator cores and memory management cores on the periphery, along with hardware accelerators for exponential and logarithmic functions and two true random number generators (RNGs) in each core that sample the thermal noise of the PLL for stochastic and random-walk operations.
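A core-local true RNG enables the kind of stochastic and random-walk operations mentioned above. A minimal software analogue is sketched below; note that Python's `secrets` module merely stands in for the chip's hardware noise source, which is an assumption for illustration only.

```python
import secrets

def random_walk(steps: int, start: int = 0) -> int:
    """1-D random walk: each step moves +1 or -1 with equal probability.

    secrets.randbits(1) stands in here for the chip's true RNG, which
    samples the thermal noise of the PLL; that substitution is an
    assumption made for this sketch.
    """
    position = start
    for _ in range(steps):
        position += 1 if secrets.randbits(1) else -1
    return position

# After an even number of steps the walker always lands on an even offset.
print(random_walk(100) % 2 == 0)
```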
There are 48 chips on a board, 90 boards in a rack, and 8 racks, totaling 5.25 million cores.
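The scale described above can be checked with simple arithmetic, using the per-chip core count and the 1,000 neurons-per-core mapping quoted in the article:

```python
# Back-of-the-envelope check of the system scale described in the article.
CORES_PER_CHIP = 152
CHIPS_PER_BOARD = 48
BOARDS_PER_RACK = 90
RACKS = 8
NEURONS_PER_CORE = 1_000  # the software mapping quoted by Gonzalez

chips = CHIPS_PER_BOARD * BOARDS_PER_RACK * RACKS  # 34,560 chips
cores = chips * CORES_PER_CHIP                     # ~5.25 million cores
neurons = cores * NEURONS_PER_CORE                 # ~5.25 billion neurons

print(f"{cores:,} cores, {neurons:,} neurons")
```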
This 5.25 million-core supercomputer supports 5 billion neurons, as each core handles 1,000 neurons; the full 10 billion-neuron version, with 1,440 boards, is the maximum size on offer. The limitation is the routing and maintaining real-time performance: the initial system updates in 1 millisecond, and the larger the machine, the more challenging that becomes, but even with 1,440 boards it can still complete updates within 1 millisecond.

The system employs a hybrid of synchronous operation at the chip level and asynchronous communications between circuit boards and racks. The supply voltage of the chips can vary dynamically between thresholds of 0.45V and 0.6V depending on the data requirements, and the frequency can also change dynamically in response to the data spikes travelling through the system.
Gonzalez said, "This is energy-efficient computing, but you can also gain the ability to operate code in an event-based manner for artificial intelligence, not only around the spike framework, but we can also use sparsity at different scales, not just for computation but at the communication level and across different networks. This is difficult to achieve in GPU architectures."
The neuromorphic supercomputer will be offered in the cloud, capable of hosting many different types of AI frameworks simultaneously, whether for sensor networks or to improve the accuracy and security of generative AI.
Gonzalez said, "We have been in conversations with people in smart cities who are interested in real-time processing of sensor streams, and with others using many small networks in parallel for drug discovery, which is very machine-friendly."
"You can scale the symbolic engine as well as the DNN, which could be YOLO or the GPT layers of an LLM. This can be very good at not hallucinating and at dealing with incomplete data, and it can also be used for sensor streams with reasoning capabilities."
"We are also very interested in running brain models, which are typically difficult to run on other architectures due to their connectivity requirements."
One of the first customers is the Sandia National Laboratories in the United States. "Brain-like computing requires programmable dynamics, event-based communication, and extreme scale," said Fred Rothganger of Sandia National Laboratories. "SpiNNaker2 is the most flexible neural supercomputer architecture available today. At Sandia, we are excited to build applications on this fantastic system."
Furber's team at the University of Manchester has been developing the software stack for the supercomputer over the past seven years, combining tools for spiking neural networks, graph-based DNNs, and symbolic logic through the TVM interface developed at the University of Washington.
In early 2023, the SpiNNaker2 project received a grant of 2.5 million euros. This funding was awarded by the European Innovation Council (EIC) as part of its package for deep-tech startups.
The EIC selected 27 projects from 289 proposals submitted to the Transition Grant scheme to receive a total of 79.3 million euros in EU funding. The scheme aims to turn the results of European research projects into commercially viable businesses.

Professor Angela Rösen-Wolff, Vice President for Research at Technische Universität Dresden, added: "Funding from the EIC will enable SpiNNaker2 to be expanded to mobile applications, such as human-machine interaction in the CeTI Cluster of Excellence, and to be tested in real industrial environments."
Co-CEO of SpiNNcloud Systems, Christian Eichhorn, stated: "Artificial intelligence like ChatGPT is now entering our daily lives, thus representing a revolution comparable to the internet. Training this AI model for one month consumes as much electricity as 3,000 households would use. The scale of future electricity consumption due to the use of artificial intelligence is currently unpredictable. We are developing the most energy-efficient computing hardware for large-scale applications, as this will be key to significantly reducing the carbon footprint of artificial intelligence."
Co-CEO of SpiNNcloud Systems, Hector Gonzalez, added: "Our inspiration comes from the human brain, which can process the most complex tasks with only 30 watts. The future of computing technology must be inspired by the brain."