Neuromorphic developers team up to integrate sensor and processor


Article by: Sally Ward-Foxton

SynSense and Prophesee have teamed up to integrate an event-based image sensor and a neuromorphic processor on a single chip.

SynSense and Prophesee are collaborating to develop a single-chip event-based image sensor that integrates Prophesee’s Metavision image sensor with SynSense’s DYNAP-CNN neuromorphic processor. The companies will work together to design, develop, manufacture and commercialize the combined sensor-processor, producing ultra-low-power sensors that are both small and inexpensive.

“We’re not a sensor company, we’re a processor company,” Dylan Muir, senior director of global business development, algorithms and applications at SynSense, told EE Times. “Since we focus on low-power sensory processing, the closer we can place our hardware to the sensor, the better. Therefore, it makes a lot of sense to work with event-based vision sensor companies.”

SynSense Inivation Speck chip
SynSense previously combined its spiking neural network processor IP with a dynamic vision sensor from Inivation. (Source: SynSense)

SynSense also works with event-based image sensor company Inivation, with whom it developed a 128 x 128 resolution event-based camera module called Speck.

“We plan to move towards higher-resolution pixel arrays with Prophesee,” said Muir. Noting Prophesee’s previous collaboration with Sony, Muir also cited Prophesee’s expertise in achieving low-light sensitivity as an advantage. “In the long term, we are striving to be able to perform high-resolution image processing on the device in a very compact module, and this is more complex than just scaling everything up,” said Muir.

SynSense Inivation Speck camera module
SynSense / Inivation Speck camera module. (Source: SynSense)

A higher-resolution sensor array takes up more space and requires more processing, so the processor cores must be larger. Muir said the silicon requirements for a high-quality image sensor do not match those of compact digital logic, so a stacked architecture or a multi-chip solution seems the most likely approach.

Algorithmic work is also required for a higher resolution sensor; currently, smaller pixel arrays are processed by a single convolutional neural network (CNN). A higher resolution would mean a huge CNN. Alternatively, an image could be tiled and run on multiple CNNs in parallel, or just a portion of the image could be examined. The work is ongoing, said Muir.
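To illustrate the tiling option, here is a minimal Python/NumPy sketch that splits a high-resolution event frame into fixed-size tiles and hands each one to an independent classifier. The frame size, tile size and the per_tile_cnn stand-in are hypothetical, not SynSense’s implementation.

```python
import numpy as np

def tile_frame(frame, tile_size):
    """Split an (H, W) event frame into non-overlapping tile_size x tile_size tiles."""
    h, w = frame.shape
    return [frame[y:y + tile_size, x:x + tile_size]
            for y in range(0, h, tile_size)
            for x in range(0, w, tile_size)]

def process_tiles(frame, tile_size, per_tile_cnn):
    """Run an independent (stand-in) CNN on each tile.

    In hardware each tile could map to its own convolutional cores; here the
    tiles are simply processed in a loop for illustration.
    """
    return [per_tile_cnn(tile) for tile in tile_frame(frame, tile_size)]

# Example: a 512 x 512 event frame split into 128 x 128 tiles, each handled
# by a stand-in "CNN" that just counts active pixels.
frame = (np.random.rand(512, 512) > 0.99).astype(np.uint8)
results = process_tiles(frame, 128, per_tile_cnn=lambda t: int(t.sum()))
print(len(results), results[:4])   # 16 tiles
```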

Event-based vision

Event-based vision sensors like those from Prophesee focus on changes in the scene rather than on capturing full images. The technique is modeled on how the human eye records and interprets visual input; it drastically reduces the amount of data generated and is more effective in low-light scenarios. It can also be implemented using much less power than other image sensors.

Prophesee’s event-based Metavision sensors embed intelligence in each pixel, allowing every pixel to activate independently and thereby trigger an event.
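As a rough illustration of that per-pixel principle, the Python/NumPy sketch below derives ON/OFF events wherever the log-intensity change at a pixel exceeds a contrast threshold. Real event sensors do this asynchronously in per-pixel analog circuitry rather than by differencing frames, and the threshold value here is arbitrary.

```python
import numpy as np

def events_from_frames(prev, curr, threshold=0.15):
    """Toy model of an event-based pixel array.

    Each pixel fires an ON (+1) or OFF (-1) event when the change in its
    log intensity exceeds a contrast threshold. This frame-difference
    version is only a sketch of the per-pixel behaviour described above.
    """
    d = np.log1p(curr.astype(np.float32)) - np.log1p(prev.astype(np.float32))
    polarity = np.zeros_like(d, dtype=np.int8)
    polarity[d > threshold] = 1     # brightness increased -> ON event
    polarity[d < -threshold] = -1   # brightness decreased -> OFF event
    ys, xs = np.nonzero(polarity)
    return [(int(x), int(y), int(polarity[y, x])) for y, x in zip(ys, xs)]

prev = np.full((128, 128), 100, dtype=np.uint8)
curr = prev.copy()
curr[40:60, 40:60] = 180            # a bright object appears
print(len(events_from_frames(prev, curr)), "events generated")
```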

SynSense’s mixed-signal processor for low-dimensional signal processing (audio, bio-signals, vibration monitoring) consumes less than 500 µW. SynSense had no immediate plans to commercialize that technology, and the chip’s resources were insufficient to run a CNN, a requirement for image processing, so the company developed a second, digital architecture tailored to convolutional networks. It is this IP that will be integrated with the Prophesee sensor.

The move to a fully asynchronous digital architecture also meant the design could move to more advanced process technology while consuming less power.

The processor IP consists of a spiking convolution core tailored to event-based versions of CNNs. SynSense applies backpropagation-based training to spiking neural networks; according to Muir, this approach improves the processing of temporal signals beyond standard CNNs that have simply been converted to run in the event space. Backpropagation is achieved by approximating the derivative of a spike during training. Inference, in contrast, is purely spike-based.
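The spike-derivative approximation described above is commonly implemented as a “surrogate gradient”: a hard threshold in the forward pass and a smooth stand-in derivative in the backward pass. The PyTorch sketch below shows this general pattern; it is a generic example with an arbitrarily chosen surrogate function, not SynSense’s training code.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike forward, smooth surrogate derivative backward."""

    @staticmethod
    def forward(ctx, membrane, threshold):
        ctx.save_for_backward(membrane)
        ctx.threshold = threshold
        return (membrane >= threshold).float()   # 0/1 spikes (non-differentiable)

    @staticmethod
    def backward(ctx, grad_output):
        (membrane,) = ctx.saved_tensors
        # Surrogate: derivative of a fast sigmoid centred on the threshold.
        surrogate = 1.0 / (1.0 + 10.0 * (membrane - ctx.threshold).abs()) ** 2
        return grad_output * surrogate, None     # no gradient for the threshold

membrane = torch.randn(8, requires_grad=True)
spikes = SpikeFn.apply(membrane, 1.0)            # spiking forward pass
spikes.sum().backward()                          # gradients flow via the surrogate
print(spikes, membrane.grad)
```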

SynSense’s spiking neuron uses integer logic with 8-bit synaptic weights, a 16-bit neuron state, a 16-bit threshold, and one-bit input and output spikes. The neuron is a simple “integrate and fire” design, the simplest neuron model (compared with more complex models such as “leaky integrate and fire”, the internal state of the simpler version does not decay in the absence of input, which reduces computational requirements). The SynSense neuron adds an 8-bit number to a 16-bit number and then compares the result against the 16-bit threshold.

“At first it was a bit of a surprise to us that we could take the neuron design down to this level of simplicity and get really good performance,” said Muir.

SynSense neuron
SynSense’s digital binary asynchronous neuron uses a simple “Integrate and Fire” design. (Source: SynSense)
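As a minimal sketch of the arithmetic just described, the Python snippet below adds an 8-bit weight into a 16-bit state and compares it against a 16-bit threshold, emitting a one-bit spike. The reset-by-subtraction on firing is an assumption for illustration; the article does not specify the reset behaviour.

```python
import numpy as np

def if_neuron_step(state, weight, threshold, in_spike):
    """One step of a simple integer integrate-and-fire neuron.

    state: 16-bit neuron state, weight: 8-bit synaptic weight,
    threshold: 16-bit firing threshold, in_spike/out_spike: one-bit events.
    """
    state = np.int16(state)
    if in_spike:                                       # 1-bit input spike
        state = np.int16(state + np.int8(weight))      # 8-bit weight into 16-bit state
    out_spike = int(state >= np.int16(threshold))      # compare to 16-bit threshold
    if out_spike:
        state = np.int16(state - np.int16(threshold))  # assumed reset by subtraction
    return state, out_spike

state = np.int16(0)
for t in range(10):
    state, spike = if_neuron_step(state, weight=40, threshold=200, in_spike=1)
    print(t, int(state), spike)                        # fires every fifth input spike
```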

SynSense’s digital chip is tailored for CNN processing, with each CNN layer processed by a different processor core. Cores work asynchronously and independently; the entire processing pipeline is event-driven. In a demonstration of intent-to-interact monitoring (determining whether the user is looking at the device or not), the SynSense stack processes inputs with less than 100 ms latency, with sensor and processor together consuming less than 5 mW of dynamic power.
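As a toy illustration of that layer-per-core, event-driven arrangement, the Python sketch below runs each “layer” on its own worker thread that computes only when an event arrives in its queue. The layer functions are placeholders and the threading model is purely illustrative; the actual chip uses asynchronous digital cores, not software threads.

```python
import queue
import threading

def make_core(name, fn, inbox, outbox):
    """An independent 'core' that processes events from its inbox as they arrive."""
    def run():
        while True:
            event = inbox.get()
            if event is None:          # shutdown signal propagates down the pipeline
                outbox.put(None)
                break
            outbox.put(fn(event))      # compute only when an event arrives
    return threading.Thread(target=run, name=name, daemon=True)

q0, q1, q2 = queue.Queue(), queue.Queue(), queue.Queue()
cores = [
    make_core("layer1", lambda e: e * 2, q0, q1),   # placeholder per-layer work
    make_core("layer2", lambda e: e + 1, q1, q2),
]
for core in cores:
    core.start()

for event in [1, 2, 3]:                # feed input events into the first layer
    q0.put(event)
q0.put(None)

while (out := q2.get()) is not None:   # collect outputs from the last layer
    print("output event:", out)
```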

SynSense has released multiple iterations of its processor core, with the Speck sensor ready for commercialization in real-time vision applications in smartphones and smart home devices. The camera’s 128 x 128 resolution is sufficient for short-range applications in the home (outdoor applications such as surveillance require a higher resolution).

SynSense was spun out of the University of Zurich in 2017. The company employs around 65 people across a research and development office in Zurich, a system and product development base in Chengdu, China, and an IC design team in Shanghai. It recently closed a pre-B financing round that included investments from Westport Capital, Zhangjiang Group, CET Hik, CMSK, Techtronics, Ventech China, CTC Capital and Yachang Investments (SynSense declined to disclose the amount).

A hardware development kit for the event-based vision processor is now available for gesture recognition, presence recognition and intent-to-interact applications in smart home devices. Samples of the vision processor itself, a developer kit for audio and IMU processing, samples of the Speck camera module and the Speck module development kit will be available by the end of 2022.

This article was originally published on EE Times.

Sally Ward-Foxton covers AI technology and related topics for EETimes.com and all aspects of European industry for EE Times Europe magazine. Sally has spent more than 15 years writing about the electronics industry in London, UK. She has written for Electronic Design, ECN, Electronic Specifier: Design, Components in Electronics and many more. She holds a master’s degree in Electrical and Electronic Engineering from the University of Cambridge.
