An “unprecedented” new computer chip could help revolutionize AI, according to its developers.
The system should make it possible for increasingly complex artificial intelligence models to run directly on chips, without having to send information to the cloud.
Currently, most AI applications are not performed locally on a device, because of limits on battery life and processing power. Instead, information is sent over the Internet to another computer, where it is processed, and the results are sent back.
Eventually, experts hope artificial intelligence can be embedded in “edge” devices: objects like phones that can perform detailed AI tasks anytime, anywhere.
The new chip is a step in that direction: it allows a wide range of different AI tasks to be performed faster and more efficiently than previous designs.
Usually, such efficiency comes at the expense of versatility: chips can either use less power or handle more kinds of tasks, but not both. The new system, however, appears to overcome this trade-off.
It does so using “resistive random-access memory” (RRAM), which allows computation to be performed directly in memory rather than being offloaded to separate processing units, cutting the time and energy spent shuttling data back and forth.
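The basic idea behind this kind of compute-in-memory can be sketched in a few lines. In an RRAM crossbar, each stored weight is a cell's conductance; applying input voltages to the rows makes each cell draw a current proportional to conductance times voltage, and the currents summing on each column produce a matrix-vector product in place. The simulation below is a hypothetical illustration of that principle, not the chip's actual circuitry; all variable names are illustrative.

```python
import numpy as np

# Illustrative sketch only: weights stored as RRAM cell conductances G,
# inputs applied as row voltages V. By Ohm's law each cell passes
# current I = G * V, and by Kirchhoff's current law the currents on a
# column sum, yielding one output of a matrix-vector product in place.

rng = np.random.default_rng(0)
conductances = rng.uniform(0.0, 1.0, size=(4, 3))  # 4x3 crossbar of cells
voltages = np.array([0.5, 1.0, 0.25, 0.75])        # inputs on the 4 rows

# Each column current is the sum of G[i, j] * V[i] down that column:
column_currents = conductances.T @ voltages

# The same result computed explicitly, cell by cell, for comparison:
expected = np.array([
    sum(conductances[i, j] * voltages[i] for i in range(4))
    for j in range(3)
])
assert np.allclose(column_currents, expected)
```

The point of the sketch is that the multiply-accumulate happens where the weights already live, so no data has to move between a memory bank and a separate processor for each operation.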
The system has already proven remarkably capable, both in the energy it consumes and in the time each task takes.
For example, it achieved 99 percent accuracy when recognizing handwritten digits and 84.7 percent accuracy on a Google voice-recognition task.
The scientists hope to improve it further, making it faster, more efficient, and ready for wider use.
A paper describing the results, “A Compute-in-Memory Chip Based on Resistive Random-Access Memory,” was published today in Nature.