The rise of machine learning (ML) has enabled an entirely new class of use cases and applications. Specifically, edge computing and on-edge ML have augmented traditional devices with the ability to monitor, analyze and automate daily tasks.
Despite these advances, a major challenge remains: How do you balance the high-power demands of these ML applications with the low-power requirements of standalone, battery-powered devices? For these applications, traditional digital electronics are no longer the best option. Analog computing has emerged as the obvious choice to achieve ultra-low-power ML on the edge.
With the advent of on-edge ML, the industry has seen a proliferation of smart devices that respond to stimuli in the environment. Many households today, for example, host a virtual assistant like Amazon Alexa or Google Home that listens for a keyword before performing a task. Other examples include security cameras that monitor for movement in a frame and, on the industrial side, sensors that detect anomalies in the performance of an industrial machine.
Regardless of the specific application, all of these devices share a fundamental reliance on “always-on” ML.
In other words, these devices continually monitor their environment for some external trigger, such as an audio keyword or anomalous event. Upon detecting this stimulus, the device is triggered into action. In the example of a smart assistant, the device waits to hear the keyword, after which it sends the subsequent audio to the cloud for processing.
Two important factors should be noted about this scheme. First, for the scheme to work, the devices must always be on, constantly sensing the environment for an external trigger that could occur at any time (or never). Second, for the best possible latency and privacy, stimulus detection must be performed on the edge.
The result is a fundamental tradeoff between device power efficiency and performance. To detect the stimulus, the machine must be constantly on and performing ML computations to determine whether the event of interest has occurred. Because many of these devices run the associated ML algorithms on relatively high-power digital systems-on-chip (SoCs), they burn most of their power simply waiting for a trigger event.
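To make the cost of this scheme concrete, here is a minimal sketch of the traditional always-on loop in Python. The sensor, detector, and cloud calls are hypothetical stubs standing in for real device APIs; the point is simply that ML inference runs on every sample, whether or not a trigger ever arrives.

```python
import random

def read_audio_frame() -> list[float]:
    """Stub: pretend to sample one frame of raw audio from the microphone."""
    return [random.uniform(-1.0, 1.0) for _ in range(160)]

def detect_keyword(frame: list[float]) -> bool:
    """Stub: an ML model would run here on every single frame."""
    return random.random() < 0.001      # trigger events are rare

def send_to_cloud(frames: list[list[float]]) -> None:
    """Stub: forward the follow-up audio for heavy processing off-device."""
    print(f"uploading {len(frames)} frames")

def always_on_loop(max_frames: int = 10_000) -> None:
    for _ in range(max_frames):         # on a real device: while True
        frame = read_audio_frame()      # the sensor is sampled constantly...
        if detect_keyword(frame):       # ...and inference runs on every frame
            followup = [read_audio_frame() for _ in range(100)]
            send_to_cloud(followup)     # heavy processing only after a hit

if __name__ == "__main__":
    always_on_loop()
```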
Analog computing for ML
Amid these challenges, analog computing has recently resurged for the purpose of low-power ML.
As opposed to digital computation, which performs arithmetic on discrete binary values (1s and 0s), analog computation operates directly on continuous voltage and current values. Analog computation is particularly well suited to ML applications for several reasons.
One primary reason is that almost all ML applications begin by analyzing data—originally analog—in its raw form. Because the output of most sensors is an analog signal, analog computation makes sense in that it enables computation in the data’s original format—saving power and time by removing the need for costly analog-to-digital conversion in the ML workflow.
Moreover, the computational requirements of ML algorithms align well with the capabilities of analog computers. One of the most fundamental tasks in ML, for example, is the multiply-accumulate (MAC) function. You can easily implement MAC functions in analog circuitry through the summation of currents in and out of a node. In fact, a MAC function requires many transistors for a digital implementation, whereas it can be done more efficiently in analog using just a few transistors.
Finally, analog MAC functions and other analog computations can be performed at significantly lower power than their digital counterparts.
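For reference, the snippet below shows the MAC operation in plain Python. This is only the numeric definition of the function; in an analog implementation, each product becomes a transistor current and the accumulation is simply those currents summing at a node.

```python
def mac(weights: list[float], inputs: list[float], acc: float = 0.0) -> float:
    """Multiply-accumulate: acc + sum of w_i * x_i.

    Digitally, each step needs a multiplier and an adder; in analog, each
    product can be a transistor current and the sum is just the currents
    meeting at a node (Kirchhoff's current law).
    """
    for w, x in zip(weights, inputs):
        acc += w * x
    return acc

# Example: one neuron's pre-activation is a MAC over its inputs.
print(mac([0.5, -1.0, 2.0], [1.0, 2.0, 3.0]))   # -> 4.5
```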
An analog ML solution
Few analog computing solutions currently exist on the market. Among those available, however, Aspinity’s stands out.
Aspinity produces an analog ML chip designed to detect and classify sensor-driven events in their raw format.
The chip is built from configurable analog blocks (CABs): function-specific analog circuits that users can program and configure via software to meet their specific needs.
Within these blocks is Aspinity’s proprietary analog nonvolatile memory (NVM). The NVM stores values like biases and weights and allows the system to fine-tune each CAB to compensate for variability caused by inherent phenomena such as mismatch and temperature. With the NVM, designers can ensure that every analog circuit performs as expected and consistently with respect to the others.
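The sketch below is a conceptual model only (not Aspinity’s actual calibration flow or NVM contents) of how a stored trim value can cancel per-device mismatch: each block’s error is measured once, a correction factor is written to nonvolatile memory, and the signal path applies it thereafter.

```python
from dataclasses import dataclass

@dataclass
class AnalogBlock:
    nominal_gain: float     # the gain the circuit was designed for
    actual_gain: float      # what this particular die delivers (mismatch)
    nvm_trim: float = 1.0   # correction factor stored in nonvolatile memory

    def calibrate(self) -> None:
        """Measure the error once and store the correction in NVM."""
        self.nvm_trim = self.nominal_gain / self.actual_gain

    def process(self, x: float) -> float:
        """Signal path: raw analog gain, corrected by the stored trim."""
        return x * self.actual_gain * self.nvm_trim

# Two blocks that should behave identically despite fabrication mismatch.
a = AnalogBlock(nominal_gain=2.0, actual_gain=2.13)
b = AnalogBlock(nominal_gain=2.0, actual_gain=1.92)
for blk in (a, b):
    blk.calibrate()
print(a.process(1.0), b.process(1.0))   # both ~2.0 after trimming
```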
Users can leverage these tools to customize solutions to their application’s needs, supporting a wide range of input sensor types and digital I/O compatibility.
Power efficiency with analog
Thanks to the emergence of highly reliable and performant analog computing, a promising solution to power efficiency exists for always-on ML.
Systems can leverage analog computing to perform the always-on ML processing necessary for smart devices.
In this architecture, Aspinity’s analog ML chip sits between the sensor and the digital electronics. The upstream analog chip continually performs the necessary ML computations on the raw sensor data and, upon detecting a trigger, wakes up the necessary downstream digital electronics.
The major benefit of this setup is unprecedented power savings. As opposed to traditional architectures in which power-hungry digital electronics are always on, this solution allows for the digital electronics to be turned off unless they are needed.
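A rough sketch of this division of labor, using hypothetical class and method names rather than any real driver API: the analog detector evaluates every raw sample, and the digital SoC only draws power when it is explicitly woken.

```python
class DigitalSoC:
    """Models the power-hungry digital side: off unless woken."""
    def __init__(self) -> None:
        self.powered = False

    def wake(self) -> None:
        self.powered = True

    def handle_event(self, event: float) -> None:
        print(f"SoC processing event: {event}")
        self.powered = False            # go back to sleep when done

class AnalogDetector:
    """Models the always-on analog front end evaluating raw sensor data."""
    def __init__(self, soc: DigitalSoC) -> None:
        self.soc = soc

    def on_sample(self, sample: float) -> None:
        if self.is_event(sample):       # analog ML inference on raw data
            self.soc.wake()             # only now does the digital side run
            self.soc.handle_event(sample)

    @staticmethod
    def is_event(sample: float) -> bool:
        return abs(sample) > 0.9        # stand-in for the real classifier

detector = AnalogDetector(DigitalSoC())
for s in (0.1, 0.2, 0.95, 0.05):
    detector.on_sample(s)
```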
Meanwhile, the analog processor performs the always-on event detection at a minuscule power expenditure of ~20 µA. So, a system that would have originally consumed around 7,450 µA can now consume as little as 70 µA or less, depending on the application.
To further quantify this benefit, internal testing has shown that common applications like voice detection can experience a 92× improvement in power efficiency with the addition of analog computing, as compared to traditional schemes. Other applications, such as glass break detection, have demonstrated power savings of up to 105×.
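A quick back-of-the-envelope check of those figures, treating average current at a fixed supply voltage as a proxy for power, lands in the same range as the measured 92× and 105× results.

```python
# Ratio of the average currents quoted above (same supply voltage assumed).
always_on_digital_uA = 7450   # traditional scheme: digital SoC always awake
analog_first_uA = 70          # analog detector awake, SoC mostly off

improvement = always_on_digital_uA / analog_first_uA
print(f"~{improvement:.0f}x lower average current")   # ~106x
```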
Analog for the future of the edge
As on-edge ML becomes an increasingly important part of our technological world, the need for power efficiency has never been greater. Among all available solutions, analog computing—proven to reduce power consumption by orders of magnitude—is uniquely positioned to propel the field.
To ensure a future that can support the ML we’ve grown accustomed to in a viable and sustainable way, analog computing will be a necessary tool.