Ambarella demonstrated its radar and camera sensor fusion chip and software at CES 2023.
At CES 2023, Ambarella demonstrated its centralized architecture for radar processing in autonomous vehicles (AVs), which allows fewer radar sensors to be used in each AV.
Ambarella’s offering combines its CV3 family of domain controller chips with AI algorithms and software from Oculii, which Ambarella acquired in 2021.
Compared with existing system configurations, which typically use radar modules with edge processing, Ambarella’s central-processing setup uses AI to extract higher resolution from standard sensors. The result is an AV perception system that uses fewer radar modules, needs less power, and allows processor resources to be dynamically allocated to the most relevant sensors depending on conditions. It is also easier to perform over-the-air software updates, and cheaper to replace radar modules if they get damaged, according to Ambarella.
“Every radar that’s ever been built for automotive is processed at the edge—the entire processing chain lives inside the sensor module,” Steven Hong, former CEO of Oculii, now VP and general manager of radar technology at Ambarella, told EE Times. “The reason is that for a traditional design, you need more antennas to achieve higher resolution. Imaging radars need at least a degree of resolution, and to achieve that, you typically need hundreds if not thousands of antennas. Each antenna generates a lot of data, and because you’re generating so much data, you can’t move it anywhere else.”
In a typical setup, radars can collect terabytes of data per second, and higher resolution means more antennas and more bandwidth. This limits radar processing to what a small processor inside the sensor module can handle, and pushes the module’s power consumption to tens of watts.
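As a rough illustration of why edge processing has been the norm, the back-of-the-envelope sketch below estimates raw radar data rates for different channel counts. The sample rate, bit depth, and channel counts are assumptions chosen for illustration, not figures from Ambarella or Oculii.

```python
# Back-of-the-envelope radar data-rate estimate.
# All parameters are illustrative assumptions, not Ambarella/Oculii figures.
ADC_SAMPLE_RATE_HZ = 40e6      # samples per second per receive channel (assumed)
BITS_PER_SAMPLE = 16           # ADC resolution (assumed)

def raw_data_rate_gbps(num_rx_channels: int) -> float:
    """Raw ADC data rate in gigabits per second for a given channel count."""
    bits_per_second = num_rx_channels * ADC_SAMPLE_RATE_HZ * BITS_PER_SAMPLE
    return bits_per_second / 1e9

# A conventional imaging radar with ~1,000 channels vs. a "tens of antennas" design:
for channels in (1000, 48):
    print(f"{channels:>5} channels -> ~{raw_data_rate_gbps(channels):.0f} Gbit/s of raw samples")
```

Even with these modest assumptions, a thousand-channel array produces hundreds of gigabits per second of raw samples, which is impractical to ship to a central processor; a design with tens of channels is far easier to move off the sensor.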
“With our technology, we don’t need more antennas to achieve higher resolution,” Hong said. “We use an intelligent, adaptive waveform, which is different to traditional radars.”
Oculii’s AI dynamically adapts the transmitted radar waveform. Because the signal is not constant, missing information can be inferred computationally rather than measured directly.
“We change the information we send out in a way that effectively encodes an additional set of information onto what we receive,” Hong said. “So not only are we receiving information about the environment, we’re receiving it in a way which is actively changed and actively controlled by what we’re sending.”
Encoded in the waveforms sent out are different patterns of timing and phase information.
“The different patterns allow us to effectively calculate what we’re missing rather than measure it,” Hong said. “This is, in many ways, a computational way of solving what was traditionally a brute force hardware solution for the problem.”
The result is that measurements comparable to those of a traditional imaging radar can be made with only “tens to hundreds” of antennas, according to Hong. This drastically reduces the bandwidth required to move the data off the sensor, making a central domain controller/processor feasible.
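A loose, generic analogy for trading antennas for computation is the MIMO virtual-array technique sketched below: by alternating which transmitter fires (a simple form of waveform diversity), a modest number of physical antennas behaves like a much larger virtual aperture. This is a textbook illustration, not Oculii’s proprietary algorithm, and the antenna counts and spacings are assumptions.

```python
import numpy as np

# Generic MIMO virtual-array illustration (not Oculii's method): N_tx x N_rx
# physical antennas act like N_tx * N_rx virtual receive elements, improving
# angular resolution without adding hardware.
WAVELENGTH = 3.9e-3            # ~77 GHz automotive radar wavelength, in metres
d_rx = WAVELENGTH / 2          # receiver spacing (assumed half-wavelength)

def virtual_array(n_tx: int, n_rx: int) -> np.ndarray:
    """Positions of the virtual elements formed by a TX/RX MIMO pair."""
    rx_pos = np.arange(n_rx) * d_rx
    tx_pos = np.arange(n_tx) * n_rx * d_rx   # TX spacing tiles a filled aperture
    return (tx_pos[:, None] + rx_pos[None, :]).ravel()

def beamwidth_deg(aperture_m: float) -> float:
    """Approximate angular resolution (beamwidth) of a filled aperture."""
    return np.degrees(WAVELENGTH / aperture_m)

for n_tx, n_rx in [(1, 8), (4, 8), (12, 16)]:
    va = virtual_array(n_tx, n_rx)
    print(f"{n_tx} TX x {n_rx} RX -> {va.size} virtual elements, "
          f"~{beamwidth_deg(va.max() - va.min()):.1f} deg resolution")
```

In this toy example, 28 physical antennas (12 TX plus 16 RX) yield 192 virtual elements and sub-degree angular resolution, which gives a feel for how computation can substitute for antenna count.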
The effects of using a larger, more powerful central domain controller for this data, rather than processing at the edge, are many. Ambarella’s setup allows radar data to produce “lidar-like” structural information, with better range and higher sensitivity than lidar can offer, all from a radar sensor cheaper than the ones in most cars today.
“Our resolution is below half a degree, we generate tens of thousands of points per frame and we run this at 30 frames per second and up, so we’re generating almost a million points per second,” Hong said. “The sensor itself is actually smaller, thinner, cheaper, and lower power than the existing radars that are already out there in hundreds of millions of vehicles.”
A central domain controller also allows compute resources to be allocated where they are needed most. In practice, that could mean prioritizing front radars over rear radars on a highway but not in a parking lot, or dedicating more resources to radar in conditions that cameras struggle with, such as fog.
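As a toy illustration of what such context-dependent allocation might look like in software, the sketch below splits a compute budget across hypothetical radar groups. The scenario names, sensor labels, and budget fractions are invented for illustration and are not Ambarella’s scheduler.

```python
# Hypothetical compute-budget allocator for a central domain controller.
# Scenario names, sensor labels, and percentages are illustrative assumptions.
from typing import Dict

PROFILES: Dict[str, Dict[str, float]] = {
    # fraction of the radar-processing budget given to each sensor group
    "highway":     {"front_radar": 0.6, "corner_radars": 0.3, "rear_radar": 0.1},
    "parking_lot": {"front_radar": 0.3, "corner_radars": 0.3, "rear_radar": 0.4},
}

def allocate(scenario: str, total_tops: float, fog: bool = False) -> Dict[str, float]:
    """Split a TOPS budget across radar groups; give radar extra budget in fog."""
    weights = PROFILES[scenario]
    budget = total_tops * (1.25 if fog else 1.0)   # borrow compute from the camera stack in fog
    return {sensor: round(budget * w, 2) for sensor, w in weights.items()}

print(allocate("highway", total_tops=20))
print(allocate("parking_lot", total_tops=20, fog=True))
```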
Processing camera and radar data on the same chip also brings new opportunities for low-level sensor fusion: raw camera data and raw radar data can be combined for better analysis.
“Because we can now move all the radar data to the same central location where you also process all the camera data at the native level, this is the first time you can do very deep, low-level sensor fusion,” Hong said.
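A minimal sketch of one ingredient of low-level (early) fusion is projecting the radar point cloud into the camera image so that pixels can be paired with radar range. The calibration matrices and points below are made-up placeholders, not Ambarella values.

```python
import numpy as np

# Early-fusion sketch: project radar points into a camera image so per-pixel
# image features can be augmented with radar range. Calibration is assumed.
K = np.array([[1000.0, 0.0, 640.0],     # camera intrinsics (assumed)
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                           # radar-to-camera rotation (assumed identity)
t = np.array([0.0, -0.5, 0.0])          # radar-to-camera translation in metres (assumed)

def project_radar_to_image(points_xyz: np.ndarray) -> np.ndarray:
    """Return (u, v, depth) pixel coordinates for radar points in front of the camera."""
    cam = points_xyz @ R.T + t           # transform radar points into the camera frame
    cam = cam[cam[:, 2] > 0]             # keep points in front of the camera
    uvw = cam @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]        # perspective divide
    return np.hstack([uv, cam[:, 2:3]])  # pixel coordinates plus depth

radar_points = np.array([[2.0, 0.0, 20.0],   # x (right), y (down), z (forward), metres
                         [-1.5, 0.2, 8.0]])
print(project_radar_to_image(radar_points))
```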
Today, radar and camera data are typically fused only after edge processing has already discarded much of the radar information, which makes the resulting AI models rather brittle, according to Hong.
“They are in many ways overoptimized for certain scenarios and underoptimized for others,” he said, adding that 3D structural information from radar complements camera information well, especially in the case where a camera system comes across an object it hasn’t been trained on—the camera has to know what it is in order to detect it, whereas the radar doesn’t have that constraint.
“In many ways, this is something that our central computing platform allows: it allows you to have these two raw data sources combine, and it allows you to shift your resources between them depending on what’s actually needed,” Hong said.