Ameya360: Ambarella Selects Samsung’s 5nm Technology for Automotive AI Controller

Published: 2023-02-23 16:08
Author: Ameya360
Source: Web
Views: 3166

  Samsung Electronics Co. Ltd’s Foundry business is providing its 5nm process technology to Ambarella Inc. for its newly announced CV3-AD685 automotive AI central domain controller. The collaboration is intended to advance the next generation of autonomous-vehicle safety systems by bringing new levels of AI processing performance, power efficiency and reliability.


  The CV3-AD685 is the first production version of Ambarella’s CV3-AD family of automotive AI central domain controllers, with Tier-1 automotive suppliers announcing they will offer solutions using the CV3-AD system-on-chip (SoC) product family. Samsung’s 5nm process technology is optimized for automotive-grade semiconductors, with extremely tight process controls and advanced IP for exceptional reliability and outstanding traceability.

  Ambarella will rely on Samsung’s 5nm process maturity and the technology’s solid track record. This 5nm process is backed by the company’s extensive experience in automotive foundry process, IP, and service package development to enable manufacturers to create cutting-edge innovations in assisted and automated mobility.

  “Ambarella and Samsung Foundry have a rich history of collaboration, and we are excited to bring their world-class 5nm technology to our new CV3-AD685 SoCs,” said Fermi Wang, President and CEO at Ambarella. “Samsung’s proven automotive process technology allows us to bring new levels of AI acceleration, systems integration and power efficiency to ADAS and L2+ through L4 autonomous vehicles.”

  The CV3-AD685 integrates Ambarella’s next-generation CVflow AI engine, which includes neural network processing that is 20 times faster than the previous generation of Ambarella’s CV2 SoCs. It also provides general-vector and neural-vector processing capabilities to deliver the overall performance required for full autonomous driving (AD) stack processing, including computer vision, 4D imaging radar, deep sensor fusion and path planning.

  “Samsung brings 5nm EUV FinFET technology to automotive applications for unprecedented ADAS and vision processor performance,” said Sang-Pil Sim, executive vice president and head of Foundry Corporate Planning at Samsung Electronics. “With Tier-1 automotive suppliers already adopting the technology, we believe other automotive companies will also consider using the Ambarella CV3-AD SoC product family manufactured in Samsung’s 5nm process.”

  The CV3-AD685 will be the first in the CV3-AD product family to use Samsung’s 5nm process. The SoC integrates advanced image processing, a dense stereo and optical flow engine, Arm Cortex-A78AE and Cortex-R52 CPUs, an automotive GPU for visualizations, and a hardware security module (HSM). It features an “algorithm first” architecture that supports the entire autonomous-driving software stack.

  The high-performance, power-efficient and scalable CV3-AD family, built specifically for ADAS, complements a wide range of assisted-driving solutions while advancing vehicle automation. The integrated CV3-AD685 SoC enables information from various sensors to be fused for robust L2+ to L4 autonomous driving. Samsung Foundry’s industry-leading process technology and advanced 3D-packaging solutions power many of the latest mobile, HPC and automotive products.

  Samsung’s 5nm process is also backed by the Samsung Advanced Foundry Ecosystem (SAFE) program. The SAFE program ensures close collaboration between Samsung Foundry, ecosystem partners, and customers to deliver robust SoC designs based on certified key design components including Process Design Kits (PDK), reference flows with Design Methodologies (DM), a variety of Intellectual Properties (IP), and on-demand design support.



Related Reading
Ameya360: AV Radar Moves to Domain Controller for First Time
  Ambarella demonstrated its radar and camera sensor-fusion chip and software at CES 2023, showing a centralized architecture for radar processing in autonomous vehicles (AVs) that allows fewer radar sensors to be used in each AV. Ambarella’s offering combines its CV3 family of domain-controller chips with AI algorithms and software from Oculii, which Ambarella acquired in 2021. Compared with existing configurations, which typically use radar modules with edge processing, processing radar data on Ambarella’s central processor achieves higher resolution with standard sensors via AI. The result is an AV perception system that uses fewer radar modules, needs less power, and allows processor resources to be dynamically allocated to the appropriate sensors depending on conditions. It is also easier to perform over-the-air software updates, and cheaper to replace radar modules if they are damaged, according to Ambarella.

  “Every radar that’s ever been built for automotive is processed at the edge—the entire processing chain lives inside the sensor module,” Steven Hong, former CEO of Oculii and now VP and general manager of radar technology at Ambarella, told EE Times. “The reason is that for a traditional design, you need more antennas to achieve higher resolution. Imaging radars need at least a degree of resolution, and to achieve that, you typically need hundreds if not thousands of antennas. Each antenna generates a lot of data, and because you’re generating so much data, you can’t move it anywhere else.”

  In a typical setup, radars can collect terabytes per second of data, and higher resolution means more antennas and more bandwidth. This limits radar processing to what can be performed by a small processor inside the sensor module, and pushes the module’s power consumption to tens of watts.

  “With our technology, we don’t need more antennas to achieve higher resolution,” Hong said. “We use an intelligent, adaptive waveform, which is different from traditional radars.”

  Oculii’s AI dynamically adapts the radar waveform it generates. Because the signal is non-constant, missing information can be derived rather than measured directly. “We change the information we send out in a way that effectively encodes an additional set of information onto what we receive,” Hong said. “So not only are we receiving information about the environment, we’re receiving it in a way which is actively changed and actively controlled by what we’re sending.”

  Different patterns of timing and phase information are encoded in the transmitted waveforms. “The different patterns allow us to effectively calculate what we’re missing rather than measure it,” Hong said. “This is, in many ways, a computational way of solving what was traditionally a brute-force hardware solution for the problem.”

  The result is that measurements similar to a traditional radar’s can be made with only “tens to hundreds” of antennas, according to Hong. This drastically reduces the bandwidth required to transport the data, making it feasible to use a central domain controller.

  Using a larger, more powerful central domain controller, rather than processing at the edge, has many effects. Ambarella’s setup allows radar data to produce structural information that is “lidar-like,” with better range and higher sensitivity than lidar can offer, all with a cheaper radar sensor than the ones in most cars today. “Our resolution is below half a degree, we generate tens of thousands of points per frame and we run this at 30 frames per second and up, so we’re generating almost a million points per second,” Hong said. “The sensor itself is actually smaller, thinner, cheaper, and lower power than the existing radars that are already out there in hundreds of millions of vehicles.”

  A central domain controller also allows compute resources to be allocated where they are needed most. In practice, this could mean prioritizing front radars over rear radars on a highway versus in a parking lot, or dedicating more resources to radar in conditions that cameras struggle with, such as fog.

  Processing camera and radar data on the same chip also opens new opportunities for low-level sensor fusion, since raw camera data and raw radar data can be combined for better analysis. “Because we can now move all the radar data to the same central location where you also process all the camera data at the native level, this is the first time you can do very deep, low-level sensor fusion,” Hong said.

  Today, fusing radar and camera data after information has been lost to edge processing makes AI models rather brittle, according to Hong. “They are in many ways overoptimized for certain scenarios and underoptimized for others,” he said, adding that 3D structural information from radar complements camera information well, especially when a camera system encounters an object it hasn’t been trained on: the camera has to know what an object is in order to detect it, whereas the radar has no such constraint.

  “In many ways, this is something that our central computing platform allows: it allows you to have these two raw data sources combine, and it allows you to shift your resources between them depending on what’s actually needed,” Hong said.
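Hong’s throughput figures can be sanity-checked with simple arithmetic. The Python sketch below is a hedged back-of-the-envelope check: the 30,000 points-per-frame and 1,000-vs-100 antenna counts are assumed midpoints of the ranges he quotes (“tens of thousands,” “hundreds if not thousands,” “tens to hundreds”), not numbers stated exactly in the article.

```python
# Back-of-the-envelope check of the radar point-cloud rate Hong quotes.
# Assumption: 30,000 points per frame, within "tens of thousands".
POINTS_PER_FRAME = 30_000
FRAMES_PER_SECOND = 30          # "30 frames per second and up"

points_per_second = POINTS_PER_FRAME * FRAMES_PER_SECOND
print(f"{points_per_second:,} points/s")   # 900,000 -- "almost a million"

# Raw data bandwidth scales roughly with antenna count, so cutting a
# ~1,000-antenna imaging radar (assumed, within "hundreds if not
# thousands") down to ~100 antennas ("tens to hundreds") shrinks the
# data that must leave the sensor by about an order of magnitude --
# which is what makes hauling it to a central domain controller feasible.
TRADITIONAL_ANTENNAS = 1_000
OCULII_ANTENNAS = 100
reduction = TRADITIONAL_ANTENNAS / OCULII_ANTENNAS
print(f"~{reduction:.0f}x less raw data to move off the sensor")
```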
Published 2023-02-16 15:26 · Views: 3060
Ambarella Shifts From GoPro to Robo
  While GoPro suffers from the market saturation of mobile action cameras for sports enthusiasts, Ambarella, a very high-resolution image-processor company that once generated as much as 30 percent of its revenue from GoPro, showcased at the Consumer Electronics Show its newly architected computer vision chip, CV1, designed primarily for highly automated vehicles.

  On the eve of CES, GoPro announced plans to exit the drone business, cut 250 jobs and lower its fourth-quarter revenue estimate. Fermi Wang told EE Times that Ambarella has already experienced a decline in GoPro-based revenue: last year it was “10-plus percent,” and he expects the company’s GoPro revenue to sink to a “very low number” this year. Making up for the lost revenue are the surveillance (professional and consumer) and auto OEM markets, he noted. Today, the company derives roughly 15 percent of its revenue from the automotive sector.

  What Ambarella sees as its ace in the hole, though, is a new CVflow architecture that delivers stereovision processing and deep-learning perception algorithms. Ambarella’s goal for CV1, and for a series of computer vision chips based on CVflow to follow, is to get a head start in the self-driving vehicle market while capturing other automotive applications, including ADAS, electronic mirrors, and surround view.

  In the summer of 2015, Ambarella acquired VisLab, a startup spun out of the University of Parma, Italy, for $30 million. A team led by Professor Alberto Broggi, a founder of VisLab, is the backbone of Ambarella’s AV software stacks for highly automated vehicles.

  In Ambarella’s off-site demo in Las Vegas, Broggi showed off two cameras: a short-range monocular camera (up to a few meters) and a stereoscopic camera for views up to 150 meters, both based on CV1. By applying a CNN, the monocular camera can detect and classify objects from known classes such as pedestrians, vehicles, and motorcycles. The stereoscopic camera detects generic objects, which it has not been trained to classify, as 3D structures, much the way a lidar sees things in point clouds. Compared with a lidar that generates 2 million 3D points per second, Broggi said, the long-range stereoscopic camera captures “800 to 900 million 3D points per second.” The secret of Ambarella’s CV1 is its ability to bring far more information into computer vision, because CV1 supports computer-vision processing at up to 4K, or 8-megapixel, resolution.

  While the VisLab team brings advancements in deep learning to the CVflow architecture, Ambarella applies years of expertise in low-power HD and Ultra HD image processing. “There couldn’t have been a better union than VisLab and Ambarella,” Broggi told us; there is no overlap between what the two teams do. More important, CVflow exploits Ambarella’s image-signal pipelines for high-dynamic-range (HDR) imaging, Ultra HD processing and automatic calibration in a stereo camera.

  While not many companies talk about it, Broggi said stereo cameras need to be very stable. Calibration can be a challenge, especially in automotive applications, because cars vibrate and operate across a wide range of temperatures. With Ambarella’s new CV1 chip, “we do real-time auto calibration on the fly, on the chip,” he said. There is no need for infrared cameras to process images in low light, either, he added. Asked about Foresight’s new quad-camera unit, designed to fuse data from infrared (night-vision) and day cameras, Broggi said, “We don’t need that. Our HDR can process images in very low-light conditions.”

  But the clincher is that the CVflow architecture is fully programmable and highly efficient, delivering significant computer-vision performance at very low power: CV1 runs at 4 watts, according to Broggi. “You don’t need a powerful GPU to do all these things like CNN-based classification and stereovision processing for detecting generic objects without training,” Broggi said. “We are doing all of it just on CV1.”

  The CV1 by itself is a very high-performance computer: inside are Ambarella’s home-grown “engine” for CNNs and DNNs, an image DSP, two quad-core ARM Cortex-A53 CPUs, and other accelerators, Broggi explained. Asked about the price of CV1, Chris Day, vice president of marketing and business development at Ambarella, told us it is “a lot cheaper than a GPU…below $50.” Ambarella got the chip, fabricated in a 14nm CMOS process, back from the foundry last May, and the company is currently engaging with a number of customers in the automotive market, Day added.
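The point-rate gap Broggi cites can be checked against the other numbers in the article. The Python sketch below is a hedged sanity check: it assumes each disparity-mapped pixel of the 8-megapixel image yields one 3D point, and the implied frame rate is derived, not quoted, since the article does not state the capture rate.

```python
# Sanity check on Broggi's stereo-vs-lidar point-rate comparison.
LIDAR_POINTS_PER_S = 2_000_000      # "2 million 3D points per second"
STEREO_POINTS_PER_S = 850_000_000   # midpoint of "800 to 900 million"
PIXELS_PER_FRAME = 8_000_000        # "up to 4K or 8-megapixel resolution"

# Assumption: one 3D point per disparity-mapped pixel, so the implied
# frame rate is points/s divided by pixels/frame.
implied_fps = STEREO_POINTS_PER_S / PIXELS_PER_FRAME
print(f"implied frame rate: ~{implied_fps:.0f} fps")

# Ratio of the two sensors' point rates as quoted.
ratio = STEREO_POINTS_PER_S / LIDAR_POINTS_PER_S
print(f"stereo yields ~{ratio:.0f}x the points of the lidar cited")
```

At the quoted figures this works out to roughly 106 fps and a ~425x point-rate advantage, consistent with the “two orders of magnitude” flavor of Broggi’s claim.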
Published 2018-01-12 · Views: 3655