Brian Krzanich, Intel’s CEO, came to Los Angeles for the “AutoMobility LA” auto show flush with forecasts of how autonomous driving will change every aspect of future vehicles, from cabin design to entertainment and life-saving safety systems.
As a leading autonomous vehicle chip company, Intel also seized the moment to set the record straight on the efficiency of the EyeQ5 chip, developed by Mobileye (now an Intel company), compared to Nvidia’s Drive PX Xavier SoC designed for autonomous driving.
During his speech, Krzanich, referring to the recently completed Mobileye acquisition, stressed that Intel “can deliver more than twice the deep-learning performance efficiency than the competition [meaning Nvidia].”
As Nvidia strives to brand itself as a leader in AI-based autonomous driving technology through relentless promotion of its Drive PX platform, Intel appears to have decided to charge full-tilt into the brewing battle of specsmanship.
Incorrectly quoted
Speaking with EE Times, Jack Weast, Intel's principal engineer and chief architect of autonomous driving solutions, described Intel as a company that tends to lean conservative when it comes to touting its chips’ performance. As the war of words escalates, though, Weast said, “We are tired of seeing us incorrectly quoted.”
Weast complained that Intel’s rivals and the media often incorrectly compared Nvidia’s Drive PX to Intel’s desktop PC chips. In an apples-to-apples comparison, Mobileye’s fifth-generation vision-sensor fusion chip must instead be compared, he said, with Nvidia’s Xavier SoC. EyeQ5 delivers 24 trillion operations per second (TOPS) at 10 watts, Weast said. In contrast, the Drive PX Xavier offers 30 TOPS of performance while consuming 30 watts of power. “We are 2.4 times more efficient,” said Weast.
Of course, Nvidia is now promoting its latest Pegasus platform, scheduled for delivery in 2018 and designed to deliver 320 TOPS — more than 10x the performance of its predecessor — at 500 watts of power. “Pegasus is new, but its efficiency isn’t getting better,” Weast said.
Nvidia’s Pegasus couples two Xavier SoCs with two next-generation discrete GPUs featuring hardware acceleration.
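Weast’s 2.4x figure, and his jab at Pegasus, follow directly from the numbers quoted above. Purely as a back-of-the-envelope check of that arithmetic, using the vendor-claimed specifications cited in this article rather than independent measurements, here is a minimal sketch:

```python
# Efficiency (TOPS per watt) computed from the vendor-quoted figures cited above.
# These are marketing claims, not independent measurements.
chips = {
    "Mobileye EyeQ5": (24, 10),             # 24 TOPS at 10 W
    "Nvidia Drive PX Xavier": (30, 30),     # 30 TOPS at 30 W
    "Nvidia Drive PX Pegasus": (320, 500),  # 320 TOPS at 500 W
}

for name, (tops, watts) in chips.items():
    print(f"{name}: {tops / watts:.2f} TOPS/W")

# Output:
#   Mobileye EyeQ5: 2.40 TOPS/W
#   Nvidia Drive PX Xavier: 1.00 TOPS/W   -> the 2.4x gap Weast cites
#   Nvidia Drive PX Pegasus: 0.64 TOPS/W  -> more raw TOPS, lower TOPS/W
```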
The mystery that remains, even to some experts, is how Intel plans to combine Mobileye’s “eye” with an Intel microprocessor “brain” in a highly automated vehicle.
Intel, in fact, might partly share the blame for the market’s confusion. The CPU giant has remained silent about what sort of SoCs it’s been developing on its own, separate from Mobileye’s EyeQ5.
Intel to launch multi-chip platform
According to Weast, Intel is planning to unveil soon — leading up to the Consumer Electronics Show in January — “a multi-chip platform for autonomous driving.” The solution will combine the EyeQ5 SoC, Intel’s low-power Atom SoCs, and other hardware including I/O and Ethernet connectivity, he explained.
When Intel unveiled its GO development platform for autonomous driving earlier this year, it described its Atom processor C3000 as a chip that “delivers high performance per watt, packing substantial compute into low-power designs.”
Asked how the Atom SoC shares processing tasks with EyeQ5, Weast said, “We looked at the entire set of workloads necessary for autonomous vehicles.” Then, he said, “We allocated and partitioned the compute loads” among multiple chips.
Asked whether an FPGA is part of that multi-chip solution, Weast said no. “There are some customers looking at FPGAs for certain applications such as custom I/O or security, but it’s not part of our new multi-chip platform.”
Division of labor
When Mobileye originally announced EyeQ5, before being acquired by Intel, the Israeli company touted the new SoC as the “brain” of autonomous vehicles, tasked to serve as “the vision central computer performing sensor fusion” for fully autonomous (Level 5) vehicles.
If so, where does Intel’s Atom SoC come in?
Autonomous driving requires different levels of sensor fusion, Weast explained. In deep-learning acceleration, some sensor fusion requires a chip to process highly parallel, multi-threaded chunks of code. “For that, EyeQ5 is ideal.” Meanwhile, there is also a need for higher-level, environmental sensor fusion, which looks at trajectories and validation, Weast explained. “A CPU is a better fit to perform such tasks.”
In Intel’s view, Weast said, when it comes to enabling highly automated driving, “We didn’t have to cram everything into one SoC, such as EyeQ5.” Intel had “the luxury of an opportunity to figure out where the spare cycles are, and how the compute workload should be partitioned,” he explained.
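Intel has not disclosed how its platform actually partitions these workloads. Purely to illustrate the division of labor Weast describes (highly parallel deep-learning fusion on an EyeQ5-class accelerator, higher-level environmental fusion on a CPU), here is a hypothetical sketch; every task name and the routing rule are invented for illustration only:

```python
# Hypothetical sketch only: Intel has not published its actual workload partitioning.
# "parallel" tasks stand in for deep-learning sensor fusion (accelerator-friendly);
# "sequential" tasks stand in for environmental fusion such as trajectory checks and
# validation (CPU-friendly). All task names below are invented for this illustration.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    kind: str  # "parallel" or "sequential"

def partition(tasks):
    """Split the workload between an accelerator queue and a CPU queue."""
    accelerator, cpu = [], []
    for task in tasks:
        (accelerator if task.kind == "parallel" else cpu).append(task.name)
    return accelerator, cpu

workload = [
    Task("camera_object_detection", "parallel"),
    Task("radar_lidar_grid_fusion", "parallel"),
    Task("trajectory_planning", "sequential"),
    Task("plan_validation", "sequential"),
]

accelerator_jobs, cpu_jobs = partition(workload)
print("accelerator:", accelerator_jobs)
print("cpu:", cpu_jobs)
```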
As soon as the Mobileye deal was completed last August, everyone on the team “immediately dove into the project,” Weast said.
Intel will shortly detail its multi-chip platform designed for autonomous driving, Weast promised. “At that point, we should be able to offer a platform-level comparison” between Nvidia’s Drive PX and Intel’s solution, he said.