Intel’s EyeQ 5 vs. Nvidia’s Xavier: Wrong Debate

Published: 2017-12-07
Author/source: Junko Yoshida
Posted by: Ameya360

  Does comparing Intel’s EyeQ 5 with Nvidia’s Xavier make sense? That is the question.

  Nvidia and Intel are engaged in a specsmanship battle over AI chips for autonomous vehicles that reached a new high (or, more accurately, a new low) when Intel CEO Brian Krzanich recently spoke at an auto show in Los Angeles. Krzanich claimed that EyeQ 5, designed by Intel subsidiary Mobileye, “can deliver more than twice the deep-learning performance efficiency” of Nvidia’s Xavier SoC.

  After the Intel CEO’s keynote, Danny Shapiro, Nvidia's senior director of automotive, called EE Times from L.A. and cried foul.

  First, Shapiro explained that comparing the two chips with two different rollout dates on two different process nodes (Xavier on 16nm vs. EyeQ5 on 7nm) isn’t kosher.

  According to Shapiro, Xavier, which is “already in bring up now,” will be in volume production in 2019. Intel, in contrast, said that EyeQ 5 is “sampling in 2018, production/volume in 2020, and first customer car in 2021 (BMW iNext).”

  Second, Shapiro pointed out that the 30 watts of power consumption at 30 trillion operations per second (TOPS) that Intel quoted for Nvidia’s Drive PX Xavier is “for the entire system, CPU, GPU and memory, as opposed to just deep learning cores as in the EyeQ 5.”

  Out of line

  So, was it out of line for Intel to compare EyeQ5 to Xavier?

  “Of course it was,” said Jim McGregor, founder and principal analyst at Tirias Research. But he sees an even bigger issue: in autonomous vehicle solutions, “nobody is comparing a platform to a platform today.”

  Indeed, comparing the specs of the two SoCs alone seems almost silly without discussing what other chips, beyond those SoCs, are needed to complete a Level 4 or Level 5 autonomous vehicle platform.

  In our recent interview with Intel, the CPU giant revealed plans to soon unveil “a multi-chip platform for autonomous driving,” which will combine the EyeQ 5 SoC, Intel’s low-power Atom SoCs, and other hardware including I/O and Ethernet connectivity. But the company has not yet offered details on the TOPS or watts of the entire platform.

  Simply put, Xavier has a far more powerful AI engine than EyeQ5.

  Mike Demler, senior analyst at the Linley Group, agreed. “Forget the process-node nonsense. To achieve Level 4/5, it starts with the TOPS of the neural-network engine and the compute performance of the CPUs,” he said. “Then you look at the power, because if you don’t have the performance, it really doesn’t matter.”

  If so, why are the two giants fighting tooth and nail to one-up each other with their SoC specs? The answer is that, while they are still in an early phase of the autonomous vehicle technology battle, neither company wants it said that its solution consumes too much power or is less capable than its rival’s.

  This escalating specsmanship, however, speaks volumes about Intel’s commitment to muscle its way back into the automotive chip market, noted McGregor.

  McGregor reminded us that Intel was once the dominant supplier of microcontrollers to Ford Motor Co. Intel’s 8061 microcontroller and its derivatives were reportedly used in almost all Ford automobiles built from 1983 to 1994. When Ford transitioned from Intel microcontrollers to Motorola, Intel lost its presence in the automotive market. By 2005, Intel had announced it would discontinue production of all its automotive microcontroller chips.

  Now, Intel is counting on its $15 billion Mobileye acquisition to redeem itself in the automotive world.

  How many TOPS do L4/L5 cars need?

  Nvidia insisted that a genuine apples-to-apples comparison of the power efficiency of the two chips can only be made when Xavier and EyeQ 5 are compared at 24 TOPS, assuming that both were designed on a 7nm process node.

  Based on these parameters (24TOPS at 7nm), Shapiro calculated, “With our current Xavier product, 24TOPS would consume approximately 12W of power for just the DLA (deep learning acceleration) +GPU.” He added, “If we move from the current 16nm to 7nm, we expect power to be reduced by approximately 40 percent, so that would put Xavier at about 7W.”

  The Nvidia executive concluded, “Hence the accurate chart would be: at 24 TOPs performance, EyeQ 5 is at 10W and Nvidia Xavier is at 7W.”
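  As a sanity check on that arithmetic, here is a minimal sketch using only the figures quoted in this article (Nvidia’s 12 W estimate for the DLA plus GPU at 24 TOPS, its assumed 40 percent power saving from 16nm to 7nm, and Intel’s 24 TOPS at 10 W for EyeQ 5); none of these are measured values.

```python
# Back-of-the-envelope check of the efficiency figures quoted above.
XAVIER_TOPS = 24                 # operating point used for the comparison
XAVIER_W_16NM = 12.0             # Nvidia's estimate for DLA + GPU at 24 TOPS on 16nm
NODE_POWER_SAVING = 0.40         # Nvidia's assumed power reduction from 16nm to 7nm
EYEQ5_TOPS, EYEQ5_W = 24, 10.0   # Intel's quoted EyeQ 5 operating point

xavier_w_7nm = XAVIER_W_16NM * (1 - NODE_POWER_SAVING)  # ~7.2 W, rounded to ~7 W

print(f"Xavier (7nm estimate): {xavier_w_7nm:.1f} W -> {XAVIER_TOPS / xavier_w_7nm:.2f} TOPS/W")
print(f"EyeQ 5 (as quoted):    {EYEQ5_W:.1f} W -> {EYEQ5_TOPS / EYEQ5_W:.2f} TOPS/W")
```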

  Demler doesn’t buy such an argument.

  The debate, he said, isn’t about an SoC-to-SoC spec comparison, but about the actual performance necessary for L4/L5 autonomous cars. On one hand, “Intel/Mobileye claim their 24 TOPs is sufficient,” said Demler. On the other hand, Nvidia is “building Pegasus [platform] to max out at 320 TOPS.” The real questions, said Demler, are these: “Who has more L4/L5 engagements? How much performance will we need?” His answer: “Nobody knows, even if Mobileye thinks they do.”

  Moving targets

  When Nvidia originally announced the Xavier chip, the company quoted 20TOPS at 20W. But now, Nvidia is saying it’s 30TOPS at 30W. What happened?

  While acknowledging its originally announced spec, Shapiro explained, “if we cranked the clocks we can scale to 30TOPS at 30W.”
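  The revised figure is at least self-consistent: if throughput and dynamic power both scale roughly linearly with clock frequency at a fixed voltage, efficiency stays flat at about 1 TOPS per watt. The sketch below shows that first-order scaling; it is a simplification (real silicon typically needs a voltage bump at higher clocks, which costs extra power), not Nvidia’s own math.

```python
# First-order clock scaling: at a fixed voltage, throughput and dynamic power
# both grow roughly linearly with frequency, so TOPS/W stays about the same.
base_tops, base_watts = 20, 20    # Xavier as originally announced
clock_scale = 1.5                 # "crank the clocks" by 50 percent

tops = base_tops * clock_scale    # 30 TOPS
watts = base_watts * clock_scale  # 30 W
print(f"{tops:.0f} TOPS at {watts:.0f} W = {tops / watts:.1f} TOPS/W")
```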

  Similarly, Intel today seems to quote an EyeQ 5 SoC spec very different from what was originally announced by Mobileye, before it was acquired by Intel.

  EyeQ 5, according to Mobileye’s initial announcement, would have processing power of 12 tera operations per second (TOPS) at power consumption below 5 W. But last week, when EE Times talked to Jack Weast, Intel’s principal engineer and chief architect of autonomous driving solutions, he described EyeQ 5 as delivering 24 TOPS at 10 W.

  Asked when and how the SoC’s performance suddenly doubled, an Intel spokeswoman told us that Intel is now planning to deliver multiple EyeQ 5s, “including the 12TOPs SKU announced previously and the 24TOPS SKU we compared to the Nvidia Xavier product.”

  Pressed to explain if this means there are two cores integrated inside EyeQ5 24TOPS, or that Intel is using two chips inside a system, the Intel spokeswoman declined to elaborate. She said, “I can’t comment on how we came up with that level of TOPS yet. We’ll have more details on architecture and platform going into CES.”

  Meanwhile, Demler was told by Mobileye that “to support Level 4/5, they will use two EyeQ 5s.”
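  Whichever way the 24 TOPS is packaged, the quoted numbers simply scale together: doubling the originally announced 12 TOPS and just-under-5 W lands on the 24 TOPS and 10 W Intel now cites, leaving efficiency essentially unchanged. A minimal check of that arithmetic, treating “below 5 W” as exactly 5 W (so the original efficiency is really a lower bound):

```python
# The original and the new EyeQ 5 figures double together, so TOPS/W is the
# same whether 24 TOPS comes from one larger die or a pair of the original parts.
orig_tops, orig_watts = 12, 5.0   # Mobileye's original announcement: 12 TOPS, <5 W
new_tops, new_watts = 24, 10.0    # figure Intel quoted to EE Times

print(orig_tops / orig_watts, new_tops / new_watts)  # 2.4 and 2.4 TOPS/W
```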

  The analyst community agrees that the performance target of the Mobileye-designed EyeQ 5 is too low compared with Nvidia’s Xavier. Demler described the two EyeQ 5s as “just the ‘eyes’ of the system.” In Intel’s upcoming platform, the brains will come from Atom SoCs.

  “EyeQ has relatively weak old MIPS CPUs compared to Nvidia’s custom ARMv8 CPUs (eight of them in Xavier),” noted Demler. “Those custom ARM cores deliver very powerful ‘brains,’ so you don’t need an Atom or Xeon. Therefore, Intel shouldn’t deny that its ‘PC’ processors come into play.”

  In short, Demler agreed with Nvidia’s Shapiro, saying that Nvidia’s Xavier offers DLA and GPU, while EyeQ5 provides “specialized computer-vision cores.”

  So, if Xavier vs. EyeQ 5 is an apples-to-oranges comparison, as many industry observers have pointed out, what should be used for a more accurate comparison?

  Demler took a shot at it, suggesting that we “match up Drive PX with the equivalent of Intel’s Go system.” Intel’s Go is a development platform for autonomous driving.

  “Remove the FPGAs Intel originally described and drop in EyeQ processors. That’s what they’re building for BMW,” Demler surmised. “So let’s say a minimum performance dual-EyeQ5 + Atom = ~50 watts (just a rough estimate) with 40 TOPs of neural network performance, compared to dual Xavier at 60TOPS/60Watts. That’s a more apples/apples comparison. You can’t leave the Intel brain chip out of the equation.”
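  Put into numbers, Demler’s platform-level match-up looks like this (his rough estimates as quoted above, not measured platform specs):

```python
# Platform-level comparison using the rough numbers Demler quotes above.
platforms = {
    "2x EyeQ 5 + Atom (Demler's estimate)": (40, 50),  # (TOPS, watts)
    "2x Xavier":                            (60, 60),  # (TOPS, watts)
}
for name, (tops, watts) in platforms.items():
    print(f"{name:38s} {tops:3d} TOPS / {watts:3d} W = {tops / watts:.2f} TOPS/W")
```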

  Power efficiency

  Phil Magney, founder and principal advisor for Vision Systems Intelligence (VSI), told EE Times that both Intel and Nvidia “are making a bigger deal out of this than may be necessary.”

  While all this specsmanship was originally about the power efficiency of the rival SoCs, Magney said, “I would expect most L4/L5 vehicles are going to have electric powertrains so the wattage may be a moot point.”

  He observed that when Tesla switched from Mobileye to Nvidia for Autopilot, “we saw no noticeable degradation in range. So electric powertrains will handle this without much impact.”

  In Magney's opinion, “The same could be said for 48-volt architectures and mild hybrids since they have the capacity to handle greater loads. On the other hand, a traditional ICE would take a hit but this hit would be at the expense of fuel efficiency and emissions.”

  Nvidia’s Shapiro disagreed, arguing that power consumption does matter. “If a robotic car is expected to drive for nine hours per day, you want to be as energy efficient as possible, because it will eventually affect the EV’s range.”
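  A toy calculation shows why both positions are defensible. The pack size, the vehicle’s energy consumption, and the two compute-power figures below are illustrative assumptions, not vendor numbers; only the nine-hour duty cycle comes from Shapiro’s remark.

```python
# Toy estimate of how much driving range an always-on compute load costs.
PACK_KWH = 100.0        # assumed EV battery capacity
WH_PER_KM = 180.0       # assumed vehicle consumption, Wh per km
HOURS_PER_DAY = 9.0     # robotaxi duty cycle quoted by Shapiro

for label, watts in [("single ~60 W compute platform", 60.0),
                     ("heavier, fully redundant stack", 500.0)]:
    kwh_per_day = watts * HOURS_PER_DAY / 1000.0
    km_lost = kwh_per_day * 1000.0 / WH_PER_KM
    share = 100.0 * kwh_per_day / PACK_KWH
    print(f"{label}: {kwh_per_day:.2f} kWh/day, "
          f"~{share:.1f}% of the pack, ~{km_lost:.0f} km of range")
```

  On those assumptions the daily hit ranges from well under one percent of the pack to a few percent, which is roughly the gap between Magney’s “moot point” and Shapiro’s range concern once redundancy is counted.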

  Noting that fully automated vehicles “will have redundancies in most of the systems including the AV systems,” Magney estimated, “Couple two of everything and that adds to massive loads on the electrical systems.” Still, he remains optimistic: “Battery technologies and battery management are improving and so too are ranges so this problem gets solved in time.”
