Intel’s EyeQ 5 vs. Nvidia’s Xavier: Wrong Debate

Source: Junko Yoshida
2017/12/07

Does comparing Intel’s EyeQ 5 with Nvidia’s Xavier make sense? That is the question.

Intel CEO Brian Krzanich at his keynote presentation during Automobility LA on Nov. 29, 2017 (Source: Intel)

Nvidia and Intel are engaged in a specsmanship battle over AI chips for autonomous vehicles, one that reached a new high — or, more accurately, a new low — when Intel CEO Brian Krzanich recently spoke at an auto show in Los Angeles. Krzanich claimed that EyeQ 5 — designed by Intel subsidiary Mobileye — “can deliver more than twice the deep-learning performance efficiency” of Nvidia’s Xavier SoC.

Intel claimed that the Mobileye EyeQ5 SoC delivers 2.4 TOPS per watt for 2.4 times greater deep learning performance efficiency than Nvidia’s Xavier during Automobility LA (Source: Intel)
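The arithmetic behind Intel's chart can be reproduced from figures quoted in this article. The sketch below is a back-of-the-envelope check, not vendor data: it assumes the 24 TOPS at 10 W that Intel quotes for EyeQ 5 later in this story, and the 30 TOPS at 30 W Intel cited for Nvidia's Drive PX Xavier.

```python
# Back-of-the-envelope check of Intel's "2.4x" efficiency claim.
# All figures are as quoted in this article, not from data sheets.
eyeq5_tops, eyeq5_watts = 24, 10    # EyeQ 5 SKU Intel quotes
xavier_tops, xavier_watts = 30, 30  # Drive PX Xavier, full-system power per Intel

eyeq5_eff = eyeq5_tops / eyeq5_watts      # 2.4 TOPS/W
xavier_eff = xavier_tops / xavier_watts   # 1.0 TOPS/W

print(f"EyeQ 5: {eyeq5_eff:.1f} TOPS/W")
print(f"Xavier: {xavier_eff:.1f} TOPS/W")
print(f"Ratio:  {eyeq5_eff / xavier_eff:.1f}x")  # 2.4x, matching Intel's chart
```

The ratio only holds, of course, if both wattage figures measure the same thing — which is exactly what Nvidia disputes below.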

After the Intel CEO’s keynote, Danny Shapiro, Nvidia's senior director of automotive, called EE Times from L.A. and cried foul.

First, Shapiro explained that comparing the two chips with two different rollout dates on two different process nodes (Xavier on 16nm vs. EyeQ5 on 7nm) isn’t kosher. 

According to Shapiro, Xavier, which is “already in bring up now,” will be in volume production in 2019. Intel, in contrast, said that EyeQ 5 is “sampling in 2018, production/volume in 2020, and first customer car in 2021 (BMW iNext).”

Second, Shapiro pointed out that Nvidia’s Drive PX Xavier’s 30 watts of power consumption at 30 trillion operations per second (TOPS), quoted by Intel, is “for the entire system, CPU, GPU and memory, as opposed to just deep learning cores as in the EyeQ 5.”

Out of line

So, was it out of line for Intel to compare EyeQ5 to Xavier? 

“Of course it was,” said Jim McGregor, founder and principal analyst at Tirias Research. But he sees an even bigger issue: in autonomous vehicle solutions, “nobody is comparing a platform to a platform today.”

Indeed, comparing the specs of the two SoCs alone seems almost pointless without discussing what other chips — beyond those SoCs — are needed to complete a Level 4 or Level 5 autonomous vehicle platform.

In our recent interview with Intel, the CPU giant revealed plans to unveil soon “a multi-chip platform for autonomous driving,” which will combine the EyeQ 5 SoC, Intel’s low-power Atom SoCs, and other hardware including I/O and Ethernet connectivity. But the company has not yet offered any TOPS or wattage figures for the platform as a whole.

Simply put, Xavier has a far more powerful AI engine than EyeQ5.

Mike Demler, senior analyst at the Linley Group, agreed. “Forget the process-node nonsense. To achieve Level 4/5, it starts with the TOPS of the neural-network engine and the compute performance of the CPUs,” he said. “Then you look at the power, because if you don’t have the performance, it really doesn’t matter.”

If so, why are the two giants fighting tooth and nail to one-up each other with their SoC specs? The answer is that, while they are still in an early phase of the autonomous vehicle technology battle, neither company wants it said that its solution consumes too much power or is less capable than its rival’s.

This escalating specsmanship, however, speaks volumes about Intel’s determination to muscle its way back into the automotive chip market, noted McGregor.

McGregor reminded us that Intel was once the dominant supplier of microcontrollers to Ford Motor Co. Intel’s 8061 microcontroller and its derivatives were reportedly used in almost all Ford automobiles built from 1983 to 1994. When Ford switched from Intel microcontrollers to Motorola’s, Intel lost its presence in the automotive market. In 2005, Intel announced it would discontinue production of all its automotive microcontroller chips.

Now, Intel is counting on its $15 billion Mobileye acquisition to redeem itself in the automotive world.

How many TOPS do L4/L5 cars need?

Nvidia insisted that a genuine apples-to-apples comparison of the power efficiency of the two chips can only be made when Xavier and EyeQ 5 are compared at 24 TOPS, assuming that both were designed on a 7nm process node.

Based on these parameters (24TOPS at 7nm), Shapiro calculated, “With our current Xavier product, 24TOPS would consume approximately 12W of power for just the DLA (deep learning acceleration) +GPU.” He added, “If we move from the current 16nm to 7nm, we expect power to be reduced by approximately 40 percent, so that would put Xavier at about 7W.”
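Shapiro's normalization can be sketched in the same back-of-the-envelope style, using only the figures he quoted: ~12 W for Xavier's DLA + GPU alone at 24 TOPS on 16nm, and an expected ~40 percent power reduction from moving to 7nm.

```python
# Sketch of Shapiro's process-node normalization (his quoted figures, not measurements).
dla_gpu_watts_16nm = 12.0   # Xavier DLA + GPU alone, at 24 TOPS, on 16 nm
node_power_scaling = 0.60   # ~40% power reduction expected going 16 nm -> 7 nm

watts_7nm = dla_gpu_watts_16nm * node_power_scaling
print(f"Projected Xavier DLA+GPU at 7 nm: ~{watts_7nm:.0f} W for 24 TOPS")
# ~7 W, vs. the 10 W Intel quotes for EyeQ 5 at the same 24 TOPS
```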

The Nvidia executive concluded, “Hence the accurate chart would be: at 24 TOPS performance, EyeQ 5 is at 10W and Nvidia Xavier is at 7W.”

Demler doesn’t buy that argument.

The debate, he said, isn’t about an SoC-to-SoC spec comparison, but about the actual performance necessary for L4/L5 autonomous cars. On one hand, “Intel/Mobileye claim their 24 TOPS is sufficient,” said Demler. On the other, Nvidia is “building Pegasus [platform] to max out at 320 TOPS.” At issue, said Demler, are two questions: “Who has more L4/L5 engagements? How much performance will we need?” His answer: “Nobody knows, even if Mobileye thinks they do.”

Moving targets

When Nvidia originally announced the Xavier chip, the company quoted 20TOPS at 20W. But now, Nvidia is saying it’s 30TOPS at 30W. What happened?

While acknowledging its originally announced spec, Shapiro explained, “if we cranked the clocks we can scale to 30TOPS at 30W.”

Xavier, described by Nvidia as its “all-new AI supercomputer” in September 2016, is designed for use in self-driving cars. (Source: Nvidia)

Similarly, Intel today seems to quote an EyeQ 5 SoC spec very different from what Mobileye originally announced before its acquisition by Intel.

EyeQ 5, according to Mobileye’s initial announcement, would have processing power of 12 tera operations per second at power consumption below 5W. But last week, when EE Times talked to Jack Weast, Intel's principal engineer and chief architect of autonomous driving solutions, he described EyeQ 5 as delivering 24TOPS at 10W.

Asked when and how the SoC’s performance suddenly doubled, an Intel spokeswoman told us that Intel is now planning to deliver multiple EyeQ 5’s, “including the 12TOPs SKU announced previously and the 24TOPS SKU we compared to the Nvidia Xavier product.”

The block diagram of EyeQ 5 when Mobileye announced it in May, 2016 (Source: Mobileye)

Pressed to explain if this means there are two cores integrated inside EyeQ5 24TOPS, or that Intel is using two chips inside a system, the Intel spokeswoman declined to elaborate. She said, “I can’t comment on how we came up with that level of TOPS yet. We’ll have more details on architecture and platform going into CES.”

Meanwhile, Demler was told by Mobileye that “to support Level 4/5, they will use two EyeQ 5s.”

Analysts agree that the performance target of the Mobileye-designed EyeQ 5 is too low compared with Nvidia’s Xavier. Demler described the two EyeQ 5s as “just the ‘eyes’ of the system.” In Intel’s upcoming platform, the brains will come from Atom SoCs.

“EyeQ has relatively weak old MIPS CPUs compared to Nvidia’s custom ARMv8 CPUs (eight of them in Xavier),” noted Demler. “Those custom ARM cores deliver very powerful ‘brains,’ so you don’t need an Atom or Xeon. Therefore, Intel shouldn’t deny that its ‘PC’ processors come into play.”

In short, Demler agreed with Nvidia’s Shapiro, saying that Nvidia’s Xavier offers DLA and GPU, while EyeQ5 provides “specialized computer-vision cores.”

So, if Xavier vs. EyeQ 5 is an apples-to-oranges comparison, as many industry observers pointed out, what should be used for a more accurate comparison?

Demler took a shot at it, proposing a match-up of Drive PX with the equivalent of Intel’s Go system, a development platform for autonomous driving.

Intel Go Development Platform (Source: Intel)

“Remove the FPGAs Intel originally described and drop in EyeQ processors. That’s what they’re building for BMW,” Demler surmised. “So let’s say a minimum performance dual-EyeQ5 + Atom = ~50 watts (just a rough estimate) with 40 TOPs of neural network performance, compared to dual Xavier at 60TOPS/60Watts. That’s a more apples/apples comparison. You can’t leave the Intel brain chip out of the equation.”
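Demler's platform-level estimate can be put side by side in the same rough terms. The numbers below are his stated guesses, not measured figures, and the comparison inherits all of his caveats.

```python
# Platform-level comparison per Demler's rough estimate (his guesses, not measurements).
intel_tops, intel_watts = 40, 50    # dual EyeQ 5 + Atom, ~50 W (estimate)
nvidia_tops, nvidia_watts = 60, 60  # dual Xavier

print(f"Intel platform:  {intel_tops / intel_watts:.1f} TOPS/W")   # 0.8
print(f"Nvidia platform: {nvidia_tops / nvidia_watts:.1f} TOPS/W") # 1.0
```

Note how the efficiency gap narrows, and even flips in Nvidia's favor, once the Atom "brain chip" is counted on the Intel side — which is Demler's point.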

Power efficiency

Phil Magney, founder and principal advisor for Vision Systems Intelligence (VSI), told EE Times that both Intel and Nvidia “are making a bigger deal out of this than may be necessary.”

While all this specsmanship was originally about the power efficiency of the rival SoCs, Magney said, “I would expect most L4/L5 vehicles are going to have electric powertrains so the wattage may be a moot point.”

He observed that when Tesla switched from Mobileye to Nvidia for Autopilot, “we saw no noticeable degradation in range. So electric powertrains will handle this without much impact.”

In Magney's opinion, “The same could be said for 48-volt architectures and mild hybrids since they have the capacity to handle greater loads. On the other hand, a traditional ICE would take a hit but this hit would be at the expense of fuel efficiency and emissions.”

Nvidia’s Shapiro disagreed, insisting that power consumption does matter. “If a robotic car is expected to drive for nine hours per day, you want to be as energy efficient as possible, because it will eventually affect the EV’s range.”

Noting that fully automated vehicles “will have redundancies in most of the systems including the AV systems,” Magney estimated, “Couple two of everything and that adds to massive loads on the electrical systems.” However, he remains optimistic: “Battery technologies and battery management are improving and so too are ranges so this problem gets solved in time.”
