Nvidia Enters ADAS Market via AI-Based Xavier
Nvidia is in Munich this week to declare war on the advanced driver assistance system (ADAS) market. The GPU company is now pushing its AI-based Nvidia Drive AGX Xavier system, originally designed for Level 4 autonomous vehicles, down to Level 2+ cars.

In a competitive landscape already crowded with ADAS solutions from rival chip vendors such as NXP, Renesas, and Intel/Mobileye, Nvidia is boasting that its GPU-based automotive SoC isn’t just a “development platform” for OEMs to prototype their self-driving vehicles.

At the company’s own GPU Technology Conference (GTC) in Europe, Nvidia announced that Volvo Cars will use the Drive AGX Xavier for its next generation of ADAS vehicles, with production starting in the early 2020s.

Nvidia’s Drive AGX Xavier will be designed into Volvo’s L2+ ADAS vehicles. Henrik Green (left), head of R&D at Volvo Cars, with Nvidia CEO Jensen Huang on stage at GTC Europe in Munich. (Photo: Nvidia)

Danny Shapiro, senior director of automotive at Nvidia, told us, “Volvo isn’t doing just traditional ADAS. They will be delivering wide-ranging features of ‘Level 2+’ automated driving.”

By Level 2+, Shapiro means that Volvo will integrate “360° surround perception and a driver monitoring system” on top of conventional adaptive cruise control (ACC) and automated emergency braking (AEB). Nvidia added that its platform will enable Volvo to “implement new connectivity services, energy management technology, in-car personalization options, and autonomous drive technology.”

It remains unclear whether car OEMs designing ADAS vehicles are all that eager for the AI-based Drive AGX Xavier, which is hardly cheap. Shapiro said that if car OEMs or Tier Ones are serious about developing autonomous vehicles, an approach that “unifies ADAS and autonomous vehicle development” makes sense, because it lets carmakers develop software algorithms on a single platform. “They will end up saving cost,” he said.

Phil Magney, founder and principal at VSI Labs, agreed. “The key here is that this is the architecture that can be applied to any level of automation,” he said. “The processes involved in L2 and L4 applications are largely the same. The difference is that L4 would require more sensors, more redundancy, and more software to assure that the system is safe enough even for robo-taxis, where you don’t have a driver to pass control to when the vehicle encounters a scenario that it cannot handle.”

Better than discrete ECUs

Another argument for the use of AGX for L2+ is that the alternative requires multiple discrete ECUs. Magney said, “An active ADAS system (such as lane keeping, adaptive cruise, or automatic emergency braking) requires a number of cores fundamental to automation. Each of these tasks requires a pretty sophisticated hardware/software stack.” He asked, “Why not consolidate them instead of having discrete ECUs for each function?”

Scalability is another factor. “A developer could choose AGX Xavier to handle all these applications,” Magney reasoned.
“On the other hand, if you want to develop a robo-taxi, you need more sensors, more software, more redundancy, and higher processor performance … so you could choose AGX Pegasus for this.”

Is AGX Xavier safer?

Shapiro also brought up safety issues. He told us, “Recent safety reports show that many L2 systems aren’t doing what they say they would do.” Indeed, in August, the Insurance Institute for Highway Safety (IIHS) exposed “a large variability of Level 2 vehicle performance under a host of different scenarios.” An EE Times story entitled “Not All ADAS Vehicles Created Equal” reported that some L2 systems can fail under any number of circumstances; in some cases, certain models equipped with ADAS are apparently blind to stopped vehicles and could even steer directly into a crash.

Shapiro implied that by “integrating more sensors and adding more computing power” to run robust AI algorithms, Volvo can make its L2+ cars “safer.”

On the topic of safety, Magney didn’t necessarily agree. “More computing power doesn’t necessarily mean that it is safer,” he noted. “It all depends on how it is designed.” Lane keeping, adaptive cruise, and emergency braking for L2 can rely on a few sensors and associated algorithms while a driver at the wheel manages events beyond the system’s capabilities.

However, the story is different with a robo-taxi, explained Magney. “You are going to need a lot more … more sensors, more algorithms, some lock-step processing, and localization against a precision map.” He said, “For example, if you go from a 16-channel LiDAR to a 128-channel LiDAR for localization, you are working with eight times the amount of data for both your localization layer as well as your environmental model.” (A rough sketch of that scaling appears at the end of this article.)

Competitive landscape

But really, what does Nvidia have that competing automotive SoC suppliers don’t?

Magney, speaking from his firm VSI Labs’ own experience, said, “The Nvidia Drive development package has the most comprehensive tools for developing AV applications.” He added, “This is not to suggest that Nvidia is complete and a developer could just plug and play. To the contrary, there is a ton of organic codework necessary to program, tune, and optimize the performance of AV applications.”

However, he concluded that, in the end, “you are going to be able to develop faster with Nvidia’s hardware/software stack because you don’t have to start from scratch. Furthermore, you have DRIVE Constellation for your hardware-in-the-loop simulations, where you can vastly accelerate your simulation testing, and this is vital for testing and validation.”
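To make Magney’s LiDAR arithmetic concrete, here is a minimal sketch. Only the channel counts (16 vs. 128) come from Magney’s example; the rotation rate and horizontal resolution below are illustrative assumptions, not figures from the article.

```python
# Back-of-envelope LiDAR data-rate scaling for Magney's 16- vs. 128-channel example.
# Assumed (illustrative) scan parameters -- not from the article:
ROTATION_HZ = 10        # spins per second, typical for automotive spinning LiDAR
AZIMUTH_STEPS = 1800    # horizontal samples per revolution (0.2-degree resolution)

def points_per_second(channels: int) -> int:
    """Points produced per second: one return per channel per azimuth step."""
    return channels * AZIMUTH_STEPS * ROTATION_HZ

low = points_per_second(16)     # ~288,000 points/s
high = points_per_second(128)   # ~2,304,000 points/s
print(f"16-channel:  {low:,} points/s")
print(f"128-channel: {high:,} points/s")
print(f"scale factor: {high / low:.0f}x")  # 8x, as Magney notes
```

With the rotation rate and angular resolution held fixed, data volume scales linearly with channel count, hence Magney’s 8x; every downstream consumer of the point cloud, from the localization layer to the environmental model, inherits that multiplier.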
Intel’s EyeQ 5 vs. Nvidia’s Xavier: Wrong Debate
Does comparing Intel’s EyeQ 5 with Nvidia’s Xavier make sense? That is the question.

Nvidia and Intel are engaged in a specsmanship battle over AI chips for autonomous vehicles that reached a new high — or, more accurately, a new low — when Intel CEO Brian Krzanich recently spoke at an auto show in Los Angeles. Krzanich claimed that EyeQ 5, designed by Intel subsidiary Mobileye, “can deliver more than twice the deep-learning performance efficiency” of Nvidia’s Xavier SoC.

After the Intel CEO’s keynote, Danny Shapiro, Nvidia’s senior director of automotive, called EE Times from L.A. and cried foul.

First, Shapiro explained that comparing two chips with two different rollout dates on two different process nodes (Xavier on 16 nm vs. EyeQ 5 on 7 nm) isn’t kosher. According to Shapiro, Xavier, which is “already in bring-up now,” will be in volume production in 2019. Intel, in contrast, said that EyeQ 5 is “sampling in 2018, production/volume in 2020, and first customer car in 2021 (BMW iNext).”

Second, Shapiro pointed out that the 30 watts of power consumption at 30 trillion operations per second (TOPS) that Intel quoted for Nvidia’s Drive PX Xavier is “for the entire system, CPU, GPU and memory, as opposed to just deep learning cores as in the EyeQ 5.”

Out of line

So, was it out of line for Intel to compare EyeQ 5 to Xavier? “Of course it was,” said Jim McGregor, founder and principal analyst at Tirias Research. But he sees an even bigger issue: “nobody is comparing a platform to a platform today” in autonomous vehicle solutions.

Indeed, comparing the specs of the two SoCs alone seems almost silly without discussing what other chips, beyond those SoCs, are needed to complete a Level 4 or Level 5 autonomous vehicle platform. In our recent interview with Intel, the CPU giant revealed plans to soon unveil “a multi-chip platform for autonomous driving,” which will combine the EyeQ 5 SoC, Intel’s low-power Atom SoCs, and other hardware including I/O and Ethernet connectivity. But the company has offered no TOPS or wattage figures for the entire platform yet.

Simply put, Xavier has a far more powerful AI engine than EyeQ 5. Mike Demler, senior analyst at the Linley Group, agreed. “Forget the process-node nonsense. To achieve Level 4/5, it starts with the TOPS of the neural-network engine and the compute performance of the CPUs,” he said. “Then you look at the power, because if you don’t have the performance, it really doesn’t matter.”

If so, why are the two giants fighting tooth and nail to one-up each other with their SoC specs? The answer is that, while they are still in an early phase of the autonomous vehicle technology battle, neither company wants it said that its solution consumes too much power or is less capable than its rival’s.

This escalating specsmanship, however, speaks volumes about Intel’s commitment to muscle back into the automotive chip market, noted McGregor. He reminded us that Intel was once the dominant supplier of microcontrollers to Ford. Intel’s 8061 microcontrollers and their derivatives were reportedly used in almost all Ford automobiles built from 1983 to 1994. When Ford transitioned from Intel microcontrollers to Motorola’s, Intel lost its presence in the automotive market. By 2005, Intel had announced that it would discontinue production of all its automotive microcontroller chips.

Now, Intel is counting on its $15 billion Mobileye acquisition to redeem itself in the automotive world.

How many TOPS do L4/L5 cars need?
Nvidia insisted that a genuine apples-to-apples comparison of the power efficiency of the two chips can be made only when Xavier and EyeQ 5 are compared at 24 TOPS, assuming both were designed on a 7-nm process node. Based on these parameters (24 TOPS at 7 nm), Shapiro calculated, “With our current Xavier product, 24 TOPS would consume approximately 12 W of power for just the DLA (deep learning acceleration) + GPU.” He added, “If we move from the current 16 nm to 7 nm, we expect power to be reduced by approximately 40 percent, so that would put Xavier at about 7 W.”

The Nvidia executive concluded, “Hence the accurate chart would be: at 24 TOPS performance, EyeQ 5 is at 10 W and Nvidia Xavier is at 7 W.” (See the back-of-envelope sketch below.)

Demler doesn’t buy that argument. The debate, he said, isn’t about an SoC-to-SoC spec comparison but about the actual performance necessary for L4/L5 autonomous cars. On one hand, “Intel/Mobileye claim their 24 TOPS is sufficient,” said Demler. On the other hand, Nvidia is “building Pegasus [platform] to max out at 320 TOPS.” At issue, said Demler, are two questions: “Who has more L4/L5 engagements? How much performance will we need?” Demler said, “Nobody knows, even if Mobileye thinks they do.”

Moving targets

When Nvidia originally announced the Xavier chip, the company quoted 20 TOPS at 20 W. Now, Nvidia is saying it’s 30 TOPS at 30 W. What happened? While acknowledging the originally announced spec, Shapiro explained, “if we cranked the clocks we can scale to 30 TOPS at 30 W.”

Similarly, Intel today quotes an EyeQ 5 spec very different from what Mobileye originally announced before the acquisition. EyeQ 5, according to Mobileye’s initial announcement, would deliver 12 tera-operations per second at power consumption below 5 W. But last week, when EE Times talked to Jack Weast, Intel’s principal engineer and chief architect of autonomous driving solutions, he described EyeQ 5 as delivering 24 TOPS at 10 W.

Asked when and how the SoC’s performance suddenly doubled, an Intel spokeswoman told us that Intel is now planning to deliver multiple EyeQ 5s, “including the 12 TOPS SKU announced previously and the 24 TOPS SKU we compared to the Nvidia Xavier product.” Pressed to explain whether this means two cores are integrated inside the 24-TOPS EyeQ 5, or that Intel is using two chips inside a system, the spokeswoman declined to elaborate. She said, “I can’t comment on how we came up with that level of TOPS yet. We’ll have more details on architecture and platform going into CES.”

Meanwhile, Demler was told by Mobileye that “to support Level 4/5, they will use two EyeQ 5s.”

The analyst community agrees that the performance target of the Mobileye-designed EyeQ 5 is too low compared with Nvidia’s Xavier. Demler described the two EyeQ 5s as “just the ‘eyes’ of the system.” In Intel’s upcoming platform, the brains will come from Atom SoCs. “EyeQ has relatively weak old MIPS CPUs compared to Nvidia’s custom ARMv8 CPUs (eight of them in Xavier),” noted Demler. “Those custom ARM cores deliver very powerful ‘brains,’ so you don’t need an Atom or Xeon. Therefore, Intel shouldn’t deny that its ‘PC’ processors come into play.”

In short, Demler agreed with Nvidia’s Shapiro, saying that Nvidia’s Xavier offers DLA and GPU, while EyeQ 5 provides “specialized computer-vision cores.”
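Shapiro’s 7-nm projection is straightforward to reproduce. The sketch below simply restates the vendor-quoted numbers from this article (24 TOPS and 12 W for DLA + GPU on 16 nm, a roughly 40 percent power reduction at 7 nm, and EyeQ 5 at 24 TOPS/10 W); the scaling factor is Nvidia’s own estimate, not an independently verified figure.

```python
# Back-of-envelope check of Shapiro's 16nm -> 7nm power-scaling claim.
# All inputs are vendor-quoted figures from the article, not measurements.
XAVIER_TOPS = 24.0           # deep-learning throughput used for the comparison
XAVIER_W_16NM = 12.0         # DLA + GPU power at 24 TOPS on 16 nm (Nvidia's figure)
NODE_POWER_REDUCTION = 0.40  # Nvidia's estimated power saving moving to 7 nm
EYEQ5_TOPS, EYEQ5_W = 24.0, 10.0  # Intel/Mobileye's quoted EyeQ 5 numbers

xavier_w_7nm = XAVIER_W_16NM * (1.0 - NODE_POWER_REDUCTION)  # 7.2 W, "about 7 W"
print(f"Projected Xavier @7nm: {xavier_w_7nm:.1f} W for {XAVIER_TOPS:.0f} TOPS")
print(f"Xavier efficiency:     {XAVIER_TOPS / xavier_w_7nm:.2f} TOPS/W")
print(f"EyeQ 5 efficiency:     {EYEQ5_TOPS / EYEQ5_W:.2f} TOPS/W")
```

Run as-is, this reproduces Shapiro’s “about 7 W” figure (12 W × 0.6 = 7.2 W) and shows why Nvidia frames the matchup as 2.4 TOPS/W for EyeQ 5 versus roughly 3.3 TOPS/W for a hypothetical 7-nm Xavier.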
So, if Xavier vs. EyeQ 5 is an apples-to-oranges comparison, as many industry observers have pointed out, what should be compared instead?

Demler took a shot, suggesting that we “match up Drive PX with the equivalent of Intel’s Go system,” Intel’s development platform for autonomous driving. “Remove the FPGAs Intel originally described and drop in EyeQ processors. That’s what they’re building for BMW,” Demler surmised. “So let’s say a minimum-performance dual-EyeQ5 + Atom = ~50 watts (just a rough estimate) with 40 TOPS of neural-network performance, compared to dual Xavier at 60 TOPS/60 watts. That’s a more apples/apples comparison. You can’t leave the Intel brain chip out of the equation.”

Power efficiency

Phil Magney, founder and principal advisor at Vision Systems Intelligence (VSI), told EE Times that both Intel and Nvidia “are making a bigger deal out of this than may be necessary.” While all this specsmanship was originally about the power efficiency of the rival SoCs, Magney said, “I would expect most L4/L5 vehicles are going to have electric powertrains, so the wattage may be a moot point.” He observed that when Tesla switched Autopilot from Mobileye to Nvidia, “we saw no noticeable degradation in range. So electric powertrains will handle this without much impact.”

In Magney’s opinion, “The same could be said for 48-volt architectures and mild hybrids, since they have the capacity to handle greater loads. On the other hand, a traditional ICE would take a hit, but this hit would be at the expense of fuel efficiency and emissions.”

Nvidia’s Shapiro disagreed, maintaining that power consumption does matter. “If a robotic car is expected to drive for nine hours per day, you want to be as energy efficient as possible, because it will eventually affect the EV’s range.”

Noting that fully automated vehicles “will have redundancies in most of the systems including the AV systems,” Magney estimated, “Couple two of everything and that adds to massive loads on the electrical systems.” Still, he remains optimistic: “Battery technologies and battery management are improving and so too are ranges, so this problem gets solved in time.”
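Demler’s platform-level numbers and Shapiro’s nine-hours-a-day scenario can be combined into a rough sense of the stakes. In the sketch below, the platform wattages and TOPS are Demler’s rough estimates from this article, while the 100-kWh battery pack is a hypothetical assumption for illustration, not a number from either company.

```python
# Rough energy impact of the compute platforms Demler sketched, over
# Shapiro's nine-hour robo-taxi duty cycle. Platform watts/TOPS are
# Demler's rough estimates; the 100 kWh pack size is a made-up example.
DRIVE_HOURS = 9.0
PACK_KWH = 100.0  # hypothetical EV battery capacity (assumption)

platforms = {
    "dual EyeQ5 + Atom": {"tops": 40.0, "watts": 50.0},
    "dual Xavier":       {"tops": 60.0, "watts": 60.0},
}

for name, p in platforms.items():
    kwh_per_day = p["watts"] * DRIVE_HOURS / 1000.0  # W * h -> kWh
    pack_pct = 100.0 * kwh_per_day / PACK_KWH
    print(f"{name}: {p['tops'] / p['watts']:.2f} TOPS/W, "
          f"{kwh_per_day:.2f} kWh/day ({pack_pct:.1f}% of a {PACK_KWH:.0f} kWh pack)")
```

At these wattages, the daily compute budget works out to roughly half a kilowatt-hour, well under 1 percent of the assumed pack, which is consistent with Magney’s “moot point” argument; the calculus changes if redundancy doubles the load or if a far higher-power platform such as Pegasus is used.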
