Automotive tier one Visteon Corp. unveiled the company’s first technology platform for autonomous driving solutions, dubbed DriveCore, at the Consumer Electronics Show.
The DriveCore platform comprises hardware, middleware and frameworks for developing machine-learning algorithms for autonomous driving applications of Level 3 and above. DriveCore gives OEMs a scalable, centralized domain controller that offers computing power matched to carmakers’ needs.
Unlike the “vision-first” Intel/Mobileye platform, DriveCore is flexible, Visteon said, integrating sensor data from multiple cameras, lidars and radars.
Additionally, Visteon (Van Buren Township, Mich.) has developed a PC-based development environment called Studio, designed to enable third parties to create machine-learning algorithms. The big difference between Visteon’s AV solution and the competition, however, is its partnership with a Mountain View, Calif.-based startup called DeepScale.
The startup has developed a deep learning perception technology that ingests raw data, not object data, and accelerates its sensor fusion on an embedded processor.
With robocar development still in its infancy, tech companies like Waymo, Uber and Intel/Mobileye often cut out the middleman [tier ones] in favor of developing a complete automated vehicle (AV) platform of their own. Visteon’s announcement, however, represents what appears to be an emerging trend of tier ones striking back.
“I have been waiting for this,” said Phil Magney, founder and principal advisor of VSI Labs. “I knew Visteon was shopping around for a Deep Learning partner for some time.”
Magney observed that Visteon, as a tier one, is a bit of a laggard behind other tier ones that have already announced partnerships and AV solutions. But with DriveCore, he said, Visteon now has a “pretty complete solution that allows OEMs to mix and match sensors depending on the applications they are developing.”
He added, “At the center of this solution is Deepscale who offers the software components and fully trained CNN-based algorithms.”
Forrest Iandola, founder and CEO of DeepScale, declined to comment on the commercial value of the agreement with Visteon. However, he made it clear that the partnership is solid. Iandola said, “We have worked with Visteon to align our product roadmaps with plans to integrate, test and deploy our products with OEMs.”
DeepScale brings unique AI offerings to the partnership, plus sensor fusion and DNN scalability to smaller processors, the company noted. DeepScale’s AI software “constructs 3D perception models of the environment from any combination of sensors, starting from a single camera, producing a high-resolution point-cloud that historically required expensive lidar,” the company said. DeepScale reportedly “specializes in tiny DNNs” requiring less computing while producing state-of-the-art performance.
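The claim of producing a lidar-like point cloud starting from a single camera corresponds to monocular depth estimation. The PyTorch sketch below is purely illustrative, not DeepScale’s proprietary network: it shows how a compact encoder-decoder built from depthwise-separable convolutions (a common trick for small, embedded-class DNNs) could predict per-pixel depth from one camera frame and back-project it into a point cloud using pinhole-camera intrinsics. All class names and parameters here are hypothetical.

```python
# Illustrative sketch only -- not DeepScale's network. It shows the general
# idea of a compact ("tiny") fully convolutional DNN that predicts per-pixel
# depth from a single camera frame, then back-projects the result into a
# point cloud using assumed pinhole-camera intrinsics.
import torch
import torch.nn as nn

class DepthwiseSeparableBlock(nn.Module):
    """Depthwise + pointwise convolution: far fewer parameters than a
    standard 3x3 convolution, a common building block for small DNNs."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))

class TinyMonoDepthNet(nn.Module):
    """Compact encoder-decoder that maps an RGB frame to a dense depth map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            DepthwiseSeparableBlock(3, 16, stride=2),
            DepthwiseSeparableBlock(16, 32, stride=2),
            DepthwiseSeparableBlock(32, 64, stride=2),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, 3, padding=1),   # one depth value per pixel
            nn.Softplus(),                    # keep predicted depth positive
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a (1, 1, H, W) depth map into an (N, 3) point cloud
    using pinhole-camera intrinsics (fx, fy, cx, cy)."""
    _, _, h, w = depth.shape
    v, u = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    z = depth[0, 0]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return torch.stack((x, y, z), dim=-1).reshape(-1, 3)

frame = torch.rand(1, 3, 256, 512)      # stand-in camera frame
depth = TinyMonoDepthNet()(frame)       # dense depth prediction
cloud = depth_to_point_cloud(depth, 500.0, 500.0, 256.0, 128.0)
print(depth.shape, cloud.shape)         # torch.Size([1, 1, 256, 512]) torch.Size([131072, 3])
```

In a real system, a network like this would be trained against ground-truth depth (for instance, from a survey lidar) and then quantized or pruned to fit the kind of embedded processors DriveCore targets; the sketch only conveys the shape of the idea.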
Magney called Visteon/DeepScale “a good match.” Without a tier one who has the chops to tackle automotive requirements, Deepscale would struggle to get its solutions to market, Magney predicted. “With Deepscale, Visteon gets a strong partner on the deep learning software side,” he added. “Deepscale will provide the pre-trained software algorithms to handle all levels of automation from L2 to L4/L5.”
DeepScale takes pride in its ability to offer scalable solutions.
Asked about the two polarizing factions emerging in the AV industry (the Intel/Mobileye model taking an evolutionary path vs. Waymo and Uber jumping into the ride-hailing/fleet business), Iandola told us, “We have engagements with both types of players: groups that are evolving driver-assistance into fully-autonomous driving, and groups that are working solely on full-autonomy.” He stressed, “We have focused on developing perceptual systems that scale from cost-constrained and power-constrained mass-produced hardware for driver-assistance, to more exotic hardware that will help to enable full-autonomy.”
Hardware choices
Visteon’s launch of the AV platform offers a few clues on the evolving AV industry.
First, some AV companies are showing a big appetite for an open system. As VSI Labs’ Magney pointed out, “It’s interesting Visteon says its AV stack supports processor architectures from Nvidia, NXP and Qualcomm with more to come later.” Essentially, Visteon, with its scalable DriveCore platform, hopes to offer a much-needed choice for OEMs — both for processors and sensors.
Second, carmakers are looking to use various sensor technologies in their autonomous vehicles.
DeepScale’s Iandola said, “Our view is that each type of sensor has something unique to offer, and the sensors are complementary. I think the notion of a single type of ‘master sensor’ doesn't make sense when the goal is to create safe and reliable autonomous driving systems.”
He emphasized, “There has been over a billion dollars invested in new sensor technologies, and over 30 new startups focused on sensor technology. Billions of dollars are also being invested in new AI processing chips and platforms.” Iandola said his company sees “a great opportunity in creating perceptual systems that can easily integrate the best-of-breed of new sensor technologies and best-of-breed of processor platforms.”
Raw data vs. object data
Third, the industry’s debate over whether to use raw or object data for sensor fusion might be shifting.
Magney said, “The trend is toward raw data fusion on the perception side and this is what Deepscale supports.”
Today, however, there is no consensus in the industry on raw vs. object.
Consider Uber, whose self-driving vehicle appears to depend on a combination of both. The company is reportedly using some neural nets to help with perception (turning sensor data into object data). Some of this is done by "fusing" multiple sensors. Uber is also using other neural nets in later stages to predict what the car should do next, for example.
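To make the raw-vs.-object distinction concrete, the hedged sketch below contrasts the two styles under simplified assumptions: a late-fusion step that merges per-sensor object lists after detection, and an early-fusion network that concatenates raw camera and radar tensors (assumed to be projected onto a common grid) before any objects are extracted. None of this is Uber’s, Visteon’s or DeepScale’s actual code; all names and tensor shapes are hypothetical.

```python
# Illustrative sketch only, with simplified tensor shapes. It contrasts
# "late" fusion of per-sensor object lists with "early" fusion of raw
# sensor tensors fed into a single network.
import torch
import torch.nn as nn

# --- Late (object-level) fusion: each sensor pipeline emits detections,
# --- and a separate step merges the two object lists afterwards.
def late_fusion(camera_objects, radar_objects, match_radius=2.0):
    """Merge two lists of (x, y) detections by simple proximity matching."""
    fused = list(camera_objects)
    for rx, ry in radar_objects:
        if all((rx - cx) ** 2 + (ry - cy) ** 2 > match_radius ** 2
               for cx, cy in camera_objects):
            fused.append((rx, ry))      # keep radar-only detections
    return fused

# --- Early (raw-data) fusion: raw camera and radar tensors are combined
# --- close to the sensors, before any per-sensor object extraction.
class EarlyFusionNet(nn.Module):
    def __init__(self, cam_ch=3, radar_ch=2, n_classes=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(cam_ch + radar_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, 1),    # per-cell class scores
        )

    def forward(self, camera_raw, radar_raw):
        # Both inputs are assumed to be projected onto a common grid.
        return self.backbone(torch.cat([camera_raw, radar_raw], dim=1))

camera_raw = torch.rand(1, 3, 128, 128)   # stand-in image tensor
radar_raw = torch.rand(1, 2, 128, 128)    # stand-in range/Doppler grid
scores = EarlyFusionNet()(camera_raw, radar_raw)
print(scores.shape)                       # torch.Size([1, 4, 128, 128])
print(late_fusion([(1.0, 2.0)], [(10.0, 5.0)]))  # [(1.0, 2.0), (10.0, 5.0)]
```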
DeepScale’s Iandola is clear on what he believes needs to be done.
He told us in a previous interview, “A good chunk of research on deep neural networks (DNN) today is based on tweaks or modifications of off-the-shelf DNNs.” Over at DeepScale, however, he said, “We’re starting from scratch in developing our own DNN by using raw data — coming from not just image sensors but also radars and lidars.”
Magney called DeepScale’s approach “very contemporary,” representing “the latest thinking in applying AI to automated driving.”
DeepScale advocates early raw-data fusion performed closer to the sensors.
Magney also sees an inherent advantage in DeepScale’s modular approach. “You can do the fusion with any mix of sensors. For lower levels, that’d be camera and radar, and for higher levels would include Lidar.” He concluded, “With Deepscale I believe Visteon will be able to offer a range of solutions from ADAS all the way to L4/L5.”