SANTA CLARA, Calif. — Vendors and researchers are making significant progress applying machine learning to the thorny issues of chip design, according to a panel at DesignCon here. The use of AI in EDA was a hot topic that drew a standing-room-only crowd to the panel and spawned several papers at the event.
Over the past year, the Center for Advanced Electronics through Machine Learning (CAEML) has gained four new partners. The team of 13 industry members and three universities has expanded both the breadth and depth of its work.
“Last year, we focused mainly on signal integrity and power integrity, but this year, we diversified our portfolio into system analysis, chip layout, and trusted platform design — so the diversity of the research has made the most progress,” said Christopher Cheng, a distinguished technologist at Hewlett-Packard Enterprise and a member of CAEML.
“Work on Bayesian optimization and on convolutional neural networks for design-for-manufacturing has advanced significantly in the capabilities we have demoed, and we’re starting to think about the use of in-line learning in the design process,” said Paul Franzon, a distinguished professor at North Carolina State University, one of the group’s three host universities.
“One of the challenges we face is getting access to data from companies,” said Madhavan Swaminathan, a professor at Georgia Institute of Technology, another CAEML host. “Most of their data is proprietary, so we’ve come up with several mechanisms to handle it. The processes are working fairly well, but they are more lengthy than we’d like.”
The group had a sort of coming-out party for itself at this event last year. It started with backing from nine vendors including Analog Devices, Cadence, Cisco, IBM, Nvidia, Qualcomm, Samsung, and Xilinx. Its initial interest areas included high-speed interconnects, power delivery, system-level electrostatic discharge, IP core reuse, and design rule checking.
EDA vendors such as Cadence Design Systems started following research in machine learning back in the early 1990s. The techniques first made their way into its products in 2013 with a version of Virtuoso that used analytics and data mining to create machine-learning models for parasitic extraction, said David White, a senior director of R&D at Cadence.
To date, Cadence has shipped more than 1.1 million machine-learning models for its tools to speed lengthy calculations. The next phase of product development is in place-and-route tools that learn from human designers to recommend optimizations that speed turnaround time. The solutions may use a combination of local and cloud-based processing to harness parallel systems and large data sets, said White.
At advanced process nodes, global routing tools are hitting limits with their current algorithms, reducing chip data rates in their efforts to get timing closure, said Sashi Obilisetty, an R&D director at Synopsys.
For its part, TSMC reported last year a 40-MHz speed gain using machine learning to predict global routing, she noted, adding that Nvidia is already using machine learning to provide full coverage on chip designs while reducing simulations.
Panelists said that they see many opportunities for using a basket of machine-learning techniques to automate specific decisions and optimize overall design flows.
Specifically, researchers are exploring opportunities to replace today’s simulators with AI models that run more than an order of magnitude faster. Relatively slow simulators can lead to timing errors, mistuned analog circuits, and insufficient modeling that results in chip re-spins, said Swaminathan of Georgia Tech. In addition, machine learning can replace IBIS for behavioral modeling in high-speed interconnects.
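To make the surrogate idea concrete, here is a minimal sketch, with an invented stand-in for the simulator: a handful of expensive "simulation" runs train a cheap polynomial model, which can then be queried thousands of times at negligible cost. The `slow_simulator` function and all numbers are illustrative assumptions; real surrogates use far richer models, such as neural networks fit to circuit-simulator output.

```python
import math

def slow_simulator(w):
    # Stand-in for an expensive simulation run: returns some response
    # (say, eye height) as a function of one tuning parameter w.
    return math.sin(w) + 0.1 * w * w

# Run the "simulator" only a few times to collect training data.
xs = [i * 0.5 for i in range(13)]          # 13 runs over [0, 6]
ys = [slow_simulator(x) for x in xs]

def polyfit3(xs, ys):
    # Least-squares cubic fit via the normal equations,
    # solved with Gaussian elimination (partial pivoting).
    A = [[sum(x ** (i + j) for x in xs) for j in range(4)] for i in range(4)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(4)]
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 4):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    c = [0.0] * 4
    for i in range(3, -1, -1):             # back substitution
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 4))) / A[i][i]
    return c

coef = polyfit3(xs, ys)

def surrogate(x):
    # The fitted model: cheap enough to evaluate inside a tight
    # optimization loop, unlike the simulator it approximates.
    return sum(c * x ** i for i, c in enumerate(coef))

# Worst-case disagreement with the simulator at the training points.
err = max(abs(surrogate(x) - slow_simulator(x)) for x in xs)
```

The order-of-magnitude speedup panelists describe comes from exactly this substitution: once trained, the model replaces most calls to the simulator, which is revisited only to validate or refine the fit.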
Chip researchers are using data mining, statistical learning, and other tools in addition to the neural-network models popularized by the image-search and voice-recognition services of Amazon, Google, and Facebook.
Franzon of North Carolina State reported on the use of surrogate models to get to a final physical design optimization in four iterations, compared to 20 for an engineer. Similar techniques were used to calibrate analog circuits and to set up transceivers for multichannel interconnects.
AI can also help automate the process of setting the dozens of options in EDA tools, sometimes called knobs. “The tools have set up knobs with sometimes obtuse meanings that have vague relationships to desired outcomes,” said Franzon.
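As a minimal illustration of automated knob-setting, the sketch below runs a random search over a few hypothetical tool knobs against a toy cost function. The knob names, settings, and `run_flow` cost are all invented for illustration; practical approaches replace blind random search with learned surrogate models of the flow, but the loop structure is the same.

```python
import random

random.seed(0)  # fixed seed so the toy search is reproducible

# Hypothetical tool "knobs" with discrete settings (names invented).
KNOBS = {
    "effort":      [1, 2, 3, 4],
    "cong_weight": [0.0, 0.5, 1.0],
    "fanout_cap":  [16, 32, 64],
}

def run_flow(cfg):
    # Toy stand-in for one tool run, returning a cost to minimize
    # (e.g. worst negative slack); its shape is unknown to the search.
    return ((cfg["effort"] - 3) ** 2
            + (cfg["cong_weight"] - 0.5) ** 2
            + (cfg["fanout_cap"] / 32 - 1) ** 2)

# Naive random search: try settings at random, keep the best seen.
best_cfg, best_cost = None, float("inf")
for _ in range(200):
    cfg = {k: random.choice(opts) for k, opts in KNOBS.items()}
    cost = run_flow(cfg)
    if cost < best_cost:
        best_cfg, best_cost = cfg, cost
```

With real tools, each `run_flow` call takes hours, which is why learning a model of the knob-to-outcome relationship, rather than sampling it blindly, is what makes automation pay off.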
For its part, HPE is using neural nets and hyperplane classifiers to predict failures of solid-state drives in the field based on data about their voltage, temperature, and current.
“The amount of training data required is high,” said Cheng. “So far, the classifiers are static, but we want to add the dimension of time using recurrent neural networks so that instead of just good/bad labels, we will have time-to-failure labels. And in the future, we want to extend this work to more parameters and general system failures.”
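A hyperplane classifier of the kind Cheng describes can be sketched with a plain perceptron on synthetic drive telemetry. Everything below is invented for illustration, including the feature statistics (centered voltage-margin, temperature, and current deltas) and the assumption that failing drives run hotter with lower voltage margin; HPE's actual models and data are not public.

```python
import random

random.seed(1)  # fixed seed so the synthetic data is reproducible

def sample(failed):
    # Synthetic telemetry, centered around nominal operating values:
    # (voltage-margin delta, temperature delta, current delta).
    # Failing drives are drawn hotter, with lower voltage margin.
    if failed:
        x = [random.gauss(-0.2, 0.1), random.gauss(12.5, 5.0),
             random.gauss(0.15, 0.1)]
    else:
        x = [random.gauss(0.2, 0.1), random.gauss(-12.5, 5.0),
             random.gauss(-0.15, 0.1)]
    return x, 1 if failed else 0

data = [sample(i % 2 == 0) for i in range(200)]

# Perceptron: learn a hyperplane w.x + b > 0 that flags likely failures.
w, b = [0.0, 0.0, 0.0], 0.0
for _ in range(20):                              # training epochs
    for x, y in data:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        if pred != y:                            # mistake-driven update
            for i in range(3):
                w[i] += (y - pred) * x[i]
            b += y - pred

# Training accuracy of the learned hyperplane.
acc = sum(
    (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0) == y
    for x, y in data
) / len(data)
```

This static good/bad labeling is the starting point Cheng describes; moving to time-to-failure labels means replacing the fixed feature vector with a telemetry sequence and the perceptron with a recurrent model.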