Cadence, Imec Disclose 3-nm Effort

Published: 2018-03-01 00:00
Posted by: Ameya360
Source: Rick Merritt
Reads: 1016

  SAN JOSE, Calif. — Cadence Design Systems and the Imec research institute disclosed that they are working toward a 3-nm tapeout of an unnamed 64-bit processor. The effort aims to produce a working chip later this year using a combination of extreme ultraviolet (EUV) and immersion lithography.

  So far, Cadence and Imec have created and validated GDS files using a modified Cadence tool flow. The flow is based on a metal stack with a 21-nm routing pitch and a 42-nm contacted poly pitch, created with data from a metal layer fabricated in an earlier experiment.
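The two pitches quoted above bound standard-cell dimensions: cell height is the number of routing tracks times the metal pitch, and cell width is the number of poly pitches a cell spans times the CPP. A rough sketch of that arithmetic, assuming a hypothetical 6-track library and a 3-CPP cell (neither figure comes from the article):

```python
# Hypothetical sketch of how the two quoted pitches bound standard-cell
# dimensions. The 6-track cell height and 3-CPP cell width are illustrative
# assumptions, not figures from the Cadence/Imec test chip.

ROUTING_PITCH_NM = 21   # metal routing pitch quoted in the article
CPP_NM = 42             # contacted poly pitch quoted in the article

def cell_dims_nm(tracks, poly_pitches):
    """Cell height = routing tracks x metal pitch; width = poly pitches x CPP."""
    return tracks * ROUTING_PITCH_NM, poly_pitches * CPP_NM

height, width = cell_dims_nm(tracks=6, poly_pitches=3)
print(f"~{height} nm tall x ~{width} nm wide")  # ~126 nm x ~126 nm
```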

  Imec is starting work on the masks and lithography, initially aiming to use double-patterning EUV and self-aligned quadruple patterning (SAQP) immersion processes. Over time, Imec hopes to optimize the process to use a single pass in the EUV scanner. Ultimately, fabs may migrate to a planned high-numerical-aperture version of today’s EUV systems to make 3-nm chips.
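The trade-off among the patterning routes above can be sketched with the Rayleigh resolution criterion, half-pitch = k1 * wavelength / NA, divided down by the multi-patterning factor. A back-of-the-envelope sketch, assuming illustrative k1 values and standard scanner parameters (193-nm immersion at NA 1.35; 0.33-NA EUV at 13.5 nm), none of which are quoted by Imec:

```python
# Hedged back-of-the-envelope estimate (not from the article) of the minimum
# metal pitch each patterning scheme can reach. The k1 values are illustrative
# assumptions for production lithography, not figures quoted by Imec.

def min_pitch_nm(wavelength_nm, na, k1, split=1):
    """Single-exposure pitch (2x Rayleigh half-pitch), divided by the
    multi-patterning factor `split` (2 = double, 4 = quadruple)."""
    return 2 * k1 * wavelength_nm / na / split

# 193-nm immersion (NA 1.35) with self-aligned quadruple patterning (SAQP)
saqp = min_pitch_nm(193, 1.35, k1=0.28, split=4)
# 13.5-nm EUV (NA 0.33) with double patterning
euv_dp = min_pitch_nm(13.5, 0.33, k1=0.40, split=2)

print(f"SAQP immersion: ~{saqp:.0f} nm")       # ~20 nm, near the 21-nm routing pitch
print(f"Double-patterned EUV: ~{euv_dp:.0f} nm")  # ~16 nm
```

Both routes land at or below the 21-nm routing pitch the test chip targets, which is consistent with Imec starting from a combined EUV/immersion flow before optimizing toward a single EUV pass.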

  The 3-nm node is expected to be in production as early as 2023. TSMC announced in October plans for a 3-nm fab in Taiwan, later adding that it could be built by 2022. Cadence and Imec have been collaborating on research in the area for two years as an extension of past efforts on 5-nm devices.

  “We made improvements in our digital implementation flow to address the finer routing geometry … there definitely will be some new design rules at 3 nm,” said Rod Metcalfe, a product management group director at Cadence, declining to provide specifics. “We needed to get some early visibility so when our customers do 3 nm in a few years, EDA tools will be well-defined.”

  Besides the finer features, the first two layers of 3-nm chips may use different metallization techniques and metals such as cobalt, said Ryoung-han Kim, an R&D group manager at Imec. The node is also expected to use new transistor designs such as nanowires or nanosheets rather than the FinFETs used in today’s 16-nm and finer processes.

  “Our work on the test chip has enabled interconnect variation to be measured and improved and the 3-nm manufacturing process to be validated,” said An Steegen, executive vice president for semiconductor technology and systems at Imec, in a press statement.

  The research uses the Cadence Innovus Implementation System and Genus Synthesis tools. Imec is using a custom 3-nm cell library and a TRIM metal flow. The announcement of their collaboration comes one day after Imec detailed findings of random defects impacting 5-nm designs.



Related Reading
Cadence unveils Tensilica Vision Q6 DSP IP
  Cadence Design Systems has unveiled the Cadence Tensilica Vision Q6 DSP, its latest DSP for embedded vision and AI, built on a new, faster processor architecture.

  The fifth-generation Vision Q6 DSP offers 1.5X greater vision and AI performance than its predecessor, the Vision P6 DSP, and 1.25X better power efficiency at the Vision P6 DSP’s peak performance. It is targeted at embedded vision and on-device AI applications in the smartphone, surveillance camera, automotive, augmented reality (AR)/virtual reality (VR), drone, and robotics markets.

  With a deeper, 13-stage processor pipeline and a system architecture designed for use with large local memories, the Vision Q6 DSP achieves 1.5-GHz peak frequency and 1-GHz typical frequency at 16 nm, in the same floorplan area as the Vision P6 DSP. As a result, designers using the Vision Q6 DSP can develop high-performance products that meet both vision and AI demands as well as power-efficiency needs.

  Among additional features, an enhanced DSP instruction set requires up to 20 percent fewer cycles than the Vision P6 DSP for embedded vision kernels such as Optical Flow, Transpose, and warpAffine, and for commonly used filters such as Median and Sobel. Meanwhile, 2X system data bandwidth with separate master/slave AXI interfaces for data/instructions and multi-channel DMA addresses memory bandwidth challenges in vision and AI applications, and reduces the latency and overhead associated with task switching and DMA setup.

  Crucially, the Q6 DSP is backward-compatible with the Vision P6 DSP, meaning that customers can preserve their software investment for an easy migration.

  The Vision Q6 DSP supports AI applications developed in the Caffe, TensorFlow, and TensorFlowLite frameworks through the Tensilica Xtensa Neural Network Compiler (XNNC).
The XNNC maps neural networks into executable, highly optimized, high-performance code for the Vision Q6 DSP, leveraging a comprehensive set of optimized neural network library functions. The Vision Q6 DSP also supports the Android Neural Network (ANN) API for on-device AI acceleration in Android-powered devices. The software environment also features complete and optimized support for more than 1,500 OpenCV-based vision and OpenVX library functions, enabling fast, high-level migration of existing vision applications.

  The Vision P6 DSP and the Vision Q6 DSP are designed for general-purpose embedded vision and on-device AI applications requiring performance ranging from 200 to 400 GMAC/sec. With its 384-GMAC/sec peak performance, the Vision Q6 DSP is well suited for high-performance systems and applications. The Vision Q6 DSP can be paired with the Vision C5 DSP for applications requiring greater than 384 GMAC/sec of AI performance.

  Select customers are integrating the Vision Q6 DSP in their products, and it is now available to all customers.
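One inference a reader can draw from the figures above: dividing the 384-GMAC/sec peak throughput by the 1.5-GHz peak clock suggests how wide a MAC array the numbers imply. This is arithmetic on the quoted specs, not a datapath width Cadence discloses:

```python
# Hedged arithmetic on the figures quoted above: peak throughput divided by
# peak clock gives the implied MACs issued per cycle. A reader's inference,
# not a disclosed Vision Q6 datapath width.

PEAK_GMAC_PER_S = 384   # Vision Q6 peak performance quoted above
PEAK_GHZ = 1.5          # Vision Q6 peak frequency at 16 nm quoted above

macs_per_cycle = PEAK_GMAC_PER_S / PEAK_GHZ
print(f"~{macs_per_cycle:.0f} MACs/cycle")  # ~256 MACs/cycle
```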
2018-04-13 00:00 Reads: 1102
Cadence: Last Holdout for Vision + AI Programmability
  Cadence Design Systems, Inc. might have found the secret recipe for success in an increasingly hot AI processing-core market by promoting a suite of DSP cores that accelerate both embedded vision and artificial intelligence.

  The San Jose-based company is rolling out on Wednesday (April 11) the Cadence Tensilica Vision Q6 DSP. Built on a new architecture, the Vision Q6 offers faster embedded vision and AI processing than its predecessor, the Vision P6 DSP, while occupying the same floorplan area as the P6. The Vision Q6 DSP is expected to go into SoCs that will drive such edge devices as smartphones, surveillance cameras, vehicles, AR/VR headsets, drones, and robots.

  The new Vision Q6 DSP builds on Cadence’s success with the Vision P6 DSP. High-profile mobile application processors such as HiSilicon’s Kirin 970 and MediaTek’s P60 both use the Vision P6 DSP core. Among automotive SoCs, Dream Chip Technologies is using four Vision P6 DSPs, and Geo Semiconductor’s GW5400 camera video processor has adopted the Vision P5 DSP.

  Mike Demler, senior analyst at The Linley Group, told EE Times that where the Vision Q6 DSP differs from its competitors is “its multi-purpose programmability.” Among all computer-vision/neural-network accelerators on the market today, Demler noted, “Cadence is the last holdout for a completely programmable multipurpose architecture. They go for flexibility over raw performance.”

  Demler added that the Vision Q6 DSP is “comparable to the earlier Ceva XM4 and XM6, also DSP-based. But those cores add a dedicated multiplier-accumulator (MAC) array to accelerate convolution neural networks (CNNs).” He observed that Synopsys started with a CPU-MAC combination in its EV cores but moved on to a CPU-DSP-MAC accelerator combo in the EV6x, while Ceva went to a more special-purpose accelerator architecture in NeuPro, which looks more like Google’s TPU. Demler said, “Ceva’s NeuPro has much higher neural-network performance, but so do most of the other IP competitors.
It’s a growing list now with Nvidia’s open-source NVDLA, Imagination, Verisilicon, Videantis, and others.”

  Vision + AI strategy

  Thus far, Cadence is sticking to its original strategy of vision + AI on a single DSP core. Demler believes that “SoC providers are seeing an increased demand for vision and AI processing to enable innovative user experiences like real-time effects at video-capture frame rates.”

  Indeed, Lazaar Louis, senior director of product management and marketing for Tensilica IP at Cadence, explained that more embedded vision applications have begun leveraging AI algorithms. Meanwhile, some AI functions improve when better vision processing comes first, he added.

  AI-based face detection is an example. By capturing a face in varying multiple resolutions first, AI can detect it better. Meanwhile, to offer a vision feature like “bokeh” with a single camera, AI first performs segmentation, followed by blurring and de-blurring in the vision operation. Both applications demand the mix of vision and AI operations, and Cadence’s DSPs can put both operations in the camera pipeline, explained Louis.

  More significantly, though, Cadence is hoping to use its well-proven vision DSP as a “Trojan horse” to open the door to design wins in present and future SoCs expected to handle more AI processing, acknowledged Louis.

  On one hand, Cadence has both the Vision P6 DSP and the Vision Q6 DSP, designed to enable general-purpose embedded vision and more vision-related on-device AI applications. On the other, Cadence has a standalone AI DSP core, the Vision C5, which offers more “broad-stroke AI,” according to Louis, for always-on neural-network applications.

  While the Vision P6 and the Vision Q6 are used for applications requiring AI performance ranging from 200 to 400 GMAC/sec, the Vision Q6 DSP can be paired with the Vision C5 DSP for applications requiring greater than 384 GMAC/sec of AI performance, according to Cadence.
  Q6 advantages

  The new Q6 comes with a deeper, 13-stage processor pipeline and a system architecture designed for use with large local memories. This enables the Vision Q6 DSP to achieve 1.5-GHz peak frequency and 1-GHz typical frequency at 16 nm in the same floorplan area as the Vision P6 DSP, according to Cadence. As a result, designers using the Vision Q6 DSP can develop high-performance products that meet increasing vision and AI demands and power-efficiency needs.

  But what sorts of applications are driving the vision and AI operations to run faster?

  In mobile applications, Louis said that users want to apply “beautification” features not just to still photos but also to video. “That demands higher speed,” he said. In AR/VR headsets, simultaneous localization and mapping (SLAM) and image processing demand decreased latency. System designers also want to use AI-based eye tracking so that an AR/VR headset can render on its screen one object in focus and the rest blurry, rather than a screen with various focal points, which could create visual conflicts. Such applications also need much higher processing speed, added Louis.

  More surveillance cameras are now designed to do “AI processing on device,” according to Louis, rather than sending captured images to the cloud. When such a camera sees a person at the door, the device itself decides who it is, detects any anomaly, and puts out an alert. Again, this demands more powerful vision and AI processing, concluded Louis.

  Deeper pipeline, new ISA, software frameworks

  The Vision Q6’s deeper pipeline comes with a better branch-prediction mechanism, overcoming branch overhead, according to Cadence. The Q6 also comes with a new instruction set architecture that provides additional enhanced imaging, computer vision, and AI performance. For example, there are up to 2X performance improvements for imaging kernels on the Vision Q6 DSP, claimed Louis.

  It’s important to note that the Q6 is backward-compatible with the Vision P6 DSP.
It provides separation of scalar and vector execution, which results in higher scalar performance, according to Cadence.

  Beyond the hardware enhancements, Louis stressed Cadence’s commitment to supporting a variety of deep-learning frameworks. The Vision Q6, for example, comes with Android Neural Network support, enabling on-device AI for Android platforms.

  Louis also pointed out that the Q6 DSP now offers custom-layer support. When customers devise unique innovations to augment standard networks, “we can support them,” he said. The Vision Q6 DSP extends broad support for various types of neural networks for classification (e.g., MobileNet, Inception, ResNet, VGG), segmentation (e.g., SegNet, FCN), and object detection (e.g., YOLO, RCNN, SSD).

  Boasting the company’s full ecosystem of software frameworks and compilers for all vision programming styles, Louis noted that the Q6 supports AI applications developed in the Caffe, TensorFlow, and TensorFlowLite frameworks through the Tensilica Xtensa Neural Network Compiler (XNNC). The XNNC maps neural networks into executable, highly optimized, high-performance code for the Vision Q6 DSP, leveraging a comprehensive set of optimized neural network library functions.

  The Vision Q6 DSP is now available to all customers. Select customers are integrating the new DSP core in their products, according to Cadence.
2018-04-12 00:00 Reads: 1298