How To Choose The Right Memory

Published: 2018-04-03 00:00
Author: Ameya360
Source: semiengineering
Views: 1175

  When it comes to designing memory, there is no such thing as one size fits all. And given the long list of memory types and usage scenarios, system architects must be absolutely clear on the system requirements for their application.

  A first decision is whether or not to put the memory on the logic die as part of the SoC, or keep it as off-chip memory.

  “The tradeoff between latency and throughput is critical, and the cost of power is monstrous,” said Patrick Soheili, vice president of business and corporate development at eSilicon. “Every time you move from one plane to another, it’s a factor of 100X. That applies to on-chip versus off-chip memory, as well. If you can connect it all together on one chip, that’s always the best.”

  For that reason and others, the first choice of chipmakers is to put as much RAM or flash as possible on the logic die. But in most cases, that’s not enough. Even microcontrollers, which in the past were defined as processing elements with on-chip memory, have begun adding off-chip supplemental memory for higher-end applications.

  “When the size of memory on the logic die exceeds what can be produced economically, then off-chip memory is the obvious choice,” observed Marc Greenberg, group director of product marketing for DDR, HBM, flash/storage and MIPI IP at Cadence. “There’s a vibrant array of low-cost, low-power memories based on the SPI (Serial Peripheral Interface) bus of several types from several manufacturers, including memories with automotive speed grades. The SPI bus is speeding up and adding width.”

  In fact, Cadence is seeing a lot of demand for 200MHz Octal-SPI IP interfaces — both controllers and PHYs.

  To better understand how people are looking at the power design for memories in automotive or other applications, it helps to take a step back and try to understand what the problem statement is in terms of the overall bandwidth for speed and power consumption in memories, noted Navraj Nandra, senior director of marketing for the DesignWare Analog and MSIP Solutions Group at Synopsys. “What’s happening in terms of the application requirements is there are people pushing the microprocessor/CPU performance, and that is requiring memory capacity and memory bandwidth. But you can’t have both at the same time even though that’s what the applications demand.”

  The critical tradeoffs in memory are bandwidth, latency, power consumption, capacity and cost. Engineers sometimes forget about the cost part, but it drives a lot of decision points of implementation, Nandra said.

  The capacity versus the speed of the memories must be considered, and each type of memory has different tradeoffs. For example, if the application is driven by speed or bandwidth (gigabits per second), then HBM may be the way to go because it has much higher bandwidth per pin than a DDR memory. If the application is dominated by capacity issues, such as how many gigabytes of storage can be accommodated on the memory interface, then DDR may be a better option.

  “DDR gives the capacity and HBM gives the bandwidth,” Nandra said. “If the question is about power consumption, it’s better with something like a low power DDR compared to say HBM or GDDR. With GDDR or HBM you get performance. With DDR and LPDDR you really get power savings.”
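  As a back-of-the-envelope illustration of those tradeoffs, the choice can be framed as a weighted scoring exercise. The ratings below are rough qualitative placeholders drawn from the descriptions above, not datasheet figures:

```python
# Illustrative sketch: ranking memory types against application priorities.
# Ratings are 1-5 qualitative placeholders (higher is better), not real data.
MEMORY_TYPES = {
    "HBM":   {"bandwidth": 5, "capacity": 2, "power": 3, "cost": 1},
    "GDDR":  {"bandwidth": 4, "capacity": 2, "power": 2, "cost": 2},
    "DDR":   {"bandwidth": 2, "capacity": 5, "power": 3, "cost": 5},
    "LPDDR": {"bandwidth": 2, "capacity": 3, "power": 5, "cost": 4},
}

def rank_memories(weights):
    """Rank memory types by the weighted sum of their qualitative ratings."""
    scores = {
        name: sum(weights[k] * ratings[k] for k in weights)
        for name, ratings in MEMORY_TYPES.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# A bandwidth-driven application favors HBM; a cost-driven one favors DDR.
print(rank_memories({"bandwidth": 5, "capacity": 1, "power": 1, "cost": 1}))
print(rank_memories({"bandwidth": 1, "capacity": 1, "power": 1, "cost": 5}))
```

  The point of the sketch is only that the ranking flips as the application's priorities change, which is exactly the tradeoff Nandra describes.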

  Embedded memory

  If the memory is embedded as part of an SoC, there are a number of architectural considerations.

  “If you look at leakage, for example, the leakage of SRAM is predominantly bit cells,” said Arm Fellow Rob Aitken. “The periphery contributes, but you can fool around with it in the design process. If you have a certain number of bit cells, you’re going to have a certain leakage, so you have to start at that point and work around it. Depending on how ambitious you get, there are circuit design tricks that will let you get rid of some of that, usually at the expense of performance. Some of these include well biasing or power gating of various descriptions, and combinations of these help to save on leakage.”

  To this point, it’s important to understand that if a certain number of bits is needed, with a corresponding amount of leakage, the system architect has to figure out what configuration works best for the various memories they have. “Considerations include things like the bit line length,” said Aitken. “The shorter the bit line, in general, the faster the memory. This is because of the way SRAM sensing works. Essentially, the individual bit cell has to discharge the bit line, and when it has discharged it by enough, the sense amplifier fires and says, ‘Oh, there’s a signal there.’ So the more bit line it has to discharge, the longer it takes.”
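  A first-order model makes the relationship concrete: treat the bit line as a lumped capacitance that grows with the number of cells attached to it, discharged at a roughly constant cell current until the sense-amp threshold is reached. The constants below are illustrative placeholders, not process data:

```python
# First-order sketch of SRAM bit-line sensing delay (placeholder constants).
# Each cell on a bit line adds capacitance, so a longer bit line takes
# proportionally longer to discharge to the sense-amp threshold: t = C*V/I.

C_PER_CELL = 0.5e-15   # farads of bit-line capacitance per cell (assumed)
I_CELL     = 20e-6     # amps of discharge current from one bit cell (assumed)
V_SWING    = 0.1       # volts of swing needed before the sense amp fires

def sense_delay(cells_per_bitline):
    """Time for one cell to discharge the bit line by V_SWING."""
    c_bitline = cells_per_bitline * C_PER_CELL
    return c_bitline * V_SWING / I_CELL

# Halving the bit line halves the first-order sensing delay.
print(sense_delay(256) / sense_delay(128))  # -> 2.0
```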

  Fortunately, for any given memory configuration, choices can be made within a band of possibilities.

  “I can have a memory with a lot of short bit lines, or a small number of longer bit lines, and that’s still the exact same number of words and bits,” he explained. “It’s just the way the columns are arranged in the memory. So there’s this architecture level playing around that can be done, and memory generators let you do that. You can play around and say, ‘In this case I’d like to have column mux 8 for this,’ which is 8 bit lines going to each output bit because that gives a nice balance of speed and power. Or you might say that’s actually faster than you need, so you can go with a column mux of 4 because it gives better power and okay speed. You wind up going through that exercise as an SoC architect to see what’s the best way to implement things, and they also fit into the floorplan differently because some of them are more square, and some of them are more rectangular.”
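  That column-mux exercise reduces to simple arithmetic: for a memory of W words by B bits with column mux M, each output bit is served by M bit lines, so the array has W/M rows (the bit-line length) and B×M columns. The function below is a hypothetical sketch of that arrangement, not any particular memory generator's output:

```python
def array_shape(words, bits, col_mux):
    """Physical array dimensions for a words x bits memory at a given
    column mux: raising col_mux shortens the bit lines (faster) and
    widens the array, changing the floorplan aspect ratio."""
    assert words % col_mux == 0, "words must divide evenly by the mux factor"
    rows = words // col_mux   # cells per bit line
    cols = bits * col_mux     # cells per word line
    return rows, cols

# The same 4096-word x 32-bit memory, arranged two ways:
print(array_shape(4096, 32, 8))  # -> (512, 256): shorter bit lines, faster
print(array_shape(4096, 32, 4))  # -> (1024, 128): taller and narrower, lower power
```

  Both configurations store exactly the same number of bits; only the column arrangement, and therefore the speed/power/floorplan balance, changes.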

  Cost matters

  While power has dominated many design decisions for some time, cost is another critical element in this equation.

  “Just from the memory perspective, you’re looking at whether you want to have more area, which equates to a higher cost,” said Aitken. “There are some second-order tradeoffs that become important with large amounts of memory, such as the ratio of bit cell area to periphery area. The bit cell area is essentially fixed once you pick one. But the periphery area around it can get bigger or smaller, which generally makes it faster or lower power. When you do that, it adds cost to the SoC, which may or may not show up depending on how many of a given instance you have. Often when you look at an SoC, there are a few very large instances that dominate the area, so small changes in periphery area or performance have a huge impact on those. There are a bunch of instances that really don’t matter because if they doubled in size nobody would notice. And then there’s another often small set of architecturally significant instances where these things have some sort of ultimate performance requirements or ultra-low voltage or some aspect of your chip that is important. In those cases, the area cost argument is less important typically than if it is meeting the speed criteria or leakage criteria or whatever else is dominant.”

  Automotive priorities

  This is particularly relevant in the automotive market, where cost is a critical element in deciding which components to use.

  “Power is important, but it hasn’t shown up as the super-critical factor it is in some other markets,” said Frank Ferro, senior director of product management at Rambus. “Power is always important to everyone, but the tradeoff in automotive systems is really cost versus bandwidth. If I had to rank them, I would say performance and price are neck and neck. Power would be a distant third.”

  Automotive is one of the hot markets for chip design today. For self-driving cars, the number of sensors deployed to capture real-time feedback is exploding, and each level of driver assistance requires multiple sensors feeding into advanced logic. The amount of data that needs to be processed is enormous, because in the case of vision and radar this data is being streamed through the sensor network.

  “Chipmakers are looking at memory systems that can handle bandwidth of 100Gbps and higher as you get into different levels of driver-assisted cars, and ultimately self-driving cars,” said Ferro. “In order to do that, the number of memory choices starts to narrow down quite a bit in terms of what can provide you with the necessary bandwidth to process all the data that is coming in.”

  He said that some of the early ADAS system designs included both DDR4 and LPDDR4 because that was what was available at the time. Both have advantages and disadvantages. “DDR4 is obviously the cheapest option available, and those parts are in the highest-volume production,” he said. “They are certainly very cost-effective and very well understood. Doing error correction on DDR4 is simpler and well understood. LPDDR4 was an option that was used, as well.”

  Going forward, Ferro expects a variety of memory types to coexist in different systems. “If they are heavily cost-driven, then they are going to be looking at something like DDR or maybe even LPDDR4. But if they are heavily bandwidth-driven, then they will be looking at something like HBM or GDDR. It’s really a function of where you are in your architecture stage. There are different ADAS levels and what’s required for the system and timeframe, too, because when you are shipping is important. If you are getting a system shipping this year, it would have a different solution than systems being developed for next year or the year after that. Those are all the things that we are seeing on the continuum of time-to-market versus cost.”

  Then, on the high performance side, the bandwidth-power tradeoff is the key challenge from a system-design standpoint. “How do you get more bandwidth to fit in a reasonable area on your chip with reasonable power? For example, if you have an HBM, it is very efficient from a power and area standpoint because it uses 3D stacking technology, so from a power efficiency point of view, HBM is fantastic. And from an area standpoint, one HBM stack takes up a relatively small amount of space, so that’s a really nice looking solution from a power performance perspective. You get great density, you get great power, low power, within a small area,” Ferro said.

  Others agree. “GDDR is faster than DRAM for GPUs, but with HBM there is no comparison,” said Tien Shiah, HBM product marketing manager at Samsung. “HBM is the fastest form of memory with a micropillar grid array. You can have 4- or 8-high stacks, which gives you 1,024 I/Os over 8 channels, with 128 bits per channel. That’s four times the I/O bus width of standard graphics cards. You can hit 2Gbps per pin, and it will be 2.4Gbps at 1.2 volts.”
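  Those pin counts translate directly into peak bandwidth: 8 channels × 128 bits gives the 1,024 I/Os quoted, and at 2Gbps per pin that is 2,048Gbps in aggregate, or 256GB/s per stack. A quick sketch of the arithmetic:

```python
def hbm_peak_bandwidth_gbytes(channels=8, bits_per_channel=128, gbps_per_pin=2.0):
    """Peak per-stack bandwidth from the channel and pin figures quoted above."""
    io_count = channels * bits_per_channel   # 8 * 128 = 1,024 I/Os
    total_gbits = io_count * gbps_per_pin    # aggregate Gb/s across all pins
    return total_gbits / 8                   # convert gigabits/s to gigabytes/s

print(hbm_peak_bandwidth_gbytes())                  # -> 256.0 GB/s at 2Gbps/pin
print(hbm_peak_bandwidth_gbytes(gbps_per_pin=2.4))  # -> 307.2 GB/s at 2.4Gbps/pin
```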

  That’s enormous throughput for external memory. But here, too, the tradeoff is cost.

  “You are going to pay a little bit more for HBM, so if you can absorb the cost it is a great solution,” said Ferro. “If you can’t absorb the cost, then what other companies are looking at is how many DDRs or LPDDRs can they squeeze on a board and putting them side-by-side to try to mimic some of the HBM performance with a more traditional solution.”

  Making sense of memory

  Because the memory market serves many different applications, getting a clear picture of how to approach memory in a design can be tricky. Getting a sense of the various options can help.

  “You can basically take your silicon and break it into several key vertical markets,” said Farzad Zarrinfar, managing director of the IP Division at Mentor, a Siemens Business. “Some vertical markets could be smartphone applications, high-performance computing, automotive, IoT, virtual reality and mixed reality, among others. What you will find is that the silicon technology varies. There isn’t one silicon technology that addresses everything. For example, IoT is very power-sensitive and cost-sensitive, and people take advantage of the most advanced technology in ultra-low-power 40nm or 28nm flavors like ULP or HPC+. Those are a fantastic fit for IoT. In automotive, there is a lot of demand at 28nm and going down.”

  The choice of memory compiler is another piece of the puzzle, and most memory providers offer compilers for their memory products. “An intelligent compiler can be instrumental for providing a solution because it can optimize based on different requirements. For example, some applications may need ultra-low dynamic power, whereas automotive has its own requirements, and the only constant we have there is change. Things are evolving. There are very clear requirements for safety and temperature grades, among a number of other considerations,” Zarrinfar said.

  All of these requirements impact the memory design, which the engineering team needs to take into consideration.

  “When we design memory we have certain targets, which could be +125 degrees C ambient or 150 degrees C ambient, which translates to some junction temperature,” he said. “We have a marketing requirements document that is based on the target market. Then we know what kind of design we need to have. And then you need to have models from the semiconductor foundry that say this is the range of models that we have. Automotive temperatures are forcing the semiconductor foundries to increase the traditional operating temperature ranges. And while not every permutation of every model is supported in every process node and type, adequate verification must be done to make sure the various combinations will achieve the desired result.”

  At the end of the day, the insatiable demand for more bandwidth, less latency, lower power consumption, and more capacity at a lower cost is only expected to increase. With the number of memory types available today, both off-chip and embedded, the system architect must get better all the time at juggling the options for the ideal memory approach to their specific application.
