NOVOSENSE: Must-Know Basic Facts about Digital Isolators
Electrical isolation is a crucial concept in the design of electrical systems. Isolating the high- and low-voltage systems achieves the following important functions:

1. It makes the high- and low-voltage systems independent of each other and improves the anti-interference capability of the low-voltage system.
2. It ensures safe interaction between the high- and low-voltage systems so that the systems can work safely.
3. It protects users' personal safety by preventing electric shock from the high voltage.

This Technical Sharing introduces the basics of electrical isolation in detail, including the definition and importance of electrical isolation, the classification and definition of isolation levels, and the standards and specifications for isolator certification.

Definition and Importance of Electrical Isolation

Electrical isolation uses isolators to prevent destructive electrical signals from being transmitted between high- and low-voltage subsystems, while allowing the safe electrical signals required for system operation to pass between them. Three system interaction scenarios are discussed below:

1. When two low-voltage systems interact, electrical signals can be freely transmitted between the two systems. In this state, the systems are usually considered to be working safely.

2. When high- and low-voltage systems interact directly without an isolator, the large potential difference between them means the high-voltage system may transmit destructive electrical signals to the low-voltage system, causing it to malfunction or even suffer permanent damage. This not only affects the functional safety of the systems, but can also endanger personal safety and lead to major accidents.
3. After an isolator provides electrical isolation between the high- and low-voltage systems, destructive electrical signals are blocked by the isolator, while the safe electrical signals required for normal operation still pass between the systems, ensuring their functional safety.

Classification and Definition of Isolation Levels

Based on isolation performance, electrical isolation is divided into different levels. Functional isolation, basic isolation, dual isolation, and reinforced isolation are typical levels:

1. Functional isolation achieves only the isolation necessary for normal device operation and provides no electric-shock protection; the PCB material of a circuit board is one example.

2. Basic isolation provides only a single stage of isolation and protects the system as long as the insulation layer is intact. Once the insulation layer fails, however, the system is at risk of electric shock. The isolation voltage of basic isolators is typically around 3 kV, with a few reaching 5 kV.

3. Dual isolation adds a second isolation layer on top of basic isolation to provide redundancy, so the system remains safe even if one stage fails. In this way, the isolation voltage can reach 5 kV and above.

4. Reinforced isolation is also single-stage isolation, but it achieves the same isolation strength as dual isolation.

Standards and Specifications for Isolator Certification

Most common isolators today adopt basic or reinforced isolation. To be certified at either level, an isolator's performance must comply with regional codes and safety standards.
In terms of isolator standards and certifications, the International Electrotechnical Commission (IEC) is the world's earliest non-governmental international electrotechnical standardization organization. The IEC works with organizations in multiple regions to develop international safety standards for electrical and electronic devices. Local standards are developed by different organizations in different regions; the United States, Canada, Germany, and China, for example, all have their own.

Isolators must meet local standards before they can be legally marketed and before the electrical or electronic products fitted with them can be sold to customers. Typically, the first page of an isolator's datasheet lists the standards the isolator has passed. The reinforced isolation level for digital isolators was mainly proposed by VDE and promoted by the IEC as a global standard. NOVOSENSE is a leader in digital isolators in China and the first semiconductor company there to obtain VDE reinforced isolation certification.

Under the current VDE standards, both basic and reinforced isolators have corresponding test standards and parameter specifications. In the maximum surge voltage test, both basic and reinforced isolators must withstand 50 bipolar surge impulses, and the final measured voltage must not exceed 1.3 times the maximum surge voltage in the datasheet; in addition, reinforced isolators must pass a surge voltage test of at least 10 kV.

In applications, partial discharge occurs when there are defects inside a device. A partial discharge does not affect the insulation strength in the short term, but under the repeated impact of high voltage, the defect will eventually lead to breakdown of the isolation barrier. These defects therefore need to be detected through non-destructive testing.
Basic isolators must pass a surge test at 1.5 times VIOSM, while partial discharge testing of reinforced isolators is conducted at 1.875 times VIOSM.

Based on the working life of chips at different temperatures and voltages, the working voltage under the minimum rated life, together with the failure rate over the target life, can be fitted with a Weibull distribution; VIORM and VIOSM can then be obtained according to the requirements of the VDE correlation coefficients. As the comparison table shows, reinforced isolators have a longer working life and a lower failure rate over that life.

After passing the above tests, isolators are deemed to have met the requirements for VDE certification.

To sum up, electrical isolation concerns both the working safety of devices and the personal safety of users, and is an indispensable part of electrical system design.
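As a rough illustration of the test thresholds above, the sketch below computes the surge and partial-discharge test voltages from a hypothetical VIOSM figure. The 8 kV value is invented for illustration and is not taken from any real datasheet:

```python
# Hypothetical maximum surge isolation voltage (VIOSM) from a datasheet.
V_IOSM = 8000.0  # volts; example value only, not a real part's rating

# Basic isolators are surge-tested at 1.5x VIOSM.
basic_surge_test = 1.5 * V_IOSM          # 12000.0 V

# Reinforced isolators undergo partial discharge testing at 1.875x VIOSM.
reinforced_pd_test = 1.875 * V_IOSM      # 15000.0 V

# In the surge test, the measured voltage must not exceed
# 1.3x the datasheet maximum surge voltage.
surge_measurement_limit = 1.3 * V_IOSM   # about 10400 V

print(basic_surge_test, reinforced_pd_test, surge_measurement_limit)
```

Note that these multipliers come straight from the article's description of the VDE tests; a real qualification flow also involves the 50 bipolar surge impulses and life-time derating mentioned above.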
Release time: 2024-04-09 11:58
What Are the Types of Automotive Chips?
Electronics has become an integral part of our lives, and its influence extends far beyond traditional electronic devices to a wide range of applications in the automotive industry. The automotive chip is an important part of electronics in the automotive industry. This article from the AMEYA360 electronic components procurement network focuses on the basic concept of automotive chips and the common types.

What is an automotive chip?

An automotive chip is an embedded-system chip used mainly in automotive control units and other electronic devices. Through its processor and other circuits, an automotive chip controls various functions in the car, such as engine control, seat adjustment, and the audio system. The main job of an automotive chip is to keep the car running efficiently, stably, and safely under various extreme conditions.

Common types of automotive chips

1. Microcontroller (MCU)

Microcontrollers are the most common type of automotive chip. They are usually used to control various electrical devices in a car, including engine control, door locking, breathing lights, and power windows. A microcontroller generally combines memory, a processor, input/output interfaces, and other logic circuits on a single chip.

2. Sensor chip

Sensor chips are another important category. They measure various physical parameters in the car, such as temperature, light, and sound, and transmit these parameters to the car's control units. A sensor chip usually consists of a sensor and interface circuits.

3. Analog-to-Digital Converter (ADC)

An ADC is a chip that converts analog signals into digital signals. ADCs are usually used to measure electrical signals in a car and convert them into digital form so that computer systems can read and process them. An ADC usually consists of an analog front-end circuit, a sampling circuit, and a digital circuit.

4.
Signal Processor (DSP)

A signal processor is a chip dedicated to digital signal processing. In automobiles, DSPs are typically used for audio processing and image processing. They usually consist of a processor core and associated logic circuitry.

5. Memory chip

A memory chip is a chip used to store data. In cars, memory chips typically hold data that is updated constantly as the car is used, such as maintenance records, driver preferences, and music libraries.

In short, an automotive chip is an embedded chip designed to control the various electrical devices in a car. Common types include microcontrollers, sensor chips, analog-to-digital converters, signal processors, and memory chips. For automakers, choosing the right automotive chip can greatly improve a vehicle's performance, efficiency, and safety.
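To make the ADC's role concrete, here is a minimal sketch of the quantization step an ADC performs. The 5 V reference and 10-bit width are illustrative choices, not values tied to any particular automotive part:

```python
def adc_convert(voltage, v_ref=5.0, bits=10):
    """Quantize an analog voltage into a digital code, as an ADC does.

    v_ref is the full-scale reference voltage; bits sets the resolution
    (a 10-bit ADC distinguishes 2**10 = 1024 levels).
    """
    levels = 2 ** bits
    code = int(voltage / v_ref * levels)  # map the voltage onto a code
    return min(max(code, 0), levels - 1)  # clamp to the valid code range

# A 2.5 V input on a 5 V, 10-bit converter lands at mid-scale:
print(adc_convert(2.5))  # 512
```

A real automotive ADC adds the analog front-end and sampling circuits described above; this sketch only models the final analog-to-code mapping.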
Release time: 2023-05-22 13:57
Facebook Builds Chip Team, ASIC
A Facebook executive confirmed reports that the social networking giant is hiring chip engineers and designing at least one ASIC. The news came at the @Scale event here, where Facebook announced that five chip companies will support Glow, an open-source, deep-learning compiler that it backs.

Facebook “is absolutely bringing up a silicon team focused on working with silicon providers, and we have a chip we’re building, but it’s not our primary focus,” said Jason Taylor, vice president of infrastructure at Facebook. The chip is “not the equivalent of [Google’s] TPU” deep-learning accelerator, he added, declining to provide further details on its focus or time frame.

Working with the estimated 50 companies designing AI accelerators is one focus for the new Facebook chip group. “There will be a lot of [accelerator] chips in the market,” said Taylor at a press roundtable. “The big question is whether the workloads they are designed for are the important ones at the time.”

In a keynote, Taylor described Glow as a generic compiler to let developers target any of the emerging deep-learning accelerators for inference in the cloud or at the edge of the network. It does not target client systems such as smartphones.

“We expect that there will be hardware fragmentation [in inference accelerators]. Our work with Glow is to help machine-learning experts design neural nets and not have to do the work required to tune them” to each unique chip.

“We know that the fragmentation is coming because no one knows what combination of [hardware] resources [such as on-chip memory blocks and multiply-accumulate arrays] will win, so we’ll let developers focus on the high-level graphs without hand-coding for the specifics of hardware.”

Jason Taylor described Glow as a compiler for inference on cloud and edge networks. (Images: Facebook)

Glow takes an AI graph produced by a framework such as TensorFlow or Caffe2 and renders it into byte code for hardware accelerators, explained Taylor.
The compiler includes several tools, including an instruction scheduler, a linear algebra optimizer, a memory allocator to generate efficient code for a chip’s specific memory configuration, and a CPU-based reference implementation for testing the accuracy of the hardware, according to a Facebook blog.

Cadence, Esperanto Technologies, Intel, Marvell, and Qualcomm said that they will support Glow on future chips. Taylor said that he expects to add others to the list. “That’s one of the benefits of it being open-source.”

One senior chip expert described Glow as a framework for deploying a neural network in production systems. Its input would be a graph created in a framework such as TensorFlow or Caffe2. Some established chipmakers already supply similar software. For example, Nvidia’s Tensor RT takes in a graph from a framework and outputs Cuda code for its GPUs.

Traditionally, compilers are tightly optimized for a specific chip. But “what a compiler is these days is quite a bit broader than in the past — the kinds of optimizations in Glow have to do with identifying large portions of a graph that can be rendered to a hardware accelerator,” said Taylor.

Glow is the latest example of an effort to plug the gap between software and hardware in the fast-moving world of deep learning. For example, Nvidia’s Tensor RT is now in its fifth version, though it was first released just a year ago. Some accelerator startups express frustration at the level of work needed to support the wide variety of software frameworks and their changes.

Facebook, Microsoft, and others are backing ONNX, a standard way to express a graph with its weights. In December, the Khronos Group released NNEF, a hardware abstraction layer for deep-learning accelerators. For its part, Glow is a single component of Pytorch 1.0, a collection of open-source projects that includes the merged Caffe2 and Pytorch frameworks.
The first developer conference for Pytorch 1.0 is slated for October in San Francisco.

In a separate talk, Facebook engineering manager Kim Hazelwood rattled off a list of a dozen different deep-learning workloads that the social network uses, employing at least four different kinds of neural nets. Every day, the AI apps generate more than 200 trillion inferences, translate more than five billion texts, and automatically remove more than a million fake accounts.

Some of Facebook’s inference tasks require 100 times more compute than others, she said. Today, Facebook runs the jobs on a handful of CPU and GPU servers that it has designed. Moving from general-purpose to custom hardware would require tailoring chips specific to those still-changing workloads, Hazelwood told EE Times after her talk. She declined to give any insights into Facebook’s thoughts on using any custom AI accelerators.

Facebook alone uses at least five kinds of neural networks across at least a dozen deep-learning apps.

One observer speculated that Glow would be an ideal tool to enable the company to adopt a handful of accelerators suited to its various workloads. Its semiconductor team could help cull through the options to select a few chips and perhaps suggest customizations for some of them.

Separately, Facebook posted a blog describing a new software tool it created that uses deep learning to debug code. SapFix can automatically generate fixes for specific bugs and then propose them to engineers for approval and deployment to production, it said. So far, Facebook has used SapFix to accelerate the process of shipping code updates to millions of devices using the Facebook Android app. Facebook said it will release a version of the tool but did not state when.
Release time: 2018-09-17
Intel FPGA Unit to Buy eASIC
Intel Corp. bid to buy eASIC for an undisclosed sum, aiming to fold the pioneer of a low-cost alternative to ASICs into its FPGA group. The x86 giant aims to accelerate eASIC’s road map and hire all of its 120 employees, including eASIC CEO Ronnie Vasishta, when the deal closes, probably in the third quarter.

Back in 2015, when it decided to scuttle plans for an IPO, eASIC had reported revenues of $67.4 million and a $1.1 million loss. Since that time, it rolled out products in a 28nm process and talked about plans to support Arm cores. Investors in the 19-year-old company are said to have balked at supporting its plans for future nodes and Arm cores.

“It’s a small company today but we believe we can scale it and make it a real differentiator over Xilinx,” Intel’s larger rival in FPGAs, said Dan McNamara, general manager of Intel’s Programmable Solutions Group, formerly Altera.

eASIC defined a proprietary approach for taking a wafer with pre-defined logic and memory and customizing it with interconnects in just one or two mask layers. The resulting structured ASIC has a fraction of the up-front cost of a full ASIC, although it lacks an FPGA’s programmability. At one time, Intel and LSI Logic were among rivals offering roughly similar approaches.

“It’s a great technology… we had been looking at strategic partnerships with them when we decided to just buy the company,” said McNamara. “They have a good patent portfolio and a solid team. Our job is to scale this business quickly and get them the investment they have needed,” he added.

If the deal goes ahead as planned, eASIC could be offering Arm cores early next year. Longer term, its next-generation products could offer Intel’s Embedded Multi-Die Interconnect Bridge (EMIB), a proprietary die-to-die package that is a low-cost rival to 2.5D chip stacks. So far, Intel’s only public products using EMIB have been its FPGAs, which employed it as a bridge to serdes, HBM memory stacks, and Xeon processors.
“We have a bunch of sampling products today in different verticals and a lot of products coming in the next couple of years from Intel leveraging EMIB,” he said.

EMIB has not seen use in third-party products, although it was introduced as a differentiator for Intel’s still-small foundry business. Long term, Intel also will consider making eASIC chips itself. TSMC and Globalfoundries make eASIC’s latest products.

Dan McNamara (left) with eASIC CEO Ronnie Vasishta outside Intel’s headquarters. (Image: Intel)

The deal is not expected to move the needle significantly in Intel’s competition with Xilinx in FPGAs. Both companies have been reporting solid growth, with annual revenues north of $2 billion for their products, which are used in a wide variety of markets including machine learning.

In its first quarter, Intel reported that sales of FPGAs were up 17% year over year and a whopping 150% in the data center, where they are used as accelerators in Microsoft’s new servers. In March, Xilinx, which commands 49% of the FPGA market according to Semico Research, announced Everest, a 7nm accelerator for big data and AI that it aims to ship next year.
Release time: 2018-07-13
AI Comes to ASICs in Data Centers
Three years ago, when AI chip startup Nervana ventured into the uncharted territory of designing custom AI accelerators, the company’s move was less perilous than it might have been, thanks to an ASIC expert that Nervana — now owned by Intel — sought for help. That ASIC expert was eSilicon.

Two industry sources independently told EE Times that eSilicon worked on Nervana’s AI ASIC and delivered it to Intel after the startup was sold. eSilicon, however, declined to comment on its customer. Nervana’s first-generation AI ASIC, called Lake Crest, was one of the most closely watched custom designs for AI accelerators.

Leveraging its cumulative work with the customer on AI/2.5D system design, Santa Clara, California-based eSilicon rolled out this week a machine-learning AI platform called “neuASIC.” The platform includes “a library of AI-targeted functions that can be quickly combined and configured to create custom AI algorithm accelerators,” explained eSilicon. A novice AI chip designer could be advised to lean on someone like eSilicon, just as Nervana did. After all, the ASIC expert can offer a solid technology foothold in AI ASICs backed by real-world experience.

With many companies planning AI chips optimized for certain AI workloads, their biggest roadblock is the constant state of change in AI algorithms. As Patrick Soheili, vice president of business and corporate development at eSilicon, noted, everyone knows that ASICs provide the best power and performance for AI acceleration. But an incautious designer could end up with a static ASIC that’s obsolete on arrival. That’s where eSilicon hopes to come in.

By using such tools as a Design Profiler and AI Engine Explorer, a variety of IP — some developed by eSilicon and some from third parties — available on the neuASIC platform can be configured as “AI tiles,” explained the company.
By turning a knob in eSilicon’s tools, customers can do early power, performance, and area analysis of various candidate architectures, noted Soheili. Knowing the functions that need to be added and customized, Soheili said, eSilicon can add them as AI tiles to the neuASIC platform. This will offer eSilicon’s customers much-needed flexibility. In short, the neuASIC platform provides a scalable, configurable ASIC chassis with swappable AI tiles, according to eSilicon.

eSilicon’s ASIC chassis offers a scalable ASIC architecture.

One industry source who spoke on the condition of anonymity pointed out that “eSilicon isn’t known for its own ‘inherent’ AI skills (like a breakthrough AI processor architecture, for example).” But he added that packaging a collection of IPs, including some off the shelf, is “not a bad thing.” It gives customers “a starting point for custom ASICs.” There’s one caveat, though. He said, “It’s still left up to the customer to know how to architect something novel from those building blocks.”

Memory blocks

Asked which IP offered on eSilicon’s neuASIC platform is particularly unique, Richard Wawrzyniak, principal analyst for ASIC & SoC at Semico Research Corp., told EE Times that it would be the memory blocks. “They have created memory blocks that incorporate some logic functions, so they are more efficient in operation. These are called Mega Cells: hardware-optimized AI primitives and functions. These blocks include MAC blocks, transpose memories, and multi-port memories.”

He added, “The idea of some ‘computing-in-memory’ functionality is starting to catch on, with a few academic papers recently being discussed at ISSCC. But so far as I know, eSilicon is the only IP vendor to offer a version of this concept commercially to the market.”

Furthermore, Wawrzyniak noted, “Beyond discrete IP blocks, eSilicon has created what they call Giga Cells: full AI IP subsystems.
These subsystems include swappable AI tiles, a convolution engine, and other AI functions.”

AI ASICs for data centers

So who would be developing these new AI ASICs? Purpose-built chips for running artificial-intelligence tasks in data centers are all the rage among the Super 7 (Amazon, Facebook, Google, Microsoft, Alibaba, Baidu, and Tencent). Big internet companies such as Amazon and Facebook are looking for their own AI chips, explained Soheili. They follow in the footsteps of Google, which first unveiled the TPU, its own AI accelerator ASIC specifically designed for neural-network machine learning.

GPUs and FPGAs are key engines that have driven — and still drive — AI in data centers. But when Google announced the TPU in 2016, it explained that it chose to build its own ASIC rather than double its data center footprint. The real issue that internet giants confront is the cost of maintaining big data centers in so many different places. Soheili said that the Super 7 want AI ASICs optimized for certain AI workloads “because they need to keep the cost of ownership down for their data centers.” As data centers proliferate, maintenance costs and capital expenses for hardware, software, infrastructure, and especially power consumption are getting out of hand, he explained.

Soheili sees eSilicon’s potential customers not only among the Super 7, but also among some 50 AI chip startups developing accelerators to be installed in data centers.

Expect more competition in IP subsystems for AI

If the industry’s appetite for AI ASICs is indeed soaring, who else will compete with eSilicon to help companies design them? Semico’s Wawrzyniak told us, “I believe Broadcom and GlobalFoundries also offer a platform and services to customers to allow them to do a design for AI silicon.” He added, “However, Broadcom is only currently working with TSMC, and GlobalFoundries will want to produce the silicon once the design using their platform is completed.” That makes eSilicon “the only tier-one ASIC
provider that does offer their customers a choice of TSMC or Samsung as the foundry partner for AI designs currently.”

eSilicon styles itself as a key service provider with a platform that offers “customized, targeted IP offered in 7-nm FinFET technology and a modular design methodology.” Meanwhile, there are plenty of IP vendors who claim to offer IP blocks geared toward providing or facilitating AI functionality, added Wawrzyniak. “Some notable examples are ARM, Cadence Tensilica, Synopsys, Ceva, Imagination, Videantis, MIPS, and others. The parade of these types of AI-assist IP blocks is just starting, and I expect many others to enter the market offering these types of capabilities to designers.” Furthermore, Wawrzyniak noted, “eSilicon is the first company I am aware of that is creating IP subsystems specific to AI functionality, and I also expect many others to follow them into this area.”

Where 2.5D matters

When Intel announced its Nervana neural network processor (NNP), it dwelt outspokenly on the speed necessary for training deep-learning networks. Intel found its answer in “high-capacity, high-speed high-bandwidth memory (HBM) to provide the maximum level of on-chip storage and blazingly fast memory access” while using separate pipelines for computation and data management.

To help Nervana, eSilicon appears to have provided “custom pseudo two-port memories designed by eSilicon, TSMC’s Chip-on-Wafer-on-Substrate (CoWoS) technology, 28G SerDes, and four second-generation high-bandwidth memory stacks (HBM2).” Furthermore, eSilicon offered 2.5D ecosystem management, silicon-proven HBM2 PHY, ASIC physical design, 2.5D package design, manufacturing, assembly, and testing. All of these key elements for AI acceleration — including the 2.5D integration of HBM2 memories — are now part of the neuASIC platform’s offering.
They are designed to provide vastly higher bandwidth, highly parallel connections to memory stacks, and a significant reduction in power consumption.

The first clue that Nervana might be an eSilicon customer, according to one industry source, was that “the eSilicon block diagram looks a lot like the [early] Nervana chip.”

Intel’s Nervana is a large linear algebra accelerator on a silicon interposer next to four 8-Gbyte HBM2 memory stacks. (Source: Hennessy and Patterson, “Computer Architecture: A Quantitative Approach”)

Without naming names, eSilicon claimed in its announcement that the company built the industry’s first AI ASIC. Soheili told us that eSilicon is currently engaged with several tier-one system providers and high-profile startups to deploy the neuASIC platform and its associated IP. Initial applications, according to eSilicon, will focus on the data center and information optimization, human/machine interaction, and autonomous vehicles.
Release time: 2018-06-07
MediaTek Goes Back to ASIC
BARCELONA — MediaTek is going back into the ASIC business.

In a one-on-one interview with EE Times at the Mobile World Congress, Joe Chen, president of MediaTek, told us that the Taiwan consumer chip giant has big plans to provide “premier ASIC design services” to system vendors in the consumer, communication, and computing markets. “MediaTek has been known as an ASSP company for a long time,” said Chen, “but we now see growth opportunities by becoming an ASIC company.”

But hang on. Didn’t most chip companies ditch the ASIC business model in favor of a “platform-based” ASSP business? Unless you know up front that your ASIC customer’s product will sweep consumers in a certain market segment off their feet, it’s hard to justify the cost of ASIC designs. Chen acknowledged, “Yes, true, over the last 20 years, the ASIC market has shrunk.” But he quickly added, “We think the industry has also changed a lot over time.”

Consider, for example, Apple or Samsung in the smartphone business. Both have succeeded with internally designed proprietary chip solutions that differentiate their end products, Chen said. Other large system vendors, too, are looking for “unique” solutions of their own, he said. However, as Chen pointed out, many system companies have either lost — or have not kept up — their internal ASIC expertise.

Sizable ASIC business

Around 2015, MediaTek began noticing changes in the air, as big customers began to demand unique solutions for their systems. In addition to ASICs that MediaTek developed for game consoles from Sony and Microsoft, Chen said, “By 2017, we’ve reached a milestone by winning a sizable ASIC business in the consumer space.” In 2018, MediaTek is poised to expand its ASIC business further by wooing customers in the communication segment. Chen, however, is neither naming his ASIC customers nor disclosing the specific product categories for which the company is designing ASICs.
In recent years, the global electronics market has gone through substantial transformations. First, as Chen explained, big wins have been limited to a handful of system companies, which have won by developing their own captive solutions. Second, a host of M&A activities among global semiconductor companies has decreased the number of chip suppliers. The merged companies are bigger, but are not necessarily growing by creating new business models. MediaTek is among the few survivors in the global chip industry with a plan to grow more organically, including its stated goal of moving back into the ASIC service business.

Asked who offers ASIC services these days, Chen pointed to companies such as Faraday Technology and GUC, both based in Taiwan, which offer SoC design services. Chen hopes to enter the market by delivering highly differentiated ASIC services. “We will go far beyond ‘place and route’ provided by Faraday or GUC,” he said.

As Chen sees the outlook, MediaTek has two factors going for its ASIC ambitions. “First, we have a broad-range IP portfolio… cutting-edge IPs in everything from communications, computing and connectivity to multimedia, other peripherals and RF,” Chen noted. Second, “In the ASIC business, your customers need to trust you,” he added. “For a big ASIC solution, you need a partner who has a good reputation.” Chen believes that MediaTek has earned that reputation.

MediaTek has a team of ASIC designers separate from the company’s ASSP product development teams. The ASIC business requires a separate culture and methodologies, Chen explained.
Release time: 2018-02-27
Carriers Step Away from ASICs
SAN JOSE, Calif. — A group of telecom carriers will demonstrate at Mobile World Congress progress in moving their networks off proprietary systems and onto open-source software. The Open Networking Foundation (ONF) will show its latest code, based on the P4 programming language, running on systems using chips from Barefoot Networks, Cavium, and Mellanox.

The demo marks a significant change for ONF, which initially based its work on its OpenFlow protocol. The group shifted last year to P4, an open-source project launched by Barefoot, after vendors hit limits with OpenFlow. The group’s goal now is to evangelize support for P4 among networking leaders. Broadcom, the dominant vendor of merchant switch chips, is said to be showing interest in P4. So far, Cisco, the leading provider of ASIC-based networking systems, is not showing interest in P4. ONF aims to cultivate an ecosystem of so-called white-box networking OEMs, such as Quanta and Delta, using its open-source software.

It’s still early days for ONF’s open-source code for the style of edge-cloud networks carriers want to deploy. The so-called edge clouds aim to be more open, lower-cost alternatives to the central offices carriers use today, which are based largely on systems running a complex mix of ASICs and proprietary protocols. With the shift to P4, ONF has code for access networks in field trials, but software for mobile core networks is still in the lab, with trials a year or two away.

“We’re at an interesting inflection point,” said Timon Sloane, vice president of marketing and ecosystems for ONF, calling the latest demos a second generation of software-defined networks (SDNs). The latest demo is “a glimpse of the edge cloud of the future,” blending in work on SDN standards such as ETSI’s Network Functions Virtualization, he said.
“We learned a ton from OpenFlow, but it has limitations, so the community strategically shifted to P4 and the P4 Runtime to solve problems in a more comprehensive way,” said Sloane, adding that ONF is no longer actively developing the OpenFlow protocol.

OpenFlow has been used by Google, China Mobile and others to access the data forwarding pipeline of networking ASICs. However, it cannot access all of their functions and, unlike P4, it does not enable programming that pipeline. “OpenFlow turned out to be non-deterministic, with nuanced differences between systems, so tiny adjustments were needed for different ASICs. That hampered the ability to bring on multiple suppliers. P4 is more deterministic and allows a complete definition of the forwarding pipeline,” he said.

The P4 code is available as open source and was first demoed in September. The current demo is the first using systems from multiple vendors. Later this year, ONF hopes to host another demo with at least two more silicon vendors participating. Startup Barefoot created its chips in tandem with the P4 open source project; Cavium and Mellanox modified the firmware of their existing chips to support P4 in the demo.

To date, Broadcom has focused on making the programming interfaces for its switches more accessible; last week it made its table APIs open source. Sloane characterized the move as a step in the right direction: it opens up the forwarding plane but does not yet make it fully reconfigurable.
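The pipeline model the articles above describe can be illustrated in miniature. The sketch below is a hypothetical Python analogue of P4's match-action table abstraction, in which the control plane (in practice, via P4 Runtime) populates table entries that map header-field values to forwarding actions; the table name, fields, and addresses are invented for illustration and are not from any real P4 program.

```python
# Minimal sketch of a match-action table, the core abstraction that a
# P4 program describes for a switch's forwarding pipeline. Packets are
# modeled as plain dicts of header fields.

class MatchActionTable:
    """Exact-match table mapping one header field's value to an action."""

    def __init__(self, field, default_action):
        self.field = field
        self.default_action = default_action
        self.entries = {}  # filled in at runtime by the control plane

    def add_entry(self, key, action):
        self.entries[key] = action

    def apply(self, packet):
        # Look up the packet's field value; fall back to the default action.
        action = self.entries.get(packet.get(self.field), self.default_action)
        return action(packet)

def forward(port):
    """Action factory: send the packet out a given egress port."""
    def action(packet):
        packet["egress_port"] = port
        return packet
    return action

def drop(packet):
    """Default action: no egress port means the packet is dropped."""
    packet["egress_port"] = None
    return packet

# A one-table "pipeline": route on destination IP address.
ipv4_table = MatchActionTable(field="dst_ip", default_action=drop)
ipv4_table.add_entry("10.0.0.1", forward(1))
ipv4_table.add_entry("10.0.0.2", forward(2))

pkt = ipv4_table.apply({"dst_ip": "10.0.0.1"})
print(pkt["egress_port"])  # 1
```

In real P4, the program additionally defines the parser and the table's match kinds (exact, LPM, ternary), which is the "complete definition of the forwarding pipeline" Sloane contrasts with OpenFlow's fixed pipeline.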
Release time: 2018-02-06
AI to Spur Uptick in ASIC Design Starts
Design starts for artificial intelligence (AI) voice-activated device ASICs will be increasing at a compound annual rate approaching 20 percent by 2021, nearly twice the 10.1 percent CAGR of all ASIC design starts between 2016 and 2021, according to a new report by Semico Research.

With the surge in popularity of voice-activated digital assistants such as Amazon Echo and Google Home, plus the general frenzy of work being done around AI, both startups and established companies are pushing hard to develop silicon that adds voice-activated capabilities and other AI features to products, especially in the consumer arena. According to Rich Wawrzyniak, senior analyst for ASIC/SoC research at Semico, AI in the form of pattern recognition, voice recognition and language translation will find its way into almost every device and application that has a processor, DSP or FPGA and some level of computational resources in the coming years.

"There is demand for these types of capabilities at every level and in every market segment to some degree," Wawrzyniak said in a press statement. "We believe these capabilities will become 'check-box' items at the very least and could spark the next great surge in the semiconductor market."

Semico's report on ASIC design starts for 2017, authored by Wawrzyniak, projects that ASIC unit shipments will grow by 10.1 percent annually from 2016 to 2021, led by growth in the industrial and consumer market segments. Total ASIC shipments grew by 7.7 percent globally last year, according to the report. While many traditional end-use applications are experiencing slower growth rates due to market saturation and reduced demand, those associated with the Internet of Things (IoT) are taking off.
In addition to IoT and AI, ASIC growth rates for products associated with the smart grid, wearable electronics, solid state drives, drones, industrial IoT, advanced driver assistance systems (ADAS) and 5G infrastructure are also expected to grow faster than the broader market, Semico said.  Basic SoC design starts in the consumer segment are expected to grow at a 19 percent CAGR and industrial IoT ASIC design starts are projected to grow by 25 percent through 2021, according to the Semico report.  The report also predicts that IoT ASIC unit shipments will eclipse 1.8 billion units next year.
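To make the report's growth figures concrete, the short sketch below compounds the two cited CAGRs over the 2016-2021 window; shipments are indexed to 100 in 2016, an assumption for illustration only.

```python
# Back-of-the-envelope check of the compound annual growth rates cited
# by Semico: 10.1% for all ASIC design starts vs. ~20% for AI
# voice-activated device ASICs, compounded over five years (2016-2021).

def compound_growth(start, annual_rate, years):
    """Value after applying an annual growth rate for a number of years."""
    return start * (1 + annual_rate) ** years

all_asics = compound_growth(100, 0.101, 5)  # all ASIC design starts
ai_asics = compound_growth(100, 0.20, 5)    # AI voice-device ASICs

print(round(all_asics, 1))  # 161.8 -> ~62% cumulative growth
print(round(ai_asics, 1))   # 248.8 -> ~149% cumulative growth
```

The comparison shows why a roughly doubled CAGR more than doubles the cumulative growth: compounding a 20 percent rate for five years yields about 2.5x, versus about 1.6x at 10.1 percent.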
Release time: 2017-11-20
Bitcoin ASIC Maker Bets on AI
A Beijing-based company that got its start designing ASICs for bitcoin mining announced it is sampling its first machine-learning accelerator. Bitmain Technologies’ BM1680 is optimized for both training and inference on deep neural networks.

The chip is sold in a fan-cooled module called the SC1. Bitmain said it has been trained for AlexNet, GoogLeNet, VGG and ResNet neural networks and is compatible with Caffe, Darknet, YOLO and YOLO2 models as well. Bitmain counts China’s big data center operators — Alibaba, Baidu and Tencent — among its top targets. It said it will also consider building its own machine-learning service.

The company plans to release technical details of the chip in a talk by its chief executive, Micree Zhan, at a Beijing event on November 8. It targets a wide range of uses, including image and speech recognition, autonomous vehicles and enhanced security cameras.

Bitmain was founded in 2013 and released its first chip, a SHA-256 accelerator for bitcoin mining, in November of that year. It claimed strong sales until late 2014, when the Mt. Gox exchange went bankrupt, tanking the bitcoin market. The company made a comeback with the BM1384 chip as the bitcoin market rose in 2015. Executives claimed in a recent report that they now have more than 600 employees, sell hundreds of thousands of mining systems a year and run their own mining business.

By the end of 2015, Bitmain had decided to start work on a machine-learning chip. “Now, after only a year and a half, we have the mass-production chips in hand,” said Zhan in a press statement.
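For readers unfamiliar with what a SHA-256 mining ASIC actually accelerates, the sketch below shows the proof-of-work loop in miniature: double-SHA-256 a block header with successive nonces until the hash falls below a difficulty target. The header bytes and difficulty are toy values for illustration, not real bitcoin data.

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, difficulty_bits: int, max_nonce: int = 2_000_000):
    """Search for a nonce whose double-SHA-256 is below the target.

    A mining ASIC does exactly this search, but in massively parallel
    fixed-function hardware instead of a Python loop.
    """
    target = 2 ** (256 - difficulty_bits)
    for nonce in range(max_nonce):
        digest = double_sha256(header + nonce.to_bytes(4, "little"))
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
    return None  # no valid nonce found within the search budget

result = mine(b"toy-block-header", difficulty_bits=16)
print(result is not None)
```

Each extra difficulty bit doubles the expected number of hashes per solution, which is why the workload migrated from CPUs to dedicated ASICs like Bitmain's.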
Release time: 2017-10-26
