China responds to US tariffs with its own charges and a new AI chip

Published: 2018-09-25 00:00
Author: Ameya360
Source: newelectronics

The tariff dispute between China and the US continues, with the US imposing tariffs on a further $200 billion of Chinese products.

Consumer tech, including smart watches and Bluetooth speakers, has managed to avoid these tariffs, but home modems, Internet gateways and routers have reportedly not escaped the charges.

China has, according to the Xinhua news agency, accused the US of employing “trade bullyism practices”.

In response to the US actions, Beijing has announced tariffs on $60bn worth of US goods.

In related news, Alibaba, the Chinese e-commerce company, has launched a semiconductor company, called Pingtouge, in an effort to develop its own AI chip, the AliNPU. The news comes after Jack Ma, the company's soon-to-retire executive chairman, was reported as saying that China needs to control core technology, including chips, to avoid over-reliance on US imports.

The AliNPU will be designed to support cloud computing and IoT applications such as autonomous driving, smart cities and smart logistics.

The company is also looking to develop quantum processors.
