If you aren't currently considering how to use deep neural networks to solve your problems, you almost certainly should be, according to Jeff Dean, a Google senior fellow and leader of the deep learning artificial intelligence research project known as Google Brain.
In a keynote address at the Hot Chips conference here Tuesday (Aug. 22), Dean outlined how deep neural nets are dramatically reshaping computational devices and making significant strides in speech, vision, search, robotics and healthcare, among other areas. He said hardware systems optimized for the small handful of specific operations that make up the vast majority of machine learning models would enable more powerful neural networks.
"Building specialized computers for the properties that neural nets have makes a lot of sense," Dean said. "If you can produce a system that is really good at doing very specific [accelerated low-precision linear algebra] operations, that's what we want."
Of the 14 Grand Challenges for Engineering in the 21st Century identified by the National Academy of Engineering in 2008, Dean believes that neural networks can play an integral role in solving five, including restoring and improving urban infrastructure, advancing health informatics, engineering better medicines and reverse engineering the human brain. But Dean said neural networks offer the greatest potential for helping to solve the final challenge on the NAE's list: engineering the tools for scientific discovery.
"People have woken up to the idea that we need more computational power for a lot of these problems," Dean said.
Google recently began giving customers and researchers access to the second generation of its Tensor Processing Unit (TPU) machine-learning ASIC through a cloud service. A custom accelerator board featuring four of the second-generation devices boasts 180 teraflops of computation and 64 GB of High Bandwidth Memory (HBM).
Dean said the devices are designed to be connected together into larger configurations: a "TPU pod" featuring 64 second-generation TPUs is capable of 11.5 petaflops and offers 4 terabytes of HBM. He added that Google is making 1,000 Cloud TPUs available for free to top researchers who are committed to open machine learning research.
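The pod figures follow directly from the per-device numbers quoted above, as the quick arithmetic check below shows; the variable names are purely for illustration.

```python
# Back-of-the-envelope check of the quoted specs: a pod of 64 Cloud TPU
# devices, each rated at 180 teraflops with 64 GB of HBM.
devices_per_pod = 64
tflops_per_device = 180
hbm_gb_per_device = 64

pod_pflops = devices_per_pod * tflops_per_device / 1000   # 11.52 petaflops
pod_hbm_tb = devices_per_pod * hbm_gb_per_device / 1024    # 4.0 terabytes

print(pod_pflops, pod_hbm_tb)  # ~11.5 PFLOPS and 4 TB, matching the quoted figures
```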
"We are pretty excited about the possibilities of the pod for solving bigger problems," Dean said.
In 2015, Google released its TensorFlow software library for machine learning to open source with a goal of establishing a common platform for expressing machine learning ideas and systems. Dean showed a chart demonstrating that TensorFlow in just over a year and a half has become far more popular than other libraries with similar uses.
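To give a sense of what that common platform looks like in practice, here is a minimal TensorFlow sketch using the library's Keras API; the two-layer architecture and hyperparameters are arbitrary choices for illustration, not anything from Dean's talk.

```python
# A minimal TensorFlow model definition; layer sizes and settings are arbitrary.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),                          # e.g., flattened 28x28 images
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),       # 10-class output
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # training data not shown here
```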
"It's been pretty rewarding to have this rather large community now crop up," Dean said.
The rise of neural networks — which has accelerated greatly over the past five years — has been made possible by tremendous advances in compute power over the past 20 years, Dean said. He added that he actually wrote a thesis about neural networks in 1990. He believed at the time that neural networks were not far off from being viable, needing only about 60 times more compute power than was available then.
"It turned out that what we really needed was about 1 million times more compute power, not 60," Dean said.