Deep learning is "transforming computing." That is the message Nvidia hopes to hammer home at its GPU Tech Conference. On that theme, Nvidia has styled itself as a firebrand, a catalyst and a deep learning enabler and, in the long run, a deep profiteer.
Among the telltale signs that Nvidia is betting its future on this branch of artificial intelligence (AI) is the recent launch of its “Deep Learning Institute,” with plans to increase the number of developers it trains to 100,000 this year. Nvidia trained 10,000 developers in deep learning in 2016.
Over the last few years, AI has made inroads into “all parts of science,” said Greg Estes, Nvidia’s vice president responsible for developer programs. AI is becoming an integral component of “all applications ranging from cancer research, robotics, manufacturing to financial services, fraud detection and intelligent video analysis,” he noted.
Nvidia wants to be known as the first resort for developers creating apps that use AI as a key component, said Estes.
“Deep learning” is in the computer science curriculum at many schools, but few universities offer a degree specifically in AI.
At its Deep Learning Institute, Nvidia plans to deliver “hands-on” training to “industry, government and academia,” according to Estes, with a mission to “help developers, data scientists and engineers to get started in training, optimizing, and deploying neural networks to solve the real world problems in diverse disciplines.”
How can you fit AI into apps?
Kevin Krewell, principal analyst at Tirias Research, told EE Times, “The challenge of Deep Learning is that it’s just hard to get started and figuring out how to fit it into traditional app development.”
He noted, “I think Nvidia is trying to get a wider set of developers trained on how to fit machine learning (ML) into existing development programs. Unlike traditional programs where algorithms are employed to perform a task, ML is a two-stage process with a training phase and a deployment phase.”
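The two-stage process Krewell describes can be sketched in a few lines of plain Python. This is an illustrative toy (a one-variable linear model fit by gradient descent on made-up data), not Nvidia's tooling or any particular framework: the training phase learns parameters from example data, and the deployment phase simply applies the frozen parameters to new inputs.

```python
# Toy illustration of the two-stage machine-learning workflow:
# phase 1 trains a model from examples; phase 2 deploys it for inference.
# (Hypothetical 1-D linear model and data, for illustration only.)

def train(examples, lr=0.01, epochs=1000):
    """Training phase: fit y = w*x + b by stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:
            err = (w * x + b) - y   # prediction error on one example
            w -= lr * err * x       # gradient step for the weight
            b -= lr * err           # gradient step for the bias
    return w, b                     # the learned "model"

def deploy(model, x):
    """Deployment phase: parameters are frozen; we only predict."""
    w, b = model
    return w * x + b

# Training data sampled from y = 2x + 1
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
model = train(data)
print(deploy(model, 4))  # close to 9 once training has converged
```

In a traditional program the mapping from input to output would be coded by hand; here it is learned in `train` and only applied in `deploy`, which is why the two phases can run on different hardware (e.g., training on GPUs, inference elsewhere).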
Nvidia’s edge is that “Machine learning performs better with an accelerator like a GPU, rather than relying just on the CPU,” Krewell added.
As Nvidia readies its Deep Learning Institute, the company is also striking a host of partnership deals with AI “framework” communities and universities. Among them are Facebook, Amazon, Google (TensorFlow), the Mayo Clinic, Stanford University and Udacity.
Such collaborations with framework vendors are critical, because every developer working on AI apps needs to have cloud and deep learning resources.
As Jim McGregor, principal analyst at Tirias Research, told us, “The most difficult thing for app developers are the cloud resources and large data sets. As an example, the mobile suppliers are promoting machine learning on their devices, but to develop apps for those devices you need cloud/deep learning resources and the data sets to train those resources, which the mobile players are not providing.”
Nvidia can provide the hardware resources and a mature software model, but “developers still need the service provider and data sets,” McGregor added.
According to Nvidia, the company is also working with Microsoft Azure, IBM Power and IBM Cloud teams to port lab content to their cloud solutions.