Baidu updated its open-source benchmark for neural networks, adding support for inference jobs and low-precision math. DeepBench provides a target for optimizing chips that help data centers build larger and, thus, more accurate models for jobs such as image and natural-language recognition.
The work shows that it is still early days for neural networks. So far, results for the training version of the spec, launched last September, are available only for a handful of Intel Xeon and Nvidia graphics processors.
Results for the new benchmark on server-based inference jobs should be available on those chips soon. In addition, Baidu is releasing results on inference jobs run on devices including the iPhone 6, iPhone 7, and a Raspberry Pi board.
Inference in the server has longer latency but can use larger processors and more memory than is available in embedded devices like smartphones and smart speakers. “We’ve tried to avoid drawing big conclusions; so far, we’re just compiling results,” said Sharan Narang, a systems researcher at Baidu’s Silicon Valley AI Lab.
At press time, it was unclear whether Intel would have inference results ready for today's release; the company is still working on results for its massively parallel Knights Mill processor. AMD expressed support for the benchmark but has yet to release results for its new Epyc x86 processors and Radeon Instinct GPUs.
A handful of startups, including Cornami, Graphcore, Wave Computing, and Nervana (acquired by Intel), have plans for deep-learning accelerators.
“Chip makers are very excited about this and want to showcase their results, [but] we don’t want any use of proprietary libraries, only open ones, so these things take a lot of effort,” said Narang. “We’ve spoken to Nervana, Graphcore, and Wave, and they all have promising approaches, but none can benchmark real silicon yet.”
The updated DeepBench supports lower-precision floating-point operations and sparse operations for inference to boost performance.
“There’s a clear correlation in deep learning of larger models and larger data sets getting better accuracy in any app, so we want to build the largest possible models,” he said. “We need larger processors, reduced-precision math, and other techniques we’re working on to achieve that goal.”
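To make the precision trade-off concrete, here is a minimal NumPy sketch of the kind of reduced-precision math the updated benchmark targets. This is not DeepBench code: the layer shapes are arbitrary stand-ins, whereas DeepBench draws its kernel sizes from real production models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes; arbitrary stand-ins, not DeepBench parameters.
weights = rng.standard_normal((1024, 1024), dtype=np.float32)
activations = rng.standard_normal((1024, 128), dtype=np.float32)

# Full-precision (float32) matrix multiply as the reference.
ref = weights @ activations

# Reduced-precision path: store and multiply in float16.
low = (weights.astype(np.float16) @ activations.astype(np.float16)).astype(np.float32)

# Accuracy cost of the cheaper math.
rel_err = np.max(np.abs(ref - low)) / np.max(np.abs(ref))
print(f"max relative error from float16 math: {rel_err:.2e}")
```

On real accelerators, the float16 path also halves memory traffic relative to float32, which is where much of the speedup comes from; NumPy itself will not show that gain.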