A lot has been accomplished in the last year to improve the comprehension, accuracy and scalability of artificial intelligence, but 2019 will see efforts focused on eliminating bias and making decision-making more transparent.
Jeff Welser, vice president at IBM Research, says the organization has hit several AI milestones in the past year and is predicting three key areas of focus for 2019. Bringing cognitive solutions powered by AI to a platform businesses can easily adopt is a strategic business imperative for the company, he said, as is increasing understanding of AI and addressing issues such as bias and trust.
When it comes to advancing AI, Welser said there’s been progress in several areas, including speech comprehension and image analysis. IBM’s Project Debater work has extended AI speech comprehension beyond simple question-answering tasks, enabling machines to better understand when people are making arguments rather than acting as “search on steroids,” he said. One scenario involved asking a question with no definitive answer: whether government should increase funding for telemedicine.
Just as it’s critical to get AI to better understand what is being said, progress has been made in getting it to recognize what it sees faster and more accurately, said Welser. Rather than requiring thousands or even millions of labeled images to train a visual recognition model, IBM has demonstrated that AI can now recognize new objects from as few as one example, which makes training far more scalable.
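The article doesn’t describe how IBM achieves recognition from a single example, but one common way this kind of few-shot recognition is framed is to compare an embedding of the new image against one stored embedding per class and pick the closest match. The sketch below is a minimal nearest-prototype illustration under that assumption; the function names and the random vectors standing in for real image embeddings are hypothetical, not IBM’s method.

```python
import numpy as np

def one_shot_classify(query_emb, support_embs, labels):
    """Label a query by its most similar single-example 'prototype'.

    query_emb    : 1-D feature vector for the new image
    support_embs : list of 1-D vectors, one labeled example per class
    labels       : class names aligned with support_embs
    """
    # Cosine similarity between the query and each one-example prototype.
    sims = [
        np.dot(query_emb, s) / (np.linalg.norm(query_emb) * np.linalg.norm(s))
        for s in support_embs
    ]
    return labels[int(np.argmax(sims))]

# Toy usage: random vectors stand in for embeddings produced by a trained model.
rng = np.random.default_rng(0)
protos = [rng.standard_normal(128) for _ in range(3)]
query = protos[1] + 0.05 * rng.standard_normal(128)   # a slightly perturbed "mug"
print(one_shot_classify(query, protos, ["cat", "mug", "bicycle"]))  # -> "mug"
```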
IBM Research AI introduced a Machine Listening Comprehension capability for argumentative content stemming from its work on Project Debater, pictured with professional human debater, Dan Zafrir, in San Francisco. (Photo Credit: IBM Research).
Another way that AI learning is becoming scalable is getting AI agents to learn from each other, said Welser. IBM researchers have developed a framework and algorithm to enable AI agents to exchange knowledge, thereby learning significantly faster than previous methods. In addition, he said, they can learn to coordinate where existing methods fail.
“If you have a more complex task, you don't have to necessarily train a big system," Welser said. "But you could take individual systems and combine them to go do that task.”
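IBM’s framework for agent-to-agent knowledge exchange isn’t detailed in the article, but the payoff it describes is easy to demonstrate with a toy example: a tabular Q-learning agent that is warm-started from another agent’s learned Q-table behaves well almost immediately instead of learning from scratch. The corridor task, hyperparameters and copy-the-table transfer below are illustrative assumptions, not IBM’s algorithm.

```python
import numpy as np

N_STATES, N_ACTIONS = 10, 2        # corridor states 0..9; action 0 = left, 1 = right; goal = state 9
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1  # learning rate, discount factor, exploration rate

def greedy(q_row, rng):
    """Pick a best action, breaking ties at random."""
    best = np.flatnonzero(q_row == q_row.max())
    return int(rng.choice(best))

def train(q, episodes, rng):
    """Epsilon-greedy Q-learning, updating q in place; returns steps taken per episode."""
    steps_per_episode = []
    for _ in range(episodes):
        state, steps = 0, 0
        while state < N_STATES - 1:
            action = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else greedy(q[state], rng)
            nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
            reward = 1.0 if nxt == N_STATES - 1 else 0.0
            q[state, action] += ALPHA * (reward + GAMMA * q[nxt].max() - q[state, action])
            state, steps = nxt, steps + 1
        steps_per_episode.append(steps)
    return steps_per_episode

rng = np.random.default_rng(0)
q_teacher = np.zeros((N_STATES, N_ACTIONS))
train(q_teacher, 200, rng)          # the first agent learns the task from scratch
q_student = q_teacher.copy()        # "knowledge exchange": hand the learned Q-table to a second agent
print(train(q_student, 5, rng))     # the second agent needs roughly the minimum 9 steps right away
```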
Progress is also being made in reducing the computational resources needed for deep learning models. In 2015, IBM showed it was possible to train deep learning models using 16-bit precision, and today 8-bit precision is possible without compromising model accuracy across all major AI dataset categories, including image, speech and text. Scaling of AI can also be achieved through a new neural architecture search technique that reduces the heavy lifting required to design a network.
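IBM’s 8-bit training relies on techniques the article doesn’t describe, but the basic reason reduced precision saves resources can be shown with generic symmetric quantization: map float32 weights onto 8-bit integers plus a single scale factor, and the values take a quarter of the memory while the approximation error stays small. This is a rough sketch of the general idea, not IBM’s scheme.

```python
import numpy as np

def quantize_int8(weights):
    """Map float32 weights to int8 values plus one scale factor (symmetric quantization)."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float32 weights."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).standard_normal((256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
print(q.nbytes / w.nbytes)                     # 0.25: one quarter of the storage
print(np.abs(w - dequantize(q, scale)).max())  # worst-case error is about half the scale factor
```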
All this progress needs to be tempered by the fact that AI must be trustworthy, and Welser said there will be a great deal of focus on this in the next year. Like any technology, AI can be subject to malicious manipulation, so it needs to be able to anticipate adversarial attacks.
Right now, AI can be vulnerable to what are called “adversarial examples,” where a hacker might imperceptibly alter an image to fool a deep learning model into classifying it into any category the attacker desires. IBM Research has made some progress addressing this with an attack-agnostic measure for evaluating the robustness of a neural network and for directing systems on how to detect and defend against attacks.
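The article doesn’t say which attacks IBM’s robustness measure targets, but the kind of imperceptible alteration it describes is usually demonstrated with the fast gradient sign method: nudge every pixel slightly in the direction that increases the model’s loss. The sketch below is the standard untargeted variant (a targeted attack instead pushes the image toward the label the attacker wants) and assumes `model` is any differentiable image classifier; it is an illustration, not IBM’s technique.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    """Fast gradient sign method: a standard way to craft an adversarial image.

    model   : any differentiable classifier returning logits (assumed)
    x       : batch of input images with values in [0, 1]
    y       : true labels for the batch
    epsilon : maximum per-pixel change, kept small so the edit is imperceptible
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```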
Another conundrum is that neural nets tend to be black boxes: how they come to a decision is not immediately clear, said Welser. This lack of transparency is a barrier to putting trust in AI. Meanwhile, it’s also important to eliminate bias as AI is increasingly relied on to make decisions, he said, but doing so is challenging.
“Up to now we've seen mostly that people have been just so excited to design AI systems to be able to do things," Welser said. "Then afterwards they try and figure out if they're biased or if they're robust or if they've got some issue with the decisions they're making.”