Researchers are mining a largely untapped data source — the signals and noise generated by smart-device sensors — to enable technologies that solve the world’s hardest human-machine interface problems, Intel Fellow Lama Nachman, Director of the Anticipatory Computing Lab, told a keynote audience at SEMI’s MEMS & Sensors Executive Congress 2017 (San Jose, Calif.). The resultant applications will accurately detect emotions and anticipate needs, without requiring a Google-like dossier of user habits, she predicted.
“Technology needs to be more active at understanding the needs of the user,” Nachman said. “To do that, our job at the Anticipatory Computing Lab is to really understand what type of help you need in any situation.”
Reviewing earlier stabs at productivity-enhancing personalized assistants, Nachman credited Apple’s Siri and Amazon’s Alexa with keeping it simple: they offer a helping hand only in response to specific user requests. Microsoft’s initial efforts to make unsolicited suggestions, by contrast, wound up irritating users at best and breaking their train of thought at worst. Google Now, she noted, manages ad hoc suggestions that are actually useful (for the most part), but its downside is the deep knowledge it must mine from users’ browsing, location, email, purchasing, and other habits, a collection of data amounting to a dossier on each user.
Instead, Intel’s Anticipatory Computing Lab aims to repurpose the signals and noise produced by the legions of sensors already deployed in smartphones, smart watches and wearables, smart automobiles, and the Internet of Things (IoT) to make ad hoc suggestions that entertain, increase productivity, and even save people’s lives — whether or not they are Intel users — all in real time and without a Google-like secret data bank on user habits.
“Intel is taking all the sensor feeds available now and reinventing the way they can help people with volunteered information that is always relevant to the person, what you are doing, and what goals you are trying to achieve,” Nachman said. “But to do so, there is a very large set of capabilities that we need to understand, such as emotions, facial expressions, nonverbal body language, personal health issues, and much, much more.”
Many of these personal parameters can be gleaned from the normal usage of the sensors built into our smartphones, wearables, and IoT devices: facial expressions, for instance, from a smartphone’s user-facing camera, or the volume of a user’s voice from its microphone. That data can be fused with smart-watch readings of pulse rate, activity, location, and more to anticipate a user’s actions and needs with unprecedented accuracy, according to Nachman.
“To understand emotions ‘in the wild,’ so to speak, it is essential to understand, for instance, when you are angry. Even if you are not cursing or yelling, your computer should understand when you are pissed off,” said Nachman. “Physical factors like breathing fast can be seen by a user-facing smartphone camera, fast heart rate can be measured by your smart watch, but we need to fuse that with facial expressions and a deeper understanding of how individuals behave.”
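As a rough illustration of the fusion Nachman describes, consider the minimal Python sketch below. The baselines, weights, and the facial_anger_score input are all invented for illustration; a deployed system would learn them per user, since, as she notes, individuals behave differently.

```python
# Minimal sensor-fusion sketch for detecting anger "in the wild".
# All baselines and weights are hypothetical, not Intel's model.
import math

def fuse_arousal(heart_rate_bpm: float,
                 breaths_per_min: float,
                 facial_anger_score: float) -> float:
    """Combine watch and camera cues into one arousal estimate (0..1)."""
    # Normalize each cue against a rough resting baseline (assumed values).
    hr = max(0.0, (heart_rate_bpm - 70.0) / 50.0)      # resting ~70 bpm
    resp = max(0.0, (breaths_per_min - 14.0) / 16.0)   # resting ~14 breaths/min
    # Logistic fusion with hand-picked, illustrative weights.
    z = 2.0 * hr + 1.5 * resp + 3.0 * facial_anger_score - 2.5
    return 1.0 / (1.0 + math.exp(-z))

# Elevated pulse and breathing plus an angry expression -> high arousal.
print(f"arousal = {fuse_arousal(105, 24, 0.8):.2f}")   # ~0.90
```

No single cue is trusted on its own; a racing pulse during a workout should not read as anger, which is why the fusion, not any one sensor, carries the decision.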
Besides the aforementioned applications, Intel is pursuing technologies such as real-time care for elderly people and those with disabilities. Indeed, just about everyone can benefit in some fashion from the “guardian angel” model. For instance, Nachman admitted to being a serial food burner, especially when she prepares large spreads for parties. “I need my computer to help me stop burning things,” she said. “Mechanics, repairmen, and even surgeons need their computers to tell them when they have left a tool inside the location they are repairing before they close it up.”
Another essential, according to Nachman, is the perfection of adaptive, personalized learning that engages each user, especially children, in the optimal way for them. Likewise, she claimed that autonomous vehicles need to keep track of what is happening to the people inside the car as well as the environment around the car. “You especially need to understand how comfortable the driver of the car is when he releases control to the autopilot, [gauging] the anxiety level. You also need to keep track of the activities in which the people in the car are engaging, at least insofar as [the activity] affects the occupants’ safety.”
Noise is the signal
Nachman said OEMs and microelectromechanical systems (MEMS) sensor makers are ignoring the noise produced by sensors and actuators, and as a result they are automating functions, such as smartphone camera settings, over which users might want more control. “There is a lot of noise in the environment, but sometimes that is the signal you want to identify,” she said.
Smartphone cameras automatically adjust exposure and focus, for instance, assuming users are interested only in the foreground. But what if the photographer wants to focus on the criminal lurking in the background? Current smartphones let you tap the part of the scene you want properly exposed, but they invariably revert to the foreground unless you keep tapping the background. MEMS sensor makers should, by default, allow users to disable all automatic functions, according to Nachman.
Nachman cited another product category in which the default settings should not be automatic: RFID tags, which are both sensors and actuators. Ordinarily, RFID readers ping the RFID tag with an RF signal, which is harvested and used to actuate a return signal that identifies the product on which the tag is mounted. In her lab, Nachman demonstrated that by analyzing the noise that results when a person stands between the reader and the tag, it is possible to infer customer engagement. “We found that the noise of the human interrupting the RFID ping could be used to find out which item on a shelf is being touched, which one the buyer is interested in the most, and other useful facts for retailers,” she said.
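A toy version of that inference, with invented tag names and readings, might simply flag the tag whose return-signal strength (RSSI) has become noisiest:

```python
# Sketch of the RFID idea: a shopper's hand or body perturbing the
# reader-to-tag path makes that tag's RSSI noisy. Data is invented.
import statistics

def most_disturbed_tag(rssi_log: dict[str, list[float]]) -> str:
    """Return the tag whose RSSI shows the most noise (variance)."""
    return max(rssi_log, key=lambda tag: statistics.variance(rssi_log[tag]))

readings = {
    "tag_cereal": [-52.1, -52.3, -52.0, -52.2, -52.1],  # steady: untouched
    "tag_coffee": [-55.0, -61.4, -48.9, -63.2, -50.7],  # noisy: being handled
}
print("shopper is touching:", most_disturbed_tag(readings))  # tag_coffee
```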
Other examples given by Nachman include extracting a person’s respiration and heart rates from the noise in the reflection of wireless signals already saturating the environment from everybody else’s smartphones. The lab has also experimented with putting smart nose pads on a person’s glasses that could render a noisy nasal version of the user’s voice. Using signal processing to remove the nasal noise yields clear voice signals without the use of a microphone.
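In signal-processing terms, the respiration trick amounts to band-pass filtering: breathing occupies roughly 0.1 to 0.5 Hz, so everything outside that band can be rejected as noise. The sketch below, with a synthetic envelope standing in for real receiver samples (which the talk did not provide), recovers a breathing rate from exactly such a buried signal:

```python
# Band-pass sketch: pull a ~0.25 Hz respiration component out of a
# noisy RF-reflection envelope. The signal here is synthetic.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 20.0                       # assumed sample rate of the envelope (Hz)
t = np.arange(0, 60, 1 / fs)    # one minute of samples
breath = 0.3 * np.sin(2 * np.pi * 0.25 * t)         # 15 breaths/min component
envelope = breath + 0.5 * np.random.randn(t.size)   # buried in wideband noise

# Keep only the respiration band (0.1-0.5 Hz); reject the rest as noise.
b, a = butter(4, [0.1, 0.5], btype="bandpass", fs=fs)
respiration = filtfilt(b, a, envelope)

# Two zero crossings per breath cycle over the 60-second window.
crossings = np.sum(respiration[:-1] * respiration[1:] < 0)
print(f"estimated rate: {crossings / 2:.1f} breaths/min")  # ~15
```

The nose-pad experiment is the same idea in reverse: instead of isolating the noise band, the processing strips it out, leaving the clean voice signal behind.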
The lab found that the noise from the ubiquitous magnetometers in smartphones, wearables, and IoT devices could be mined for a variety of contextual data, for instance whether a person is sitting; standing; walking; exercising; biking; or riding in a car, bus, or airplane. The noise from a smart watch can reveal whether the wearer is talking on the phone, moving a mouse, pushing a button, or stapling. Gyroscope noise can be used to tell, from just a single finger touch, whether a person is intending to point or zoom, thus obsoleting pinch-to-zoom and allowing simultaneous zooming and clicking with one hand.
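One plausible way to mine that noise for context, sketched here with invented per-activity noise profiles, is to reduce each window of samples to a couple of cheap statistics and match them against learned signatures:

```python
# Toy activity classifier over motion-sensor "noise". The per-activity
# profile numbers are invented; a real system would learn them.
import statistics

def noise_features(window: list[float]) -> tuple[float, float]:
    jitter = statistics.stdev(window)                    # overall noise level
    churn = statistics.mean(
        abs(b - a) for a, b in zip(window, window[1:]))  # sample-to-sample change
    return jitter, churn

# Hypothetical learned (jitter, churn) signatures per activity.
profiles = {"sitting": (0.2, 0.1), "walking": (2.5, 1.8), "in_vehicle": (6.0, 0.9)}

def classify(window: list[float]) -> str:
    f = noise_features(window)
    return min(profiles,
               key=lambda k: sum((a - b) ** 2 for a, b in zip(f, profiles[k])))

print(classify([0.1, 0.4, -0.2, 0.3, 0.0, -0.1]))  # quiet trace -> "sitting"
```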
Blood pressure measurements, a capability Apple promised in its initial buildup for the Apple Watch but failed to deliver, can be taken from the noise generated between a smartphone’s two cameras as the phone is pressed against the user’s skin. Noise in the user-facing camera can likewise be used to measure pupil dilation and thereby infer whether the person is drowsy, anxious, or somewhere in between.
The Anticipatory Computing Lab even developed a way for Stephen Hawking to control everything he does with the movement of a single cheek muscle. That movement carries a significant noise element: how much control Hawking has over the muscle varies wildly from day to day.
There is also a power-savings component to the research. “People have been thinking up all sorts of GPS applications since it became ubiquitous,” Nachman noted, but GPS “burns up a lot of battery power that the sensor makers never anticipated. Even worse is how to keep the power consumption down for always-on sensors, which must be able to sense the intended signals plus the noise in between them, and decide when to turn on the application processor, all while keeping power consumption low.”
The answer, according to Nachman, is more configurable smart sensors that know when to turn on the application processor, and when to sense noise they were never originally envisioned to sense, accelerating the pace of innovation without increasing power consumption.
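That always-on pattern can be sketched as a cheap monitoring loop that tracks the noise floor and wakes the application processor only on out-of-band events; the threshold, smoothing constant, and wake event below are assumptions for illustration:

```python
# Sketch of an always-on sensor front end: watch the raw stream
# (signal plus noise) cheaply, wake the application processor rarely.
WAKE_THRESHOLD = 3.0   # how far a sample must rise above the noise floor

def always_on_loop(samples, noise_floor=1.0, alpha=0.05):
    """Yield wake events; otherwise just keep the noise estimate current."""
    for x in samples:
        if abs(x) > WAKE_THRESHOLD * noise_floor:
            yield ("wake_application_processor", x)  # rare, expensive path
        else:
            # Cheap exponential average tracks the ambient noise level.
            noise_floor = (1 - alpha) * noise_floor + alpha * abs(x)

stream = [0.8, 1.1, 0.9, 1.0, 9.7, 0.9]  # one interesting spike
for event in always_on_loop(stream):
    print(event)                          # fires once, for the 9.7 sample
```

The point of the split is the power budget: the cheap path runs constantly on the sensor itself, while the expensive processor is powered only for the rare samples that matter.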