Giving the IoT the Power to Pay Attention on Our Behalf
Research by neuroscientists promises to make attentional mechanisms a core feature of artificial general intelligence, and in doing so to make the IoT far more capable than is commonly assumed.
AI is becoming so pervasive that it will soon seem to float invisibly in the air we breathe. That’s the concept of “ambient computing,” which sits at the convergence of AI, real-time streaming, mobility, and the Internet of Things.
Attention is a core cognitive process. Paying attention to our environments is what makes human beings truly intelligent. By the same token, attention is also the core capability of any IoT endpoint. By sensing real-time environmental data and (optionally) being able to respond to it with algorithmic intelligence, IoT-equipped material objects can free people from having to attend closely to many things that would otherwise occupy our minds.
Neuroscientists everywhere are prioritizing research into attentional mechanisms as a core feature of the long-sought artificial general intelligence. Research has shown that the brain relies on an organic feedback loop in which attention drives learning: Innate attention mechanisms help it learn the most relevant sensed dimensions in a physical environment, while trial-and-error exploration of those dimensions helps it learn the best strategies for focusing attention.
Sophisticated attention mechanisms are key to boosting machine learning (ML) models’ ability to learn, adapt, and operate with greater autonomy. Tuning an ML model’s algorithmic attention is fundamental to helping it extract signal from noise in the data more efficiently. Consequently, algorithmic approaches to attention are a core focus of AI researchers everywhere. As can be seen from a review of the recent research literature, AI experts are building sophisticated attention mechanisms into the following areas:
Natural language processing (NLP): Attention governs how well and how rapidly we extract meaning from language, and many NLP tasks have been improved through advances in attention mechanisms. For example, researchers developed an NLP ML model that uses attentional steps to improve the accuracy of English-to-French machine translation. While reading and encoding English inputs, a recurrent neural network (RNN) algorithm dynamically shifts its attention to focus on the parts of the text immediately surrounding the words being translated, thereby significantly outperforming traditional phrase-based translation algorithms in accuracy. Other researchers have built attentional mechanisms to boost the performance of ML for document classification, text comprehension, conversational interfaces, and conversational modeling.
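To make the translation example concrete, here is a minimal numpy sketch of the core attention step: scoring each encoder state against the decoder's current query and blending the states into a context vector. It uses simple dot-product scoring rather than the learned additive scoring of the actual research models, and the toy vectors and the `attend` helper are illustrative inventions, not any paper's implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attend(encoder_states, decoder_query):
    """Score each encoder state against the decoder's current query,
    turn the scores into normalized attention weights, and return the
    focus-weighted blend of encoder states (the context vector)."""
    scores = encoder_states @ decoder_query   # one score per source word
    weights = softmax(scores)                 # weights sum to 1.0
    context = weights @ encoder_states        # weighted summary of the source
    return context, weights

# Toy encoder states for a 4-word source sentence (3-dim hidden vectors)
encoder_states = np.array([[1.0, 0.0, 0.0],
                           [0.0, 1.0, 0.0],
                           [0.0, 0.0, 1.0],
                           [0.5, 0.5, 0.0]])
# A decoder query most similar to the third word's state
decoder_query = np.array([0.0, 0.0, 2.0])

context, weights = attend(encoder_states, decoder_query)
# weights ≈ [0.10, 0.10, 0.71, 0.10]: attention concentrates on word 3
```

The decoder would repeat this at every output step, so its focus shifts dynamically across the source sentence as translation proceeds.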
Interactive gaming: Maintaining focused attention is critical when you’re engaged in a real-time, interactive activity such as online gaming. Researchers have applied attentional mechanisms to a Google DeepMind algorithm to improve its ability to adaptively learn diverse Atari 2600 games without human intervention. In addition, built-in attention mechanisms help humans monitor the training process by focusing their attention on the regions of the game screen that the automated agent is attending to as it plays. Still other researchers have parallelized this attention process so that an automated agent can algorithmically focus on multiple relevant elements of a game as it learns how to play it. All of these attention techniques are fundamentally applicable to transfer learning, a hot frontier in AI that is vitally important in equipping bots, agents, and things with the ability to reuse existing algorithmic learning.
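The game-screen idea above can be sketched as an attention map over screen regions: split the frame into a grid, score each cell, and softmax the scores so a human observer can see which region dominates the agent's focus. The `region_attention` helper and its mean-intensity scorer are hypothetical stand-ins for whatever learned saliency function a real agent would use.

```python
import numpy as np

def region_attention(frame, grid=(3, 3)):
    """Split a grayscale frame into grid cells, score each cell by mean
    pixel intensity (a stand-in for a learned scorer), and softmax the
    scores into an attention map over screen regions."""
    h, w = frame.shape
    gh, gw = grid
    cells = frame.reshape(gh, h // gh, gw, w // gw)
    scores = cells.mean(axis=(1, 3))          # (gh, gw) score per region
    e = np.exp(scores - scores.max())
    return e / e.sum()                        # attention map, sums to 1

# Toy 6x6 "screen" with a bright object in the bottom-right region
frame = np.zeros((6, 6))
frame[4:6, 4:6] = 1.0
attn = region_attention(frame)
# attn[2, 2] is the largest entry: the agent attends to the bright corner
```

Overlaying such a map on the live game screen is one simple way to let humans watch where the agent is "looking" during training.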
Generative design: Creativity often involves focusing on the elements of a scene, picture, image, or design that we find most interesting and then re-applying them in some different context. Attention has also proven useful in AI-driven generative models, which algorithmically generate new text, images, sounds, and other objects from pre-existing data. Researchers have built a convolutional neural network (CNN)-based generative model that uses attention mechanisms to segment an image into semantically meaningful categories so that, at the pixel level, they can be generatively recombined into entirely new images. Researchers from Google DeepMind use attention-based RNNs to generate images incrementally, attending only to specific parts of the input image and modifying only specific elements of the target image.
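The "modify only specific elements of the target image" idea can be illustrated with a soft attentive write: blend new content into a canvas through a Gaussian attention mask so only the attended region changes. This is a loose simplification of that style of incremental generation (the actual DeepMind models use learned Gaussian filterbanks); `attentive_write` and its parameters are assumptions for illustration.

```python
import numpy as np

def attentive_write(canvas, patch, center, sigma=1.0):
    """Blend `patch` into `canvas` through a soft Gaussian attention
    mask centered at `center`; pixels far from the center are left
    essentially untouched."""
    h, w = canvas.shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.exp(-((ys - center[0])**2 + (xs - center[1])**2)
                  / (2 * sigma**2))
    return canvas + mask * patch

canvas = np.zeros((8, 8))
canvas = attentive_write(canvas, patch=1.0, center=(2, 2), sigma=0.8)
# Pixels near (2, 2) are strongly modified; far corners stay near zero
```

Repeating such localized writes step by step, each at a newly attended location, is the basic recipe for building up an image incrementally.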
All of these advances are directly applicable to IoT-based ambient computing, in which intelligent objects will support AI-driven, environmentally contextualized conversational UIs, computer vision, interactive gaming, and generative tooling.
Attention-based learning will be fundamental for ambient computing, which relies on increasingly adaptive and autonomous sensor grids adept at distinguishing signal from noise. These approaches will enable every smart speaker to attend to the distinctions between voices and background sounds, between separate voices, between separate words and phrases uttered by any given voice, and between the shifting intents and sentiments being expressed. Likewise, algorithmically adaptable attention will be essential for AI-powered things to rapidly distinguish objects in their visual, audio, tactile, olfactory, and other sensory fields. Swarming things, such as drones, won’t be able to function without the ability to fluidly shift their AI-guided focus to dynamically changing environments and the windows of threat and opportunity they present.
However, the IoT human-machine interface may be where attention mechanisms have their biggest impact on ambient computing. Here’s an excellent research paper from Sweden and the UK on the need for adaptive attention mechanisms to manage “interruptive IoT” scenarios. This refers to use cases in which IoT devices share attention-related data and non-disruptively guide human users’ organic attention to help them achieve desired outcomes. In other words, if intelligent IoT devices can minimize the need for users to shift their gaze to a gadget’s screen or speak into its mic in order to achieve some end result, an attention algorithm can help AI-imbued devices fade into the background.
It’s all about finding the right balance of organic and algorithmic attention in every IoT usage scenario. As the paper’s authors state, “We adopt the widespread assumption amongst context-aware system designers that by better identifying the right time, place, and modality, smart IoT environments could potentially transmit more information (i.e. notifications about the status of otherwise imperceivable processes) to human agents without significantly disturbing ongoing tasks.”
That’s the key to driving ambient computing into the fabric of our interruptible lives while still keeping humans in the loop. It’s all about ensuring that our intelligent devices continue to serve our needs while mitigating the ever-present risks of human inattention in this crazy world.
Jim is Wikibon's Lead Analyst for Data Science, Deep Learning, and Application Development. Previously, Jim was IBM's data science evangelist.