What Will Be the Next Big Thing in AI?
What's next in AI? Here are five expert predictions about tomorrow's brave new world.
AI is the most impactful technological advance of our time, transforming virtually every aspect of business, economic, and social life. It will be a hard act to follow, yet researchers are already working to take AI to the next level with an array of promising innovations.
Are you ready to peek into AI's future? Here's what five leading experts, interviewed online, predict.
1. Supercharged searches
Generative AI (GenAI) could soon replace traditional search methods as the primary way to find information, says Eric Bradlow, vice dean of AI and analytics at The Wharton School of the University of Pennsylvania. "GenAI not only allows for richer and more specific prompts, but also facilitates the use of multimodal inputs, such as text, voice, and video," he explains. More importantly, it generates responses rather than simply retrieving them. "While search engines locate information, generative AI creates it."
GenAI is already here, and its capabilities will continue to expand, Bradlow predicts. "The scope of tasks that AI can perform is growing, making it an increasingly integral part of everyday life and work."
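To make Bradlow's distinction concrete, here's a minimal Python sketch of the two interaction patterns. The search index and the generative call are both stand-ins invented for this illustration, not any particular product's API; what matters is the shape of the result, a list of documents versus a newly composed answer.

```python
from dataclasses import dataclass

@dataclass
class Document:
    url: str
    snippet: str

# Traditional search: the system locates existing documents
# and hands the synthesis work back to the user.
def keyword_search(query: str) -> list[Document]:
    index = [
        Document("example.com/a", "Lisbon travel guide ..."),
        Document("example.com/b", "Best time to visit Portugal ..."),
    ]
    return [d for d in index
            if any(w in d.snippet.lower() for w in query.lower().split())]

# Generative pattern: the prompt can be richer (and multimodal),
# and the system composes a new answer instead of a list of links.
def generative_answer(prompt: str, image_bytes: bytes | None = None) -> str:
    # Hypothetical model call; a real client would send the prompt
    # (and any attached media) to a hosted model here.
    return f"Synthesized answer to: {prompt!r} ({len(image_bytes or b'')} bytes of image context)"

print(keyword_search("visit Portugal"))   # documents you still have to read yourself
print(generative_answer("Plan a 3-day Lisbon trip for a family of four"))
```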
2. Advanced multimodal models
The next big thing in AI will likely be advanced multimodal models that can seamlessly integrate and process different types of data, including text, images, audio, and video, in more human-like ways, says Dinesh Puppala, regulatory affairs lead at Google. "We're moving beyond models that specialize in one type of data toward AI systems that can understand and generate across multiple modalities simultaneously, much like humans do," he notes.
Advanced multimodal models will enable more natural and context-aware human-AI interactions. "They'll be better at understanding nuanced queries, interpreting visual and auditory cues, and providing more holistic and relevant responses," Puppala predicts. Such a model could, for example, analyze a video of a person speaking and understand not just the words but also the tone, facial expressions, and body language, grasping the full context of the communication, he says.
In practical applications, advanced multimodal models could lead to more sophisticated virtual assistants, highly accurate content moderation systems, and AI that can generate multimedia content based on complex prompts.
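As a rough illustration of what "multiple modalities in one request" might look like in code, consider the sketch below. The MultimodalRequest and MultimodalModel names are invented for this example, and real systems vary widely; the key idea is several media types fused into a single query.

```python
from dataclasses import dataclass, field

@dataclass
class MultimodalRequest:
    text: str                     # the spoken words, transcribed
    audio: bytes = b""            # raw audio: tone and prosody
    video_frames: list[bytes] = field(default_factory=list)  # expression, body language

class MultimodalModel:
    """Hypothetical model that fuses all modalities into one judgment."""
    def analyze(self, req: MultimodalRequest) -> dict:
        # A real model would embed each modality and attend across them;
        # here we simply return a plausible fused interpretation.
        return {
            "literal_content": req.text,
            "inferred_tone": "hesitant" if req.audio else "unknown",
            "visual_cues": f"{len(req.video_frames)} frames examined",
        }

req = MultimodalRequest(text="I'm fine with the new deadline.",
                        audio=b"...", video_frames=[b"frame0", b"frame1"])
print(MultimodalModel().analyze(req))
```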
3. Autonomous AI agents
Ilya Meyzin, head of data science at Dun & Bradstreet, anticipates the widespread adoption and advancement of autonomous AI agents. "These are AI systems capable of performing tasks with minimal human intervention," he explains.
A major upgrade from current chatbots, autonomous AI agents can use digital tools to access databases, deploy various software applications, and analyze real-time data. "This allows them to interact with different environments, adapt to new situations, and make a broad range of decisions autonomously," Meyzin says. Current AI agents require some level of human feedback and oversight. "In the future, as agents become even more advanced, they will approach complete autonomy."
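The mechanics Meyzin describes reduce to a plan-act-observe loop: a model picks a tool, the runtime executes it, and the result informs the next step. The Python sketch below uses made-up tools (query_database, fetch_live_price) and a hard-coded planner purely to show the pattern.

```python
# Minimal agent loop: the planner (stubbed here) picks a tool,
# the runtime executes it, and the observation feeds the next step.
def query_database(sql: str) -> str:
    return "42 overdue invoices"           # stand-in for a real DB call

def fetch_live_price(ticker: str) -> str:
    return f"{ticker}: 101.25"             # stand-in for a market-data API

TOOLS = {"query_database": query_database, "fetch_live_price": fetch_live_price}

def choose_action(goal: str, history: list[str]) -> tuple[str, str]:
    # A real agent would ask an LLM to plan; we hard-code one step.
    if not history:
        return "query_database", "SELECT COUNT(*) FROM invoices WHERE overdue"
    return "done", ""

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):             # a step cap is one simple form of oversight
        tool, arg = choose_action(goal, history)
        if tool == "done":
            break
        history.append(f"{tool}({arg!r}) -> {TOOLS[tool](arg)}")
    return history

print(run_agent("How many invoices are overdue?"))
```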
Autonomous AI agents will revolutionize various industries by automating complex manual processes, enhancing decision-making, improving efficiency, and reducing costs, Meyzin predicts.
4. Smart robots
When it comes to robotics, Peter Stone, chief scientist and deputy president at Sony AI, says there are three developments he's most excited about. First is embodiment -- making physical robots more intelligent. Second is causal reasoning, which gives AI models (and robots) a better understanding of the world and of what causes things to happen. Third is the amalgamation of strengths from different AI paradigms -- symbolic, neural, and probabilistic. "Probabilistic AI is needed to enable AI agents to reason about likelihoods and contingencies," he notes.
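As one concrete flavor of that probabilistic piece, here's a small Bayesian-update example, our own illustration rather than anything from Sony AI: a robot revising its belief that a grasp will succeed as noisy sensor readings arrive.

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
# H: "the object is graspable"; E: a positive reading from a noisy sensor.
def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / evidence

belief = 0.5                      # initial uncertainty about graspability
for reading in [True, True, False]:
    if reading:                   # sensor says "graspable" (90% hit rate, 20% false alarms)
        belief = update(belief, 0.9, 0.2)
    else:                         # sensor says "not graspable"
        belief = update(belief, 0.1, 0.8)
    print(f"belief after reading {reading}: {belief:.2f}")
```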
Research on embodiment will help bring down the cost of robots and could eventually make the technology practical enough that robots beyond vacuum cleaners become everyday household appliances, Stone says.
5. Metacognition arrives
Metacognition in AI -- systems that can think about the way they think -- is on the mind of Isak Nti Asare, co-director of the cybersecurity and global policy program at Indiana University. "This capability, often described as AI self-awareness, is a necessary frontier to cross if we are to build trustworthy systems that can explain their decisions," he says.
Current AI systems, while advanced, often operate as "black boxes" where even their creators cannot fully explain their outputs. "Achieving metacognition in AI would enable these systems to self-reflect, evaluate their processes, and improve performance autonomously," Asare says. "They could provide detailed explanations for their decisions, making their operations transparent and understandable to humans, which is critical in areas like healthcare, finance, and autonomous driving."
Metacognitive AI systems will also be able to identify and correct their own errors without human intervention, adapt to new situations more effectively, and incorporate ethical considerations into their decision-making processes. Asare estimates that such systems could become a reality within the next 10 to 15 years.
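One research direction pointing toward that kind of self-reflection is a generate-critique-revise loop, in which a system evaluates its own draft answer and surfaces its concerns before committing. The Python sketch below is illustrative only; draft, critique, and revise are stand-ins for model calls, not an existing API.

```python
def draft(question: str) -> str:
    return "Approve the loan."                 # stand-in for a first-pass model answer

def critique(question: str, answer: str) -> list[str]:
    # A metacognitive system would inspect its own reasoning;
    # here we return a canned concern to show the shape of the loop.
    return ["Debt-to-income ratio was never checked."]

def revise(question: str, answer: str, concerns: list[str]) -> str:
    return "Request income verification before approving."

def answer_with_reflection(question: str) -> dict:
    ans = draft(question)
    concerns = critique(question, ans)
    if concerns:
        ans = revise(question, ans, concerns)
    # Surfacing the self-critique is what makes the decision explainable.
    return {"answer": ans, "self_identified_issues": concerns}

print(answer_with_reflection("Should we approve this loan application?"))
```

Returning the self-critique alongside the answer is what makes the decision auditable -- the transparency Asare argues is critical in domains like healthcare and finance.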