Recently I was asked by my company to develop a presentation for staff on the origins, present state and plausible future outcomes for artificial intelligence. This is in keeping with my position as the global lead for our AI Center of Excellence. And that process led to an exploration of Artificial General Intelligence (AGI), when it might arrive and the implications for better or worse.
New artificial intelligence capabilities appear every day. In a single day just recently, an avid reader would have found articles about how AI might one day help us to predict earthquakes, how wearable AI will amplify human intelligence, how the technology is being used to create new alloys for 3D printing, how it is changing agriculture, and more. On that day, there were at least a dozen such headlines about how AI is transforming industry and society.
All this stems from “narrow” AI, algorithms that, while powerful, are only able to do one thing, such as play chess, determine the probability that an oil drill bit is about to fail, or more intelligently route calls to service center agents.
While narrow AI applications certainly appear intelligent, their functionality is limited to their specific programming. For example, if you ask an AI-powered digital assistant to turn on the lights, the natural language processing algorithm identifies certain keywords such as “lights” and “on” and then responds by turning on the lights. That may appear to be a human-like intelligence, but these systems are only responding to programming. At the end of the day, the digital assistant doesn’t understand what is being said in the way that a person does. In the same way, a chess-playing AI can’t recognize images or direct you from point A to point B.
The goal has long been to develop AI to the point where the machine's intellectual capability is functionally equivalent to a human's -- that it learns and thinks much as a person does. This is artificial general intelligence. AGI does not yet exist, even though this is what was discussed 63 years ago at the famous Dartmouth conference where the term artificial intelligence was coined. As stated in a Smithsonian article, “What the scientists were talking about in their sylvan hideaway was how to build a machine that could think.”
AGI is vastly different from AI today insofar as it will take on more human-like characteristics and will be able to transfer knowledge from one domain to another as needed. In other words, AGI will be able to make connections and learn how to learn, to generalize and acquire new skills the way humans do. In theory, this could lead to an AGI that could carry out any task a human could. This is widely thought of as the Holy Grail in AI. At the very least, an AGI would be able to combine human-like thinking with the mind-boggling speed of computers, leading to advantages such as near-instant recall and millisecond number crunching.
Just as we do not fully understand how the brain operates, the complexity of developing this technology remains beyond our grasp. There are AI experts who don’t believe AGI will ever be achieved, or at least not for another hundred years or more. Nevertheless, a survey of these experts revealed a median estimate for AGI of 2040. That’s only a single generation into the future.
Many companies are working towards AGI. For example, there are claims that DeepMind, a division of Google parent Alphabet, has already developed an early form, though there are no current meaningful examples in widespread use. At Google I/O, Google’s AI lead, Jeff Dean, stated that they are looking at "AI that can work across disciplines." Will it really take Google, DeepMind, or another company 20 years or more to develop AGI, or might this be much closer than predicted?
As with all technology, AI arises from the human mind and our collective knowledge. Yet, much of human invention comes from moments of insight, unexpected illumination, enlightenment, genius and even serendipity. While incremental gains may ultimately lead to AGI, it’s the unexpected path that will likely lead to an AGI breakthrough, and the timeline is entirely unpredictable.
Inevitable but not smart
Once AGI exists, what happens to humans? The thought of creating consciousness and advanced intelligence has long been the stuff of nightmares, from Frankenstein to HAL 9000 and the Terminator. As explained by neuroscientist and philosopher Sam Harris, there is an implicit existential danger in such a development. In his TED Talk, he describes how AGI is surely inevitable and that while we may view this as cool, we should be scared.
Harris adds that the AGI future depicted in science fiction movies such as Ex Machina is often seen as fun, engaging, escapist and entertaining. In his view, however, when these plots become real life, the gains we will make with intelligent machines could ultimately destroy us. He warns that we are so far unable to marshal an appropriate emotional response to the dangers ahead. In effect, he says that we are transfixed, like moths drawn to a flame, fascinated by the curious light without thought to the implications of our actions. If, as The New Yorker asks, the arc of the universe bends toward an intelligence sufficient to understand it, will an AGI be the solution, or the end of the [human] experiment?
While AGI may not be far into the future, there are those who disagree. Rodney Brooks, roboticist and co-founder of iRobot, believes this won’t be seen until the year 2300. Arguing that AGI remains distant, his view is: “if AGI is a long way off then we cannot say anything sensible today about what promises or threats it might provide as we need to completely re-engineer our world long before it shows up, and when it does show up it will be in a world that we cannot yet predict.” There are also those who think AGI will take a different form: that narrow AI will continue to be developed until the collection of algorithms forms Comprehensive AI Services (CAIS) that together resemble a general intelligence.
Is our species destined for transhumanism?
Ultimately, there’s no way of knowing just when AGI will appear or in what manner. It could take until 2300 or could happen tomorrow with some yet unannounced and seemingly miraculous achievement. One thing that everyone seems to agree upon is the inherent risk to humanity.
That has led Elon Musk to found Neuralink, with plans for an electrode-to-neuron-based brain-computer interface. Juniper Research believes these Brain Machine Interfaces -- devices that connect computers to the brain -- will reach 25.6 million devices by 2030. Neuralink is hoping to one day build a device with AI that people could access with their thoughts, and ultimately achieve a symbiosis with AI. Musk has said this would allow humans to reach higher levels of cognition and give them a better shot at competing against AGI. The result will be the next generation of humans, the transhuman. Or perhaps The Borg. In other words, if you can’t beat them, join them.
Gary Grossman is Senior Vice President and Technology Practice Lead, Edelman AI Center of Expertise.