Answering the White House's call for input on artificial intelligence, IBM argues the technology should be designed to assist people rather than replace them.
In response to a White House request for information about how to use artificial intelligence (AI) for the public good, IBM argues we should focus on a different sort of AI: augmented intelligence.
In May, the White House Office of Science and Technology Policy announced a series of workshops focused on advances in AI, to explore the benefits and challenges of the technology, while also committing to the deployment of AI to improve government services.
AI has made great strides in the past few years, after decades of unfulfilled promise. It's hard to find a major technology company today that isn't looking at AI-related disciplines like machine learning, natural language processing, image recognition, and neural networks as potential sources of growth, efficiency, and innovation.
Everyone working with information technology, if not already dealing with some form of AI, can expect to be doing so soon. Many organizations already rely on AI without realizing it.
Two months after the White House put out its call for input, noting that "AI carries risks and presents complex policy challenges along a number of different fronts," the risks became clear when a Tesla Model S under the control of the vehicle's semi-autonomous Autopilot system crashed into a semitrailer on a Florida highway, killing the car's driver.
Tesla CEO Elon Musk has suggested that the car's vision system failed to distinguish between the side of the semitrailer and the similarly colored sky.
Tesla warns drivers that Autopilot is intended as a feature that augments human driving rather than replaces the need for it. That's how IBM sees such technology too, though the fact that augmented intelligence and artificial intelligence look identical when abbreviated ensures ongoing confusion about AI.
Toward the end of its response to the government, the company suggests the term intelligence augmentation (IA), which avoids the suggestion that the technology is intended as a substitute for human involvement. But the letter reversal may take a while to catch on.
IBM points to the 2011 Jeopardy! victory of its Watson system over two human contestants as the point at which the public awoke to the potential of AI, though other events like the 1997 chess victory of IBM's Deep Blue over Garry Kasparov and the appearance of HAL in 2001: A Space Odyssey qualify as milestones, too.
IBM says it is focused on augmented intelligence, systems that enhance human capabilities, rather than systems that aspire to replicate the full scope of human intelligence.
That appears to be the case with most AI research. In an InformationWeek interview last year, Andrew Moore, Dean of Carnegie Mellon's School of Computer Science, estimated that 98% of AI researchers are focused on engineering systems that can help people make better decisions rather than simulating human consciousness.
IBM says it sees AI helping doctors make sense of medical data and patient information; helping citizens get answers about insurance, taxes, and social programs; helping students and educators learn and teach more effectively; helping financial firms make better decisions about risk and fraud; and helping solve public safety, environmental, and infrastructure challenges.
For IBM, AI is about opportunity. The firm sees the technology as a source of "higher productivity, higher earnings, and overall job growth."
But the term "overall" here masks the technology's potential to disrupt society. Massive job growth that coincides with and outpaces massive job losses may represent an economically positive picture when viewed as a mathematical sum, but it's bound to be socially problematic for those seeing their jobs vanish without a clear, quick, affordable path back to employability.
IBM also acknowledges that AI must be trustworthy. The company argues that people will develop trust as they interact with AI systems over time, as they have done with ATMs. The key, the company suggests, will be ensuring that systems behave as we expect them to.
"But trust will also require a system of best practices that can guide the safe and ethical management of AI; a system that includes alignment with social norms and values; algorithmic accountability; compliance with existing legislation and policy; and protection of privacy and personal information," the company says, noting that it's in the process of developing systems of this sort with its partners, university researchers, and competitors.
IBM urges government officials to engage in fact-based dialogue about what AI can and cannot do, to develop policies that apply AI for the public good, to support relevant educational and workforce training programs, and to fund AI-related research.
AI, IBM concludes, represents a partnership between people and machines, one that may alter the job landscape without eliminating jobs overall. The partnership comes with risks, the company acknowledges, but it contends those risks can be managed and mitigated.
(Cover image: DKart/iStockphoto)
Thomas Claburn has been writing about business and technology since 1996, for publications such as New Architect, PC Computing, InformationWeek, Salon, Wired, and Ziff Davis Smart Business. Before that, he worked in film and television.