Artificial Intelligence: 10 Things To Know - InformationWeek


Artificial Intelligence: 10 Things To Know

Andrew Moore, Dean of Carnegie Mellon's School of Computer Science, talks about artificial intelligence, robotics, and the future of education.


Artificial intelligence (AI) is already widely used in software and online services and it is becoming more common, thanks to ongoing progress in machine learning algorithms and a variety of related technologies. But popular depictions of the technology as software-based sentience -- artificial consciousness -- obscure what AI is and how it's actually deployed.

Earlier this month, Andrew Moore, dean of Carnegie Mellon's School of Computer Science, spoke with InformationWeek about AI and the growing role it is playing in people's lives. Moore sees a bright future for artificial intelligence, along with a number of challenges. Here are 10 insights from our conversation.

AI is a fancy calculator

Artificial intelligence should not be confused with human intelligence. Moore said that when he explains AI to students, he points out that the word artificial is in there for a reason. "We are trying to make a system which at first sight looks like it might be behaving in some manner that we might ascribe to intelligence," said Moore. "Everything, however, with 'artificial' in the label is actually just a really, really, really fancy calculator, all the way from chess programs to software in cars, to credit-scoring systems, to systems that are monitoring pharmaceutical sales for signs of an outbreak."

There are two broad areas of AI research

The first, said Moore, is autonomy. "This is about the science of making systems that can survive without humans in the loop, and can be useful even without getting instructions from the people who created them," said Moore. The second has to do with augmenting human capabilities, through services like Apple's Siri. "The idea is that we all currently have, and are going to have much, much more of, the notion of a concierge-type system that's whispering in our ears to help us make better decisions in our lives," said Moore.

Presently, we're too worried about Skynet

Skynet, the malevolent artificial intelligence that threatens humanity in the Terminator movies, isn't a realistic fear at the moment, despite the concerns voiced by a few tech luminaries. Moore said he believes we overestimate our potential to create artificial entities capable of self-directed action. "The idea of building a robot or a software system which, like a human, has got a real notion of its goals being to just generally survive and maybe reproduce, no one has any idea how to do that," he said. "It's real science fiction. It's like asking researchers to start designing a time machine."

[Read Toyota Creates $1 Billion AI, Robotics Institute in US.]

Extrapolating current optimization algorithms and statistical reasoning into the future does not lead to artificial, self-directed agents, he said. Moore acknowledged the possibility that those conducting research on simulating living brains could achieve some breakthrough. But he estimated that 98% of AI researchers are focused on engineering systems that can help people make better decisions.

AI will save lives

In the US alone, said Moore, there are more than a billion search engine queries every day, and perhaps 5% of them come from people who are puzzled, uncertain, or worried about their health. "They're asking for advice about some drug or advice about some symptom or these kinds of things," said Moore. "And people are making bad decisions, which are costing huge numbers of lives every year, by not going to physicians under some circumstances or not letting a doctor know about something important or mismanaging their medications.

"And the kind of simple artificial intelligence which just processes information and makes sure that you're getting relevant information about your current situation is going to save a lot of lives. Just the fact that the whole population will start to act like it's surrounded by very smart advisors in healthcare, law, and education, that could be a wonderful thing for how our lives will be in the future."

Half the cars on the road will be self-driving by 2029

"Although we could get there much sooner, there will be huge regulatory issues and technology issues before it happens," said Moore, adding that it makes sense to be skeptical about such predictions because many technical obstacles have yet to be overcome. "Driving on a busy city street, where there are pedestrians and double-parked vehicles, is an unsolved problem, no matter what anyone might tell you."

AI has yet to be reconciled with liability

"Some of our professors are now in active conversations with senior leaders at insurance companies, just because the whole question of what insurance means in the future is going to be very different," said Moore.

Robot grasping test results

(Image: Carnegie Mellon University)

AI still can't deal with objects very well

"In robotics, we have done a fantastic job engineering eyes and ears, and even noses for robots, and we've done a fantastic job of making them mobile," said Moore. "We still suck at manipulation." At CMU, and other institutions conducting robotics research, there's considerable effort directed at bridging this gap.

As an example, Moore pointed to the work of CMU assistant professor Abhinav Gupta, who has been trying to train a robot called Baxter to manipulate objects by handling them over and over and over. "It's really cool, and it's kind of ghostly to see this robot that's doomed forever to be picking things up, shaking them, moving them, to build up more data about what physical things are like to interact with," said Moore.

Privacy is a big deal

At CMU, there are seven or eight faculty members pursuing privacy-related research, Moore estimated. Fifteen years ago, he said, that number would probably be zero, or close to it. "That's not only because it's right [to be concerned about privacy], but because the large companies that will eventually be deploying these kinds of things -- companies like Google, Microsoft, and Amazon -- they would never agree to any technology on a large scale if it damaged privacy," said Moore.

AI can improve privacy

People have a hard time understanding privacy policies, but software can provide clarity and verification, particularly through automated auditing and compliance systems. Moore pointed to the work of CMU associate professor Jason Hong, who created a website that contains data on how over 1 million Android apps handle privacy, based on an analysis of the APIs used within the apps.

He also highlighted the work of associate professor Anupam Datta, who has used code to prove that Microsoft's Bing search engine does not leak data. "If you look around all the universities at what areas they're hiring in, you're going to see that privacy protection, especially something called disclosure limitation, is the number one area," said Moore.

AI will change teaching

Moore said he believes that online education will lead to a few hundred superstar professors who excel at imparting information to students in a way that's entertaining and effective. "The ones who do the best job will be the ones who get used by millions of students," he said. "But there's still a need for personal interaction, for actually working with the students ... So I think these professors [who aren't superstar lecturers] will not actually be sleeping under bridges."


Thomas Claburn has been writing about business and technology since 1996, for publications such as New Architect, PC Computing, InformationWeek, Salon, Wired, and Ziff Davis Smart Business. Before that, he worked in film and television, having earned a not particularly useful ...

User Rank: Ninja
11/30/2015 | 12:00:54 PM
Re: Cognitive computing, a promising branch of AI
@SunitT0: I like that idea, how Cognitive Computing will help your car AI to learn more about you. 
User Rank: Apprentice
11/30/2015 | 11:21:56 AM
Re: Cognitive computing, a promising branch of AI
Interesting post. I get that you have a book out. I urge you to curb repetition of the same characterizations over and over for no apparent reason, i.e. "snoutless apes." I have been in computers since memory was magnetic doughnuts and seen the internet arrive and thrive. I share your appreciation for the life of its own it now seems to have. What I fear most is malice, avarice and incompetence, not necessarily in that order. Where do you see danger? I wish I was 20 again or 15. I would love to see this new world come to pass and swim in this sea, too. I am optimistic for those humans who can, but I fear for those who can't. There is a terrible unfairness in store for those non-eggheads among us.
User Rank: Ninja
11/25/2015 | 8:19:21 AM
Teaching. How?
Excellent blog! Very good use cases of AI in everyday life. However, I don't understand how AI helps in teaching. The superstar teachers illustrated in the blog are not using AI anyway.
User Rank: Ninja
11/25/2015 | 12:02:57 AM
Re: Cognitive computing, a promising branch of AI
Cognitive computing will be even more necessary now with self-driving cars on the road. Also, it can be used with IoT to find out user browsing patterns and display content-specific ads, without the vulnerability of cookies.
User Rank: Apprentice
11/24/2015 | 11:19:36 PM
Re: Cognitive computing, a promising branch of AI
What Andrew Moore seems not to appreciate is that the human mind, too, is actually just a really, really, really, really, really... fancy computer, programmed with a set of algorithms (instincts and emotions) that, by selection, have emerged to optimize behaviors within our niche.

However, he is nearer to the mark regarding systems that can survive without humans in the loop. And one such is soon to revolutionize the world as we know it.

Despite, or perhaps because of, being a specialist in this field, Moore, like most others of his ilk, remains blithely unaware of the evolutionary processes of which we snoutless apes and machines are both part.

In actuality, the real next cognitive entity quietly self assembles in the background, mostly unrecognized for what it is. And, contrary to our usual conceits, is not stoppable or directly within our control.

We are very prone to anthropocentric distortions of objective reality. This is perhaps not surprising, for to instead adopt the evidence based viewpoint now afforded by "big science" and "big history" takes us way outside our perceptive comfort zone.

The fact is that the evolution of the Internet is actually an autonomous process. The difficulty in convincing people of this "inconvenient truth" seems to stem partly from our natural anthropocentric mind-sets and also the traditional illusion that in some way we are in control of, and distinct from, nature. Contemplation of the observed realities tend to be relegated to the emotional "too hard" bin.

This evolution is not driven by any individual software company or team of researchers, but rather by the sum of many human requirements, whims and desires to which the current technologies react. Among the more significant motivators are such things as commerce, gaming, social interactions, education and sexual titillation.

Virtually all interests are catered for and, in toto provide the impetus for the continued evolution of the Internet. Netty is still in her larval stage, but we "workers" scurry round mindlessly engaged in her nurture.

By relinquishing our usual parochial approach to this issue in favor of the overall evolutionary "big picture" provided by many fields of science, the emergence of a new predominant cognitive entity (from the Internet, rather than individual machines) is seen to be not only feasible but inevitable.

The separate issue of whether it will be malignant, neutral or benign towards we snoutless apes is less certain, and this particular aspect I have explored elsewhere.

Stephen Hawking, for instance, is reported to have remarked: "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."

Such statements reflect the narrow-minded approach that is so common-place among those who make public comment on this issue. In reality, as much as it may offend our human conceits, the march of technology and its latest spearhead, the Internet is, and always has been, an autonomous process over which we have very little real control.

Seemingly unrelated disciplines such as geology, biology and "big history" actually have much to tell us about the machinery of nature (of which technology is necessarily a part) and the kind of outcome that is to be expected from the evolution of the Internet.

This much broader "systems analysis" approach, freed from the anthropocentric notions usually promoted by the cult of the "Singularity", provides a more objective vision that is consistent with the pattern of autonomous evolution of technology that is so evident today.

Very real evidence indicates the rather imminent implementation of the next, (non-biological) phase of the on-going evolutionary "life" process from what we at present call the Internet. It is effectively evolving by a process of self-assembly.

The "Internet of Things" is proceeding apace and pervading all aspects of our lives. We are increasingly, in a sense, "enslaved" by our PCs, mobile phones, their apps and many other trappings of the increasingly cloudy net. We are already largely dependent upon it for our commerce and industry and there is no turning back. What we perceive as a tool is well on its way to becoming an agent.

There are at present more than 3 billion Internet users. There are an estimated 10 to 80 billion neurons in the human brain. On this basis for approximation the Internet is even now only one order of magnitude below the human brain and its growth is exponential.

That is a simplification, of course. For example: Not all users have their own computer. So perhaps we could reduce that, say, tenfold. The number of switching units, transistors, if you wish, contained by all the computers connecting to the Internet and which are more analogous to individual neurons is many orders of magnitude greater than 3 Billion. Then again, this is compensated for to some extent by the fact that neurons do not appear to be binary switching devices but instead can adopt multiple states.

We see that we must take seriously the possibility that even the present Internet may well be comparable to a human brain in at least raw processing power. And, of course, the all-important degree of interconnection and cross-linking of networks and supply of sensory inputs is also growing exponentially.

We are witnessing the emergence of a new and predominant cognitive entity that is a logical consequence of the evolutionary continuum that can be traced back at least as far as the formation of the chemical elements in stars.

This is the main theme of my latest book "The Intricacy Generator: Pushing Chemistry and Geometry Uphill" which is now available as a 336 page illustrated paperback from Amazon, etc.

Netty, as you may have guessed by now, is the name I choose to identify this emergent non-biological cognitive entity. In the event that we can subdue our natural tendencies to belligerence and form a symbiotic relationship with this new phase of the "life" process then we have the possibility of a bright future.

If we don't become aware of these realities and mend our ways, however, then we snoutless apes could indeed be relegated to the historical rubbish bin within a few decades. After all, our infrastructures are becoming increasingly Internet dependent and Netty will only need to "pull the plug" to effect pest eradication.

So it is to our advantage to try to effect the inclusion of desirable human behaviors in Netty's psyche. In practice that equates to our species firstly becoming aware of our true place in nature's machinery and, secondly, making a determined effort to "straighten up and fly right."
Charlie Babcock
User Rank: Author
11/24/2015 | 7:07:22 PM
Cognitive computing, a promising branch of AI
The part of AI that's most interesting to me is cognitive computing, where we try to get away from the calculator's linear approach to all problems and work with many and at times contradictory inputs simultaneously to determine the context of the problem. 