Suddenly, artificial intelligence (AI) is everywhere. For decades, the dream of creating machines that can think and learn like humans seemed like it would be perpetually out of reach, but now artificial intelligence is embedded in the phones we carry everywhere, the websites we use every day and, in some cases, even in the appliances we use around our homes.
The market researchers at IDC have predicted that companies will spend $12.5 billion on cognitive and AI systems in 2017, 59.3% more than they spent in 2016. And by 2020, total AI revenues could top $46 billion.
In many cases, AI has crept into our lives and our work without us realizing it. A recent survey of 235 business executives conducted by the National Business Research Institute and sponsored by Narrative Science found that while only 38% of respondents thought they were using AI in their workplace, 88% of them were actually using AI-based technologies like predictive analytics, automated reporting and voice recognition and response.
This highlights one of the big issues with artificial intelligence: A lot of people don't really understand what AI is.
Adding more confusion to the mix, researchers and product developers who work in AI throw around a lot of technical terms that can be baffling to the uninitiated. If they don't work directly on AI systems, even veteran IT professionals sometimes have difficulty explaining the differences between machine learning and deep learning or defining what exactly a neural network is.
With those tech pros in mind, we've put together a slideshow that defines 12 of the most important terms related to artificial intelligence and machine learning. These are the AI terms IT and business leaders are most likely to encounter, and understanding them can go a long way toward building a foundational understanding of this burgeoning area of technology.
What is artificial intelligence? In the simplest terms, an artificial intelligence is a machine that can think the way people think.
From the earliest days of computing, machines have been good at performing logical tasks like solving simple math problems. However, other tasks, like carrying on a conversation, identifying whether the animal in a picture is a dog or a cat, or recognizing whether a person is happy or sad, are much more difficult for computers.
The phrase "artificial intelligence" was first used in reference to these tasks that are easy for humans and difficult for machines at a computer science workshop in 1956. In the proposal for that workshop, the organizers set out to figure out "how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."
To this day, AI researchers continue to work on the areas outlined by these early AI pioneers. Fields like natural language processing, image recognition and machine learning have become subspecialties within the overall category of AI. Artificial intelligence research has also expanded to encompass other areas, such as social intelligence, creativity, autonomous vehicles, recommendation engines and much more.
Machine learning is a subset of the larger artificial intelligence category. Going back to the proposal from that first artificial intelligence workshop, machine learning is the part of artificial intelligence that focuses on giving computers the ability to "improve themselves" over time as a result of experience. An early computer scientist named Arthur Samuel explained that machine learning enables computers "to learn without being explicitly programmed," and his machine learning definition is frequently quoted.
Computer scientists have come up with a lot of different ways to help computers to learn. For example, they might use supervised or unsupervised learning algorithms to help machines get better at performing tasks over time. Today, we encounter machine learning every time we see a recommendation engine like the ones at Amazon or Netflix that suggest products we might like to buy or movies we might like to watch. Machine learning has also become an important part of big data analytics tools used by enterprises.
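To make "learning from experience" concrete, here is a minimal sketch of a toy recommendation engine of the "people who watched X also watched Y" variety. The viewing histories and the `recommend` function are invented for illustration; real systems at Amazon or Netflix are vastly more sophisticated.

```python
from collections import Counter

# Toy "recommendation engine": suggest the title most often watched
# alongside a given title, learned purely from viewing histories
# rather than from hand-written rules. All data here is made up.

histories = [
    {"The Matrix", "Blade Runner", "Alien"},
    {"The Matrix", "Blade Runner"},
    {"The Matrix", "Toy Story"},
    {"Toy Story", "Finding Nemo"},
]

def recommend(seed, histories):
    """Suggest the title most frequently co-watched with `seed`."""
    co_counts = Counter()
    for history in histories:
        if seed in history:
            co_counts.update(history - {seed})
    return co_counts.most_common(1)[0][0]

suggestion = recommend("The Matrix", histories)   # → "Blade Runner"
```

The more histories the system sees, the better its suggestions get, which is the essence of Samuel's "learn without being explicitly programmed."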
Just like machine learning is a subset of artificial intelligence, deep learning is a subset of machine learning. Going back to that workshop definition, deep learning is the part of machine learning that focuses on forming "abstractions and concepts." Deep learning systems ingest large quantities of data and generalize categories and features related to that data through supervised or unsupervised learning.
To understand how this works, consider the problem of teaching a computer to distinguish pictures of cats from pictures of dogs. Programmers could try to come up with a set of rules that explains exactly what a cat is and exactly what a dog is, but even though humans can easily distinguish a cat from a dog, it's really hard to explain that difference using algorithms that a computer can understand. However, a deep learning system can analyze a whole bunch of pictures of animals and come to its own generalizations about what distinguishes a cat from a dog. While the cat-dog example is pretty innocuous, this type of deep learning can also be very controversial, such as the deep learning system that learned to distinguish whether people were gay or straight by looking at pictures of their faces.
Deep learning systems rely on neural networks (which will be defined on a later slide) and GPUs. Short for "graphics processing unit," a GPU is a computer chip that is especially good at processing lots of data in parallel. GPUs were originally designed to handle video and graphics (hence the name), but they are also very good at big data processing and machine learning tasks.
Of all the terms in this slideshow, cognitive computing is the easiest to define. Essentially, it means the same thing as artificial intelligence — it just isn't as scary.
Most of us have seen so many apocalyptic science fiction movies that feature frightening uses of artificial intelligence that the term AI has acquired some negative connotations. To get around that bad impression, marketing teams sometimes use the phrase "cognitive computing" to describe products with AI capabilities. IBM, in particular, likes to use the phrase in reference to its Watson platform. The term cognitive computing doesn't really have an agreed-upon scientific definition; it's just a prettier way to say "artificial intelligence."
Neural networks go by lots of different names: artificial neural network, neural net, deep neural net and other similar terms. All those phrases describe the same thing — a computer system inspired by living brains.
At that 1956 workshop where scientists first discussed artificial intelligence at length, the attendees thought that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." In fact, they thought it would be so easy to create a machine model of a human brain that it would take 10 scientists just two months to accomplish it.
That timeline was more than a little unrealistic, considering that researchers are still working on creating computer brains that function like human brains. However, over the years, computer scientists have made a lot of progress toward that goal. Today, neural networks, using nodes that are roughly analogous to biological neurons, perform many tasks related to computer vision, speech recognition, board game strategy and more.
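As a rough illustration of what those nodes do, here is a toy sketch of a tiny two-layer network's forward pass. The weights are hand-picked and the function names are invented for illustration; a real network would learn its weights from data.

```python
import math

def sigmoid(x):
    # Squashes any number into the range (0, 1), loosely analogous
    # to a neuron "firing" more or less strongly.
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One node: a weighted sum of its inputs, passed through an activation."""
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

def forward(inputs):
    # Hidden layer: two nodes, each looking at all the inputs
    h1 = neuron(inputs, [0.5, -0.6], 0.1)
    h2 = neuron(inputs, [-0.3, 0.8], -0.2)
    # Output layer: one node combining the hidden activations
    return neuron([h1, h2], [1.2, -0.7], 0.05)

out = forward([1.0, 0.0])   # a single number between 0 and 1
```

Training a network means adjusting those weight numbers, typically by an algorithm called backpropagation, until the outputs match the desired answers.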
Supervised and Unsupervised Learning
Within machine learning and deep learning, there are several possible approaches to teaching computers. Two of the most common are supervised and unsupervised learning.
With supervised learning, the computer has a "teacher": one or more human beings who provide labeled examples. In the cat-dog identification example we have been using, supervised learning would require a person to label a bunch of pictures as either cats or dogs. The computer would then learn from those sample inputs and outputs.
In unsupervised learning, the computer doesn't have any sample data. Instead, the system is asked to find patterns in the data on its own. This technique is useful when looking for hidden insights in big data.
Other common types of machine learning include semi-supervised learning, where the system gets partially labeled sample data sets, and reinforcement learning, where the system gets rewards or punishments based on how well it completes assigned tasks.
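The contrast between the two main approaches can be sketched on toy one-dimensional data (animal weights standing in for pictures). The function names and numbers are invented for illustration: the supervised learner gets labels from a "teacher," while the unsupervised one must find the grouping on its own.

```python
def train_supervised(examples):
    """examples: list of (value, label) pairs. Learn one mean per label."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(model, value):
    # Pick the label whose learned mean is closest to the new value
    return min(model, key=lambda label: abs(value - model[label]))

def cluster_unsupervised(values, iters=10):
    """Two-cluster 1-D k-means: no labels, the groups emerge from the data."""
    c1, c2 = min(values), max(values)
    for _ in range(iters):
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return sorted([c1, c2])

# Supervised: a "teacher" labels each animal weight (kg) as cat or dog
model = train_supervised([(4, "cat"), (5, "cat"), (25, "dog"), (30, "dog")])
guess = classify(model, 6)          # → "cat"

# Unsupervised: same weights, no labels; the system finds two groups
centers = cluster_unsupervised([4, 5, 25, 30])
```

Note that the unsupervised version recovers two cluster centers without ever being told that "cat" and "dog" exist.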
The dictionary definition for an algorithm is "a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer." In layman's terms, when we are talking about algorithms, we are talking about processes, usually processes related to math.
When you were in third or fourth grade, you learned the algorithm for long division. You learned a process that involved dividing, multiplying, subtracting and bringing down the next digit.
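That familiar procedure can be written out as code. This sketch follows the same grade-school steps: see how many times the divisor fits, keep the remainder, and bring down the next digit.

```python
def long_division(dividend, divisor):
    """The grade-school long-division algorithm, one digit at a time."""
    quotient = ""
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)   # bring down the next digit
        quotient += str(remainder // divisor)     # how many times it fits
        remainder = remainder % divisor           # what's left over
    return int(quotient), remainder

q, r = long_division(1234, 7)   # → (176, 2), since 7 * 176 + 2 = 1234
```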
When we talk about algorithms for AI and machine learning, we're talking about the same kinds of processes, just a lot more complex. For example, Google uses an algorithm (a process based on rules) to determine which websites appear at the top of its search results. In machine learning, systems use many different types of algorithms in order to achieve desired results. Common examples include decision trees, clustering algorithms, classification algorithms and regression algorithms.
Also called a bot or an interactive agent, a chatbot is an artificial intelligence system that uses natural language processing capabilities to carry on a conversation. Today, the most recognizable examples of chatbots are Apple's Siri, Microsoft's Cortana and Amazon's Alexa. However, many different organizations are investing in chatbot technology, and many websites now feature chatbots that can answer technical support questions, help guide customers through a sales process or interact with customers in other ways.
Ideally, a chatbot would be able to answer customers' questions as well as a human being could, but so far, chatbot technology is falling short of that mark.
Data mining is all about looking for patterns in a set of data. It identifies correlations and trends that might otherwise go unnoticed. For example, if a data mining application were given Walmart's sales data, it might discover that people in the South prefer certain brands of chips or that during the month of October people will buy anything with "pumpkin spice" in the product name.
Data mining tools don't necessarily have to include machine learning or deep learning capabilities, but today's most advanced data mining software generally does have these features built in.
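A minimal version of pattern-hunting can be sketched as counting which products are bought together most often. The transactions below are invented for illustration; real data mining tools work at vastly larger scale and find much subtler correlations.

```python
from collections import Counter
from itertools import combinations

# Invented shopping baskets standing in for real retail sales data
transactions = [
    {"chips", "salsa", "soda"},
    {"chips", "salsa"},
    {"pumpkin spice latte", "chips"},
    {"chips", "salsa", "beer"},
]

# Count every pair of items that appears together in a basket
pair_counts = Counter()
for basket in transactions:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# The most frequent pair is a candidate "hidden pattern"
top_pair, top_count = pair_counts.most_common(1)[0]   # → chips + salsa, 3 times
```

Frequent-pair counting like this is the simplest case of what data mining literature calls association rule mining.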
Natural Language Processing
Natural language processing is an area of artificial intelligence related to understanding and generating language the way humans naturally use it. Computers have always been able to parse programming languages, but understanding everyday English or Chinese is much more complicated.
You have probably experienced the evolution of natural language processing with your own use of search engines. In the early days of the Internet, users typed Boolean operators to help them search for keywords. So if you were looking for a slideshow like this one, you might have typed "'artificial intelligence' OR 'machine learning' AND 'terms'" into the search engine. Today, search engines have much better natural language processing capabilities, so you can just type "What is artificial intelligence?" to get a definition, as well as links to resources.
Today, nearly all companies are running analytics on their big data. Predictive analytics is a particular type of analytics that seeks to tell users what's going to happen next. For example, you might feed a predictive analytics system 10 years of sales data from your company and then ask it to forecast your sales for next quarter given the current trends.
Today's predictive analytics systems usually incorporate data mining and machine learning capabilities, and can often be viewed as a step toward artificial intelligence. They rely on algorithms to help them process data and determine likely future events.
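A minimal sketch of the forecasting idea: fit a straight-line trend to past quarterly sales using least squares, then extrapolate one quarter ahead. The numbers are invented, and real predictive analytics systems use far richer models than a single straight line.

```python
def forecast_next(sales):
    """Fit y = slope*x + intercept to past periods, predict the next one."""
    n = len(sales)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(sales) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, sales)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope * n + intercept    # extrapolate one period ahead

quarterly_sales = [100, 110, 125, 135]   # past four quarters (invented)
prediction = forecast_next(quarterly_sales)   # → 147.5
```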
The Turing Test is named for its inventor, Alan Turing, an early computer scientist who theorized extensively about artificial intelligence. He proposed a simple test to determine whether or not a computer had achieved true artificial intelligence. A human interrogator would type questions, which would then be given to a computer system and a human being. The computer and the human being would then type responses. If the interrogator couldn't tell which response came from the computer and which came from the person, the system would, in Turing's opinion, have attained artificial intelligence.
In recent years, several AI systems have been said to have passed the Turing Test, but the results have always been somewhat controversial. Some people question whether the Turing Test is really a good way to evaluate artificial intelligence, but it remains influential in discussions about AI.

Cynthia Harvey is a freelance writer and editor based in the Detroit area. She has been covering the technology industry for more than fifteen years.