Jessica Davis

7 Technologies You Need to Know for Artificial Intelligence

Artificial intelligence is actually a term that encompasses a host of technology and tools. Here's a closer look at some of the more important ones.

Artificial intelligence. Everybody wants it. Everybody knows they need to invest in pilots and initial projects. Yet getting those projects into production is hard, and most companies still haven't jumped in with both feet.

If you aren't hands-on with the projects yourself, you may have heard a lot of different terminology and wondered what it all means. Is AI the same as machine learning? Is machine learning the same as deep learning? Do you need them all? Sometimes the first step in understanding whether a technology is a fit for your organization's challenges and problems is understanding the basic terminology behind that technology.

Let's start with a basic definition of artificial intelligence. The term means a lot of things to a lot of different people, from robots coming to take your jobs to the digital assistants in your mobile phone and home -- Alexa, Siri, and the rest. But those who work with AI know that it is actually a term that encompasses a collection of technologies that include machine learning, natural language processing, computer vision, and more.

Artificial intelligence can also be divided into narrow AI and general AI. Narrow AI is the kind we run into today -- AI suited for a narrow task. This could include recommendation engines, navigation apps, or chatbots. These are AIs designed for specific tasks. Artificial general intelligence is about a machine performing any task that a human can perform, and this technology is still really aspirational.

With AI hype everywhere today, it's time to break down some of the more common terms and technologies that make up AI, along with a few of the bigger tools that make it easier to do. Take a look through the terms and technologies you need to know.

Image: besjunior -

Machine Learning

Machine learning may be the first step for many organizations adding AI-related technologies to their IT portfolios. It automates the process of creating algorithms by using data to "train" them rather than having human software developers write code. Basically, you show the algorithm examples, in the form of data. The famous example is the cat: what is a picture of a cat, and what is not a picture of a cat? Can you train a machine to recognize a cat by showing it examples of both cat and not-cat? By "looking" at all these examples, the machine learns to recognize the difference.
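The cat-versus-not-cat idea can be sketched in a few lines. This is a toy nearest-neighbor classifier, with made-up two-number "features" standing in for the pixels a real system would learn from -- a minimal sketch of learning from labeled examples, not a production approach:

```python
from math import dist

# Toy training data: each example is (features, label). The two feature
# values are invented for illustration; a real system would learn from
# image pixels.
training_examples = [
    ((0.9, 0.8), "cat"),
    ((0.8, 0.9), "cat"),
    ((0.1, 0.2), "not cat"),
    ((0.2, 0.1), "not cat"),
]

def classify(features):
    """Label a new example by its closest training example (1-nearest-neighbor)."""
    nearest = min(training_examples, key=lambda ex: dist(features, ex[0]))
    return nearest[1]

print(classify((0.85, 0.75)))  # close to the "cat" examples
```

The "training" here is just storing labeled examples; more capable algorithms generalize from them, but the principle -- learn from data instead of hand-written rules -- is the same.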

Organizations are getting more adept at machine learning, and there are plenty of use cases to consider. For instance, NewYork-Presbyterian Hospital was using machine learning for cybersecurity and then recognized that it could use the same techniques to help it fight the opioid crisis.

Deep Learning

What's the difference between machine learning and deep learning? Deep learning takes machine learning a few steps further by creating layers of machine learning beyond the first decision point. These hidden layers form a neural network and are meant to simulate the way the human brain operates. Deep learning works by taking the output of the first machine learning decision and making it the input for the next machine learning decision. Each of these is a layer.
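The output-becomes-input idea can be shown directly. Below is a minimal sketch of two stacked layers with hand-picked weights (real networks learn their weights and have many more of them):

```python
def relu(x):
    # A common activation function: pass positives through, zero out negatives.
    return max(0.0, x)

def layer(inputs, weights, bias):
    # One "decision": a weighted sum of the inputs passed through an activation.
    return relu(sum(w * x for w, x in zip(weights, inputs)) + bias)

# Layer 1 turns the raw inputs into an intermediate value...
hidden = layer([1.0, 2.0], weights=[0.5, -0.25], bias=0.1)

# ...and layer 2 takes that *output* as its *input* -- the stacking of
# decisions that makes a network "deep".
output = layer([hidden], weights=[2.0], bias=0.0)
print(hidden, output)
```

A real deep network repeats this pattern across many layers and many values per layer, but each layer still just transforms the previous layer's output.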

Python is the dominant language of deep learning and neural networks. If you are looking for more about deep learning and neural networks, check out our coverage here and here.

Natural Language Processing

Humans don't speak in zeros and ones, but there's a lot of benefit and productivity to be gained when machines are taught to understand human language. That's the goal of natural language processing (NLP). Early efforts include pieces of the digital assistants -- Alexa, Microsoft Cortana, Google Assistant, Siri, and a host of others now hitting the market. But it's tricky: language doesn't follow mathematical rules.

NLP is also essential when it comes to working with many types of unstructured data, such as the data in electronic health records (EHR), emails, text messages, transcripts, social media posts -- anything with a language component. It's through NLP that we can get to more advanced technologies such as sentiment analysis.
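To make sentiment analysis concrete, here is a deliberately crude word-counting scorer. The word lists are hand-picked for illustration; real NLP systems learn these associations from large amounts of labeled text:

```python
# Tiny hand-picked word lists -- an assumption for illustration only.
POSITIVE = {"great", "helpful", "fast"}
NEGATIVE = {"slow", "broken", "confusing"}

def sentiment(text):
    """Score text by counting positive vs. negative words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The new portal is great and fast"))
print(sentiment("Support was slow and the app is broken"))
```

Even this toy version shows why language is tricky: "not great" would score as positive, which is exactly the kind of nuance learned models are built to handle.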

Computer Vision

Computer vision is about recognizing images the way humans would -- not just as collections of pixels, but as what the pixels represent in the real world. One of the big areas of work in computer vision today is facial recognition. Cameras are everywhere, capturing images of people, and the computer vision systems behind them work to identify those people by matching faces against previously tagged images of particular people. This technology has grown more advanced and easier to implement over the last few years. For instance, Microsoft introduced Face API for Azure and AWS introduced Amazon Rekognition in 2017. More recently, both Microsoft and Amazon have called for governments to regulate facial recognition technology.
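The matching step in face recognition typically works on numeric "embeddings": each face image is reduced to a vector, and vectors are compared for similarity. The sketch below uses tiny made-up four-number embeddings and cosine similarity; real systems use hundreds of dimensions produced by a trained model:

```python
from math import sqrt

# Made-up embeddings for two known people -- real embeddings come from a
# trained neural network and are much larger.
known_faces = {
    "alice": [0.9, 0.1, 0.3, 0.7],
    "bob":   [0.2, 0.8, 0.6, 0.1],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify(embedding, threshold=0.9):
    """Return the best-matching known person, or 'unknown' below the threshold."""
    name, score = max(
        ((n, cosine_similarity(embedding, e)) for n, e in known_faces.items()),
        key=lambda pair: pair[1],
    )
    return name if score >= threshold else "unknown"

print(identify([0.88, 0.12, 0.28, 0.71]))  # close to alice's embedding
```

The threshold matters in practice: set it too low and strangers get matched to known people, which is one reason this technology draws regulatory attention.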

TensorFlow and Keras

TensorFlow is an open-source platform for machine learning that works with an ecosystem of tools, libraries, and community resources to help organizations build and deploy machine learning-powered applications more quickly and easily, with support for high-level APIs such as Keras. Keras is a high-level neural networks API, written in Python, that can run on top of TensorFlow, Microsoft's CNTK, or Theano. TensorFlow was initially created by Google Brain for internal use, then released as open source. Among the many places these technologies are being used is Monterey, California, to better understand sharks.
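To show what "high-level API" means in practice, here is a minimal Keras model definition, assuming TensorFlow 2.x is installed. Each line adds one layer; the 28x28 input and 10-class output are arbitrary choices for the sketch:

```python
import tensorflow as tf

# A small feedforward classifier: image -> flat vector -> hidden layer -> scores.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),                   # e.g. a 28x28 grayscale image
    tf.keras.layers.Flatten(),                        # image -> flat vector
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # one score per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

With the model compiled, training would be a single `model.fit(...)` call on labeled data -- the layering from the deep learning section, expressed in a few declarative lines.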

Jupyter Notebook

Named for the three core programming languages supported by Project Jupyter -- Julia, Python, and R -- this technology is a browser-based interactive environment that enables data scientists and machine learning developers to create and share documents containing live code, equations, visualizations, and narrative text. It can be used for data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and more.

Jessica Davis has spent a career covering the intersection of business and technology at titles including IDG's Infoworld, Ziff Davis Enterprise's eWeek and Channel Insider, and Penton Technology's MSPmentor. She's passionate about the practical use of business intelligence.

Copyright © 2020 UBM Electronics, A UBM company, All rights reserved. Privacy Policy | Terms of Service