Machine learning and, particularly, deep learning are clearly among the hottest topics covered by tech publications. While the amount of hype is not insignificant, there are many good reasons why the space deserves substantial attention and coverage. To name a few:
The field of deep learning is evolving rapidly and in many dimensions. There are many new technologies, architectures, and algorithms being proposed, each offering unique value. However, I believe there are three main macro trends that will become true game changers in ML in the years to come:
Emergence of unsupervised learning. The first and most important macro trend in ML/DL is a gradual shift from the supervised to the unsupervised learning paradigm.
The great majority of legacy ML/DL implementations are supervised learners. In other words, they are only useful if they are trained on large amounts of labeled training data. While supervised learners have served us well, gathering and labeling large datasets is time consuming, expensive, and error prone, and these challenges become far more pronounced as datasets grow. Unsupervised learners, on the other hand, come with a huge advantage: they don’t require large labeled training datasets, and they learn as they go. This explains why much of the advanced research in ML has to do with unsupervised learning.
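The contrast can be sketched with a tiny unsupervised learner. The k-means clustering run below (a standard technique, not one named in this article) is handed unlabeled one-dimensional data and discovers the hidden group structure on its own; the data and cluster count are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unlabeled data: two hidden groups (means 0 and 5), but no labels are
# ever shown to the learner.
data = np.concatenate([rng.normal(0.0, 0.5, 100), rng.normal(5.0, 0.5, 100)])

# Plain k-means with k=2: alternate between assigning points to their
# nearest center and moving each center to the mean of its points.
centers = np.array([data.min(), data.max()])  # rough initial guesses
for _ in range(20):
    labels = np.abs(data[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([data[labels == k].mean() for k in range(2)])

print(np.sort(centers).round(1))  # centers land near the two hidden means
```

No labeling effort was needed: the structure was recovered from the raw data alone, which is the appeal the paragraph above describes.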
Generative adversarial networks (GANs). The prerequisite for learning the fundamentals of GANs is to understand the difference between generative and discriminative models. Discriminative models are those that are trained using labeled historical data and use their accumulated knowledge to infer, predict, or categorize.
Consider an image recognition model that can identify the make and model of various cars. Such models are typically trained on a set of pre-identified car images and learn to associate various features (such as size, height, dimensions, and ratios) with a specific make and model. Once trained, the model can analyze new, incoming, unlabeled images and associate them with a specific make and model.
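As a minimal sketch of this discriminative setup, here is a logistic-regression classifier trained on hypothetical (height, length/width ratio) features for two car classes. The classes, feature values, and thresholds are all made up for illustration; a real image recognizer would of course be a deep network over pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical labeled training data: each row is (height_m, length/width
# ratio), with label 0 = sports car, 1 = SUV. Numbers are illustrative.
sports = rng.normal([1.2, 2.4], 0.05, (50, 2))
suvs = rng.normal([1.8, 2.0], 0.05, (50, 2))
X = np.vstack([sports, suvs])
y = np.array([0] * 50 + [1] * 50)

# Discriminative model: logistic regression mapping features directly to
# a label, trained by gradient descent on the cross-entropy loss.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(label = SUV)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

# Once trained, the model classifies a new, unlabeled measurement.
new_car = np.array([1.75, 2.05])             # tall, stubby: SUV-like
pred = 1.0 / (1.0 + np.exp(-(new_car @ w + b)))
print("SUV" if pred > 0.5 else "sports car")
```

The model never synthesizes anything; it only draws a boundary between categories it was shown, which is exactly what distinguishes it from the generative models discussed next.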
Generative models, on the other hand, work differently: they are tasked to synthesize or generate new outcomes based on insights accumulated during training. In the context of cars, imagine a generative model tasked to create a brand new car concept after being trained on an unlabeled dataset (images of various cars that are not identified). The generative model uses the training images to learn the distinct characteristics of a car category (such as sports cars, SUVs, and sedans) and uses this insight to come up with a new car concept that shares the features of that category. For instance, a well-trained generative model will not propose a new truck concept with a front end that resembles a sports car.
So what are Generative Adversarial Networks (GANs) and how do they fit in the big picture? GANs are not really a new model category; they are simply an extremely clever and effective way of training a generative model. This approach also reduces the need for large labeled training datasets.
GANs are typically constructed from two neural networks that act as adversaries. One network (the generator) produces fake samples that closely resemble valid samples. The other network (the discriminator) receives a stream of genuine training samples mixed with occasional fake samples from the generator and is tasked with telling them apart. Each network is trained based on the performance of its adversary, and the two keep getting better at fooling each other. The net result of this iterative process is that the model as a whole becomes better trained, and the beauty of it is that the improvement happens with minimal external intervention.
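A drastically simplified sketch of this adversarial loop, assuming a one-dimensional toy problem rather than images: the generator here is a single learnable shift applied to noise, the discriminator a two-parameter logistic classifier, and the gradients are written out by hand. Real GANs use deep networks on both sides, but the generator-vs-discriminator dynamic is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Valid samples come from N(4, 1); the generator starts far away at mu = 0
# and must learn, purely from the discriminator's feedback, to move there.
mu = 0.0          # generator parameter
w, b = 0.1, 0.0   # discriminator: D(x) = sigmoid(w*x + b)
lr, n = 0.05, 64

for _ in range(3000):
    real = rng.normal(4.0, 1.0, n)          # genuine samples
    fake = mu + rng.normal(0.0, 1.0, n)     # generator's forgeries

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    # (hand-derived gradients of the binary cross-entropy loss).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w -= lr * np.mean(-(1 - d_real) * real + d_fake * fake)
    b -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator update: shift mu so D(fake) moves toward 1, i.e. the
    # forgeries get better at fooling the discriminator.
    d_fake = sigmoid(w * fake + b)
    mu -= lr * np.mean(-(1 - d_fake) * w)

print(round(mu, 2))  # mu drifts from 0 toward the real mean of 4
```

Neither network is ever told what the real distribution looks like; the generator improves only because its adversary keeps raising the bar, which is the "minimal external intervention" the paragraph above highlights.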
Reinforcement learning. RL, in principle, is learning through experimentation and exploration. This is a departure from the supervised learning paradigm: the latter relies on known good training data, while an RL agent starts with little knowledge of “how the world works”. RL agents operate based on three fundamental elements, namely "States", "Actions", and "Rewards", and an example is the best way to understand their significance.
Let us assume that an online sweater merchant deploys an RL agent to persuade visitors to buy its products. Let us explore the meaning of States, Actions, and Rewards in this context. A unique State can be an instance where a potential Canadian visitor has spent two minutes exploring various colors of a sweater and has read two reviews of that product. Actions, on the other hand, are steps the merchant can take to persuade a potential customer to make a purchase (e.g. offering instant discounts, or showing photos of celebrities wearing similar sweaters). Applying an Action in a certain State results in a transition to a new State. After each transition, the agent is rewarded (or penalized) based on an increase (or decrease) in the probability of making a sale. The key point here is that the agent is initially clueless, but over time it learns to pick policies (sequences of Actions) that work optimally in a given State (demographics, circumstances, and preferences).
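One classic way to implement such an agent (a sketch, not the article's specific method) is tabular Q-learning. The miniature sweater-shop simulator below is entirely hypothetical: the States, Actions, and transition probabilities are invented for illustration, with a Reward paid only when a sale closes.

```python
import random

random.seed(0)

# Hypothetical States and Actions; "sale" is the terminal State.
states = ["browsing", "read_reviews", "cart", "sale"]
actions = ["discount", "celebrity_photo", "do_nothing"]

# Invented chance that each Action moves the visitor one State closer
# to a sale; the agent does NOT know these numbers.
advance_prob = {"discount": 0.6, "celebrity_photo": 0.3, "do_nothing": 0.0}

def step(state, action):
    """Apply an Action in a State; return (next_state, reward)."""
    i = states.index(state)
    nxt = states[i + 1] if random.random() < advance_prob[action] else state
    reward = 1.0 if nxt == "sale" else 0.0   # rewarded only on a sale
    return nxt, reward

# Tabular Q-learning: Q[s][a] estimates the long-run discounted Reward
# of taking Action a in State s and behaving well afterwards.
Q = {s: {a: 0.0 for a in actions} for s in states}
alpha, gamma, eps = 0.1, 0.9, 0.1

for episode in range(3000):
    s = "browsing"
    while s != "sale":
        # Epsilon-greedy: mostly exploit the best known Action, sometimes explore.
        a = random.choice(actions) if random.random() < eps else max(Q[s], key=Q[s].get)
        nxt, r = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[nxt].values()) - Q[s][a])
        s = nxt

# The initially clueless agent ends up preferring Actions that reliably
# move visitors toward a purchase.
policy = {s: max(Q[s], key=Q[s].get) for s in states[:-1]}
print(policy)
```

Note that the agent is never told which Action "works"; it discovers the policy purely from the Reward signal, mirroring the trial-and-error learning described above.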
RLs are tremendously important for two reasons. They have produced remarkable results in a wide variety of applications, such as robotics, advertising, and gaming. More importantly, RLs closely mimic the way the human brain evolves from infancy to adulthood.
This leap puts machine intelligence a step closer to human intelligence, enabling machines to apply soft skills such as feeling and intuition to learning.

Al Gharakhanian is the Managing Director of Cogneefy, a service organization that helps companies develop and deploy machine learning pipelines that result in significant operational efficiencies.