Artificial intelligence was far and away the core tech mania of 2018. Even if you’re not an analyst like me who focuses on this technology, it was hard to escape the AI buzz that’s pervading popular culture, media, politics, and, of course, personal technology such as Alexa.
With that in mind, here are my predictions for AI in 2019. Please pardon me in advance for the long, detailed list of trends and forecasts for this technology in the coming years. In looking back through my own coverage of AI over the past year, it became clear that only a multifaceted crystal ball can begin to sketch out where this technology is heading:
AI fakery will work its magic — benign and otherwise — more deeply into our lives: AI-generated audio has crossed the “uncanny valley,” becoming indistinguishable from what comes out of humans’ actual mouths, as Google demonstrated publicly this year with its new Duplex digital-assistant technology. Likewise, AI-generated “deepfake” video, audio, and robotics are rapidly eliminating any clues that you or I might latch on to in distinguishing them from the real thing.
In 2019, these and other “generative AI” technologies will improve even further. More significantly, they’ll be embedded in a growing range of AI products and services, and enabled through incorporation into developers’ AI DevOps toolchains. The advance of this technology will trigger more teeth-gnashing throughout global culture, inflame more political discussions, and give Hollywood’s science-fiction screenwriters more material to process in their imagination mills.
AI regulations are coming: Facial recognition is one of the most widely adopted applications of AI, and among the most controversial. As facial recognition becomes ubiquitous in smartphones, smart cameras, and online media, more regulations over its use are sure to come. In 2019, many countries are likely to regulate facial recognition, with a key focus on privacy and bias issues. Many regulations over facial recognition will focus on giving consumers the right to opt out of its uses, examine how it’s being used to target them, receive a full accounting of how their facial data is being managed, and request that it be permanently expunged from corporate databases.
Some of the new regulations may apply across the board to all applications of facial recognition, while others will be incrementally applied within the context of existing regulations governing law enforcement, healthcare, e-commerce, social media, autonomous vehicles, and other domains.
AI development frameworks are becoming interchangeable within an open industry ecosystem: The emergence of standard AI DevOps abstraction layers is enabling developers to build in whatever language they prefer and compile their work for optimized execution in any framework or pipeline, and on any target hardware, cloud, or server platform. The past few years have seen widespread adoption of higher-level AI APIs such as Keras, shared AI model representations such as ONNX, and cross-platform AI-model compilers such as NNVM and TensorRT. In 2019, adoption of these and other standard AI pipeline abstractions will expand, thereby enabling an any-to-any development ecosystem that reduces the potential for lock-in to any AI solution provider’s vertically integrated proprietary stack.
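The appeal of these shared abstractions is easiest to see in miniature. The toy sketch below (plain Python, with entirely made-up structures, not the actual ONNX format) captures the idea: a model described in a framework-neutral form can be executed by any backend that understands that form.

```python
# A framework-neutral "model" is just a data structure; two independent
# backends execute the same description. All names are illustrative,
# not a real interchange format.

MODEL = {
    "inputs": ["x"],
    "ops": [
        {"op": "mul", "arg": 2.0},   # y = 2x
        {"op": "add", "arg": 1.0},   # y = 2x + 1
    ],
}

def backend_a(model, x):
    """Eager interpreter backend: walks the op list directly."""
    y = x
    for node in model["ops"]:
        if node["op"] == "mul":
            y *= node["arg"]
        elif node["op"] == "add":
            y += node["arg"]
    return y

def backend_b(model, x):
    """'Compiling' backend: builds a closure chain once, then runs it."""
    steps = []
    for node in model["ops"]:
        arg = node["arg"]
        if node["op"] == "mul":
            steps.append(lambda v, a=arg: v * a)
        else:
            steps.append(lambda v, a=arg: v + a)
    for step in steps:
        x = step(x)
    return x

# Both backends agree on the same neutral description.
assert backend_a(MODEL, 3.0) == backend_b(MODEL, 3.0) == 7.0
```

Real exchange formats carry far richer operator sets and type information, but the any-to-any property is the same: author once, execute anywhere a conforming backend exists.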
Automated end-to-end AI DevOps pipelines will become standard practice: AI has become an industrialized process in many enterprises, placing a high priority on tools that can automate every process from data preparation to modeling, training, and serving. In 2019, AI tools will extend automation to tasks that have historically required expert human judgment, such as feature engineering, and will democratize access to these capabilities through single-click visual tooling, driven by declarative functional specifications, that lets subject-matter experts build, train, and deploy models themselves.
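Automated feature engineering, one of the judgment-heavy tasks mentioned above, can be sketched in miniature: generate candidate transforms of a raw column, score each against the target, and keep the winner. The candidate set and scoring rule below are illustrative choices, not any vendor’s actual method.

```python
# Toy automated feature engineering: try a few candidate transforms of
# a raw column, score each by absolute Pearson correlation with the
# target, and keep the strongest. Purely illustrative.

import math

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

CANDIDATES = {
    "identity": lambda x: x,
    "square":   lambda x: x * x,
    "log1p":    lambda x: math.log1p(x),
}

def best_feature(raw, target):
    scores = {
        name: abs(pearson([f(x) for x in raw], target))
        for name, f in CANDIDATES.items()
    }
    return max(scores, key=scores.get)

raw = [1.0, 2.0, 3.0, 4.0, 5.0]
target = [x * x for x in raw]    # the target is quadratic in the raw column
```

Here `best_feature(raw, target)` selects `"square"`, since that transform correlates perfectly with the quadratic target; production tools search far larger transform spaces with cross-validated scoring.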
AI is becoming an industrialized operational business function: AI’s industrialization has taken hold in enterprises everywhere through end-to-end toolchain automation. In 2019, we’ll see users implement — and AI workbench vendors differentiate through offerings — such industrial-grade functions as inline operational experimentation, automated model benchmarking, 24x7 A/B testing, continuous champion-challenger deployment, turbopowered ensembling, and lifecycle model governance.
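Continuous champion-challenger deployment, one of the industrial-grade functions listed above, reduces to a simple control loop: shadow-score the challenger on live traffic and promote it once it outperforms the champion over a window. A minimal sketch, with made-up model stubs:

```python
# Minimal continuous champion-challenger deployment: the champion
# serves predictions, the challenger is shadow-scored on the same
# traffic, and it is promoted once it wins over a window of requests.
# Models here are stand-in functions, purely illustrative.

class ChampionChallenger:
    def __init__(self, champion, challenger, window=100):
        self.champion, self.challenger = champion, challenger
        self.window = window
        self.results = []                 # (champion_hit, challenger_hit)

    def predict(self, x, truth):
        champ_pred = self.champion(x)     # served to the caller
        chall_pred = self.challenger(x)   # shadow-scored only
        self.results.append((champ_pred == truth, chall_pred == truth))
        if len(self.results) >= self.window:
            n = len(self.results)
            champ_acc = sum(hit for hit, _ in self.results) / n
            chall_acc = sum(hit for _, hit in self.results) / n
            if chall_acc > champ_acc:     # promote the challenger
                self.champion, self.challenger = self.challenger, self.champion
            self.results.clear()
        return champ_pred

# A weak champion (always predicts 0) vs. a perfect challenger.
router = ChampionChallenger(lambda x: 0, lambda x: x % 2, window=4)
for x in (0, 1, 2, 3):
    router.predict(x, truth=x % 2)
```

After the four-request window, the challenger (100% accurate) replaces the champion (50% accurate). Real systems add statistical significance tests and rollback, but the lifecycle pattern is this loop.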
Kubernetes-orchestrated containers are becoming integral to the AI pipeline: Many AI tool vendors now support building and deployment of containerized statistical models within cloud-native computing environments. By the end of 2019, most vendors in this fast-growing segment will support deployment of containerized AI models for orchestration across Kubernetes clusters in increasingly heterogeneous pipelines. As that trend intensifies throughout the year, most tool vendors will implement the emerging Kubeflow project to support framework-, platform-, and cloud-agnostic data-science DevOps workflows.
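For concreteness, a containerized model server deployed on Kubernetes might be described by a manifest along these lines; the image name, labels, and port are placeholders, not a real project:

```yaml
# Hypothetical Kubernetes Deployment for a containerized model server.
# Image, labels, and port are placeholders for illustration only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fraud-model-serving
spec:
  replicas: 3
  selector:
    matchLabels:
      app: fraud-model
  template:
    metadata:
      labels:
        app: fraud-model
    spec:
      containers:
      - name: model-server
        image: registry.example.com/fraud-model:1.4.2
        ports:
        - containerPort: 8501        # REST scoring endpoint
        resources:
          limits:
            nvidia.com/gpu: 1        # request a GPU for inference
```

Once the model is packaged this way, Kubernetes handles replication, rollout, and recovery like any other workload, which is what makes the orchestration layer attractive for AI serving.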
Dominant AI development frameworks will be re-engineered for superior cloud-to-edge performance: AI’s magic stems in part from being implemented in the fastest runtime engines available on every development, operations, and edge platform. In 2019, we expect that most cloud platform vendors will roll out versions of the principal AI frameworks that are engineered to accelerate all AI DevOps pipeline functions running on GPUs and other principal hardware accelerators in their cloud-to-edge environments.
Google will continue to drive data science industry toolchain evolution around its deepening TensorFlow stack: In 2018, AI developer adoption of Google’s open-source TensorFlow framework expanded, and the vendor made significant new investments both in developing the stack and in engaging the AI community in its evolution. TensorFlow is far and away the dominant AI development framework. In 2019, we predict that the TensorFlow stack will be submitted to an industry group to formalize its development and governance going forward. We also predict that wherever TensorFlow lands in the open-source project ecosystem, it will increasingly converge with the evolving Kubernetes containerization ecosystem, with much of the overlap occurring in AI DevOps-focused projects such as Kubeflow.
Most data scientists will buy certified high-performance AI algorithms, trained models, and training data from online marketplaces: AI initiatives move faster when developers can jumpstart them with the best algorithms, models, and data available for the intended application domains. In 2019, many vendors will launch such marketplaces for those reasons, and also because they offer a monetization opportunity, both for repurposing internally developed AI assets and for publishing equivalent assets developed by partners.
Most labeling of AI training data will be automated through on-demand cloud services: AI’s dominant training approach is still supervised learning, and that relies on the often labor-intensive, time-consuming process of data labeling by human annotators. In 2019, we expect that automated, on-demand training-data labeling services will become a standard component of all data-science DevOps pipeline tools.
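One way such a labeling service can automate annotation is weak supervision: several cheap heuristic “labeling functions” vote on each example, and anything they can’t label falls back to human annotators. The heuristics and label names below are invented for illustration:

```python
# Toy automated labeling in the weak-supervision style: heuristic
# "labeling functions" vote on each example, a majority label is
# assigned automatically, and unlabeled examples are routed to human
# annotators. All heuristics and labels are invented.

from collections import Counter

ABSTAIN = None

def lf_refund(text):
    return "billing" if "refund" in text.lower() else ABSTAIN

def lf_password(text):
    return "account" if "password" in text.lower() else ABSTAIN

def lf_crash(text):
    return "technical" if "crash" in text.lower() else ABSTAIN

LABELING_FUNCTIONS = (lf_refund, lf_password, lf_crash)

def auto_label(text):
    votes = [lf(text) for lf in LABELING_FUNCTIONS]
    votes = [v for v in votes if v is not ABSTAIN]
    if not votes:
        return "NEEDS_HUMAN"          # fall back to manual annotation
    return Counter(votes).most_common(1)[0][0]
```

The economics follow directly: heuristics label the easy bulk of the data for free, and paid human attention is spent only on the residue the heuristics abstain from.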
Reinforcement learning will become a mainstream AI methodology: Supervised learning is becoming just one of several approaches incorporated into standard data-science workflows. In 2019, the AI industry will begin to incorporate the most widely adopted reinforcement learning frameworks — such as Intel Coach and Ray RL — into their workbenches. In the coming decade, most AI DevOps workflows will seamlessly incorporate reinforcement learning alongside supervised and unsupervised learning to power more sophisticated embedded intelligence in production enterprise applications.
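At its core, reinforcement learning trains an agent by trial and error against a reward signal rather than labeled examples. The sketch below is generic tabular Q-learning on a toy corridor environment, not the API of Intel Coach or Ray; it shows the update rule that such frameworks industrialize.

```python
# Generic tabular Q-learning on a toy five-state corridor with a reward
# at the right end. A from-scratch sketch of the technique, not any
# framework's actual API.

import random
from collections import defaultdict

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                        # step left / step right

def step(state, action):
    nxt = min(max(state + action, 0), GOAL)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL       # next state, reward, done

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.3, seed=0):
    rng = random.Random(seed)
    q = defaultdict(float)                # (state, action) -> value
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < epsilon:    # explore
                a = rng.choice(ACTIONS)
            else:                         # exploit current estimates
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            nxt, r, done = step(s, a)
            best_next = max(q[(nxt, act)] for act in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = nxt
    return q

q = train()
# The learned greedy policy: which action each non-goal state prefers.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)]
```

After training, every state prefers stepping right toward the reward. Production frameworks replace the table with deep networks and add distributed rollouts, but the reward-driven update is the same.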
AI will accelerate the democratization of BI: AI is remaking the business intelligence market inside and out. Over the past few years, one of the core BI trends has been the convergence of the technology’s traditional focus on historical analytics with a new generation of AI-infused predictive analytics, search, and forecasting tools that allow any business user to do many things that used to require a trained data scientist. In 2019, more BI vendors will integrate a deep dose of AI to automate the distillation of predictive insights from complex data, while offering these sophisticated features in solutions that provide self-service simplicity, in-memory interactivity, and guided next-best-action prescriptions.
AI risk-mitigation controls will become standard patterns available in data science pipeline tools: AI is rife with risks, some of which stem from design limitations in a specific buildout of the technology, others from inadequate runtime governance over live AI apps, and still others from the technology’s inscrutable blackbox complexity. In 2019, more commercial AI development tools will incorporate standard workflows and templates for mitigating privacy encroachments, socioeconomic biases, adversarial vulnerabilities, interpretability deficiencies, and other risk factors that might otherwise crop up in their deliverable applications.
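A bias-mitigation template of the kind described here might, for example, gate deployment on a fairness metric. The sketch below computes the demographic parity gap (the spread in positive-prediction rates across groups) against a tolerance; the groups, predictions, and threshold are arbitrary illustrations, not a recommended standard.

```python
# Sketch of a risk-mitigation gate a pipeline template might run before
# deployment: compute the demographic parity gap across groups and fail
# the build if it exceeds a tolerance. All values are illustrative.

def positive_rate(preds):
    return sum(preds) / len(preds)

def demographic_parity_gap(preds_by_group):
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

def passes_bias_gate(preds_by_group, tolerance=0.1):
    return demographic_parity_gap(preds_by_group) <= tolerance

preds = {"group_a": [1, 1, 0, 1],    # 75% positive predictions
         "group_b": [0, 1, 0, 0]}    # 25% positive predictions
```

With a 50-point gap between groups, this model fails the gate; wiring such checks into the pipeline makes the risk review automatic rather than an afterthought.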
AI data-science team workbenches will ensure downstream reproducibility: Compliance, transparency, and other societal mandates will increasingly require reproducibility of AI-driven algorithmic results. To build reproducibility into their workflows, more data science teams will rely on workbenches that maintain trustworthy audit trails of the specific processes used to develop AI deliverables. In 2019, we expect more vendors of these platforms to deepen their ability to maintain a rich audit trail of the models, data, code, and other artifacts needed to establish downstream reproducibility of AI application lineage.
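The mechanics of such an audit trail can be as simple as content-hashing every artifact that went into a model run, so a later auditor can verify that nothing has drifted. A minimal stdlib sketch, not any vendor’s actual workbench API:

```python
# Sketch of a reproducibility audit trail: each artifact (code, data
# sample, model config) is content-hashed into a run record, and a
# later auditor can check the same artifacts against that record.

import hashlib
import json

def fingerprint(artifact):
    """Stable SHA-256 of any JSON-serializable artifact."""
    blob = json.dumps(artifact, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

def record_run(code, data_sample, config):
    return {
        "code": fingerprint(code),
        "data": fingerprint(data_sample),
        "config": fingerprint(config),
    }

def reproducible(old_record, code, data_sample, config):
    """True only if every artifact matches the original run record."""
    return old_record == record_run(code, data_sample, config)
```

Because the record stores hashes rather than the artifacts themselves, it stays small enough to keep for every model version, which is exactly what downstream lineage audits require.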
AI benchmarking frameworks will crystallize and gain adoption: Evaluating the comparative performance of different stacks of AI software, hardware, and cloud services is exceptionally difficult. As the AI arena shifts toward workload-optimized architectures, there’s a growing need for standard benchmarking frameworks to help practitioners assess which target stacks are best suited for training, inferencing, and other workloads. In the past year, the AI industry has moved rapidly to develop open, transparent, and vendor-agnostic frameworks for benchmarking and evaluating the comparative performance of different hardware/software stacks in the running of diverse workloads. The most promising of these initiatives is MLPerf, as judged by the degree of industry participation, the breadth of its mission, the range of target hardware/software environments it includes in its scope, and its progress in putting together useful frameworks for benchmarking today’s top AI challenges.
In 2019, we expect to see MLPerf’s benchmarking suite incorporated into the tooling offered by many AI hardware, software, and cloud services providers. Many of these vendors will start to publish MLPerf benchmarks as a standard practice for new product releases.
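Stripped to its essentials, a benchmarking harness in this spirit times each candidate stack on the same workload and reports throughput. The “stacks” below are plain Python functions standing in for real hardware/software targets; this illustrates the harness pattern, not MLPerf itself.

```python
# Tiny benchmarking harness sketch: run each (stack, workload) pair for
# a fixed number of iterations and report wall-clock throughput. The
# "stacks" are stand-in Python functions, purely for illustration.

import time

def benchmark(stacks, workload_input, iterations=1000):
    results = {}
    for name, run in stacks.items():
        start = time.perf_counter()
        for _ in range(iterations):
            run(workload_input)
        elapsed = time.perf_counter() - start
        results[name] = iterations / elapsed    # runs per second
    return results

# Two stand-in "stacks" computing the same dot product differently.
def stack_loop(v):
    total = 0.0
    for x in v:
        total += x * x
    return total

def stack_builtin(v):
    return sum(x * x for x in v)

scores = benchmark({"loop": stack_loop, "builtin": stack_builtin},
                   list(range(100)))
```

Real suites add accuracy targets, result validation, and audited submission rules on top of this timing loop, which is where most of the standardization effort goes.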
GPUs will expand their footprint in immersive AI applications: Graphics processing units have been at the heart of the artificial intelligence revolution for many years. In 2019, GPUs’ core image-processing acceleration prowess will become even more central to their AI applications, due to the adoption of intelligent mixed reality, smart-camera, gaming, and other devices and apps that rely on high-fidelity, real-time, and immersive image processing. NVIDIA’s recently announced Turing GPU will become the preferred hardware-accelerator technology due to its real-time raytracing, resolution scaling, variable rate shading, object detection, and other image-processing features.
AI systems-on-chip will dominate the hardware-accelerator wares: AI hardware accelerators are beginning to permeate every tier in distributed cloud-to-edge, high-performance computing, hyperconverged server, and cloud-storage architectures. In 2019, a steady stream of fresh hardware innovations — in GPUs, tensorcore processing units, field programmable gate arrays, and so on — will come to market to support more rapid, efficient, and accurate AI processing in these and other application domains. Throughout the year and beyond, hardware vendors will combine a growing range of AI-accelerator technologies in system-on-chip deployments for highly specific embedded-AI workloads in such domains as intelligent robotics and mobile apps.
Client-side training will move toward the AI mainstream: Client-side training has heretofore been a niche approach for optimizing AI applications. Traditionally, client-side training hasn’t been able to produce AI models that are as accurate as those trained in centralized data environments. But client-side training is well suited for the new world of edge applications, because it can continually update the AI model on each device based on the specific data being sensed by that node. In 2019, client-side training will become a linchpin of AI-model learning within edge, mobile, and robotic process automation applications. Already, it has become standard for device-side AI training in many iOS applications, where it ensures that Face ID recognizes users consistently, groups people’s pictures accurately in the Photos app, tunes the iPhone’s predictive keyboard, and helps Apple Watch learn your habitual patterns automatically from activity data.
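The privacy-preserving variant of client-side training, federated learning, keeps raw data on each device and aggregates only model parameters centrally. A toy sketch with a one-parameter linear model; this is a generic illustration of the averaging idea, not Apple’s actual implementation:

```python
# Sketch of federated averaging: each device fits a tiny model on its
# local data, and only the learned weights (never the raw data) are
# averaged centrally. A generic, deliberately minimal illustration.

def fit_slope(points):
    """Least-squares slope through the origin for one device's data."""
    num = sum(x * y for x, y in points)
    den = sum(x * x for x, _ in points)
    return num / den

def federated_average(per_device_data):
    weights = [fit_slope(points) for points in per_device_data]
    return sum(weights) / len(weights)    # simple unweighted FedAvg

devices = [
    [(1.0, 2.0), (2.0, 4.0)],    # this device's data has slope 2
    [(1.0, 4.0), (3.0, 12.0)],   # this device's data has slope 4
]
```

The central server sees only the two slopes, never the underlying points; production systems add weighting by sample count, secure aggregation, and many training rounds.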
AI is driving closed-loop IT operations management: Over the past several years, AI has become integral to IT operations management, owing to the ability of embedded machine learning tools to automate and accelerate many tasks more scalably, predictably, rapidly, and efficiently than manual methods alone. In 2019, we’ll see this trend, which some call “AIOps,” permeate the hyperconverged infrastructure solutions upon which carriers and enterprises are building their high-performance computing environments. Over the coming years, many data scientists will move into teams architecting the AI-driven management backplane that keeps data center storage, compute, and other hardware and networking resources self-optimizing 24x7.
Blockchain will feel its way into the AI ecosystem: The AI community has begun to explore various uses for blockchain. The past year has seen growth in the range of startups providing platforms that serve as AI compute-brokering backbones, decentralized training-data exchanges, middleware buses, audit logs, and data lakes. None of these deployments is yet mature or widely adopted within the AI developer ecosystem. In 2019, it’s likely that the principal public cloud solution providers — especially AWS, Microsoft Azure, Google Cloud Platform, and IBM Cloud — will acquire some of the more promising startups and add them to their respective AI toolchain portfolios to address specific ecosystem requirements that might benefit from special-purpose distributed, trusted hyperledgers.
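The audit-log use case, for instance, needs little more than an append-only, hash-chained ledger. The pure-Python sketch below shows the tamper-evidence property those platforms trade on, without any actual blockchain platform:

```python
# Minimal hash-chained audit log: each entry embeds the hash of its
# predecessor, so any tampering breaks verification downstream. A
# pure-Python sketch; no real blockchain platform is involved.

import hashlib
import json

def _digest(body):
    return hashlib.sha256(
        json.dumps(body, sort_keys=True).encode("utf-8")
    ).hexdigest()

def append_entry(chain, payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev_hash, "payload": payload}
    chain.append({**body, "hash": _digest(body)})
    return chain

def verify(chain):
    prev = "0" * 64
    for entry in chain:
        body = {"prev": entry["prev"], "payload": entry["payload"]}
        if entry["prev"] != prev or entry["hash"] != _digest(body):
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"event": "model_trained", "model": "v1"})
append_entry(log, {"event": "model_deployed", "model": "v1"})
```

Editing any earlier entry invalidates every hash after it, which is the property an AI audit trail actually needs; full blockchain platforms add distributed consensus on top.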
Did I leave anything out? Of course. This list doesn’t even begin to discuss the trends for AI-infused application segments such as chatbots, smart cameras, autonomous vehicles, and so on.
AI is such a fertile ground for innovation in every sector of our lives that it’s futile to try to predict every possible evolution path it might take. Unlike just a few years ago, AI is ubiquitous, and that pervasiveness will only accelerate.