December 22, 2020
You have been tasked with using artificial intelligence to predict when you might lose a customer. You work with historical data and business stakeholders. You build a model that shows it could have predicted lost customers a few weeks before they churned. You show the algorithm off, prove it can work and demonstrate how much it will save the company in lost revenue. Leadership likes it. It’s time to implement it. Now what? This is where eight out of 10 AI projects fail, not because the tech doesn’t work but because moving from a one-and-done experiment to a fully operationalized enterprise solution requires different skills.
To get there, you’ll need a steady supply of regularly updated data that is clean, standardized and ready to leverage. You need enough computational power and human resources to keep the system running, and you need overhead to be as low as possible to justify the ROI. You need to test the quality of the output with rigor and ensure that when your model meets the real world, it still works. When (not if) it doesn’t, you need a continuous feedback loop and a method of repair so that an error doesn’t shut the whole system down.
Does this sound familiar? Manufacturing companies have mastered the science of bringing a prototype to production. They transform raw materials into standardized bits that they assemble into products. They analyze components for quality and release them to consumers as error-free finished products. They do this by adhering to many governing principles. Here are the four that matter most to AI:
1. Standardize your inputs. With AI as in manufacturing, you must start with standardized raw materials. A factory that turns raw materials into a finished product will produce an inconsistent output if the inputs are inconsistent. AI that leverages inconsistent or poor-quality data will produce inconsistent results. You must vet your data sources for reliability and consistency and clean, pre-process and transform your data to prepare it for your models. This process must be ongoing and regular.
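As a concrete illustration of that ongoing cleaning step, here is a minimal Python sketch of a standardization gate that rejects incomplete records and normalizes formats before data reaches a model. The field names and accepted date formats are illustrative, not taken from any specific system:

```python
from datetime import datetime

REQUIRED = ("customer_id", "email", "last_active")

def standardize(record):
    """Return a cleaned copy of one raw record, or None if it fails validation.
    Field names here are hypothetical examples."""
    if any(not record.get(k) for k in REQUIRED):
        return None  # reject incomplete input rather than pass it downstream
    clean = dict(record)
    clean["email"] = record["email"].strip().lower()
    # Normalize a few common date formats to ISO 8601
    for fmt in ("%Y-%m-%d", "%m/%d/%Y", "%d %b %Y"):
        try:
            clean["last_active"] = datetime.strptime(
                record["last_active"], fmt).date().isoformat()
            break
        except ValueError:
            continue
    else:
        return None  # unrecognized date format: reject it, don't guess
    return clean

raw = {"customer_id": "C042", "email": " Pat@Example.COM ", "last_active": "03/15/2020"}
print(standardize(raw))
```

The key design choice is that a record that can’t be standardized is rejected at the gate, so downstream models never see inconsistent raw material.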
2. Eliminate waste. Once you have scale, the smallest waste becomes a huge cost, which is why manufacturing companies are experts at keeping waste to a minimum. Every time your AI runs, it costs something. A predictive model may be fast enough on a laptop to miss the effects of tiny inefficiencies in the code but become a bloated data hog when it’s deployed across the firm. If you’ve built a lead-scoring model that tells salespeople how valuable a lead may be by assigning a score to each lead in their CRM, how often should it update? Should the system refresh the data every time one of your customers performs a relevant action? Or can you reduce the number of refreshes and save on data costs by updating lead scores right before your team determines its engagement strategy?
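The refresh-cadence trade-off above can be sketched in a few lines: cache each lead’s score and only rerun the model after a minimum interval, rather than on every CRM event. The class, the interval, and the toy scoring function are all assumptions for illustration:

```python
import time

class LeadScorer:
    """Illustrative lead scorer that caches scores and recomputes only
    after a minimum interval, instead of on every CRM event."""
    def __init__(self, min_refresh_seconds=3600):
        self.min_refresh = min_refresh_seconds
        self._cache = {}      # lead_id -> (score, timestamp)
        self.model_calls = 0  # track cost: how often the model actually runs

    def _run_model(self, lead):
        # Stand-in for the real predictive model
        self.model_calls += 1
        return len(lead.get("interactions", [])) * 10

    def score(self, lead_id, lead, now=None):
        now = time.time() if now is None else now
        cached = self._cache.get(lead_id)
        if cached and now - cached[1] < self.min_refresh:
            return cached[0]  # reuse the cached score; skip the expensive refresh
        s = self._run_model(lead)
        self._cache[lead_id] = (s, now)
        return s

scorer = LeadScorer(min_refresh_seconds=3600)
lead = {"interactions": ["email", "demo"]}
for t in (0, 60, 120):          # three customer events within an hour
    scorer.score("L1", lead, now=t)
print(scorer.model_calls)       # the model ran only once
```

Three events triggered only one model run; at enterprise scale, that difference is the waste the manufacturing mindset teaches you to hunt down.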
3. Embrace total quality management. Factories, no matter how well run, produce some amount of defective output. When small errors happen at scale, they can produce disastrous, expensive results. That’s why manufacturers design, staff and maintain each assembly line with the expectation of defects in mind. Every person and part is carefully placed with the objective of minimizing problematic output. Your AI will have defects. One day it will call a good lead a bad lead or predict a customer will churn the day before she signs a six-month contract extension. You must detect and repair those defects before errors compound. Design your system with total quality management in mind. Define your quality expectations so you can measure them. Determine where errors can occur and how you can configure your system to minimize them.
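Making quality expectations measurable can be as simple as comparing predictions to observed outcomes over a rolling window and flagging drift past a tolerance. This is a minimal sketch; the window size and error threshold are assumed values you would tune to your own quality definition:

```python
from collections import deque

class QualityMonitor:
    """Rolling-window defect monitor: compares predictions to observed
    outcomes and flags when the error rate drifts past a set tolerance."""
    def __init__(self, window=100, max_error_rate=0.10):
        self.results = deque(maxlen=window)  # True = correct prediction
        self.max_error_rate = max_error_rate

    def record(self, predicted, actual):
        self.results.append(predicted == actual)

    @property
    def error_rate(self):
        if not self.results:
            return 0.0
        return 1 - sum(self.results) / len(self.results)

    def healthy(self):
        return self.error_rate <= self.max_error_rate

m = QualityMonitor(window=10, max_error_rate=0.2)
for _ in range(8):
    m.record("churn", "churn")   # eight correct predictions
m.record("churn", "stay")        # two defects
m.record("churn", "stay")
print(m.error_rate, m.healthy())
```

The monitor sits at the edge of tolerance here; one more defect pushes it unhealthy, which is exactly the moment you want an alert rather than a surprise.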
4. Create feedback mechanisms. Do you remember the scene from Willy Wonka and the Chocolate Factory that begins with a demonstration of how Wonka’s factory determined which golden eggs were good and which were bad, and ends with the machinery deeming Veruca Salt a “bad egg”? Like Wonka’s fantasy factory, a real one that pulverizes gravel into powder has QC systems in place that ensure quality output. You’ll need a feedback mechanism that can test and detect your AI’s mistakes. Furthermore, you’ll need to design your system so that it’s repairable. If you don’t design it for serviceability, you won’t know when it breaks or how or where it has broken. The goal should be designing a system that can keep functioning after it detects an error. You don’t want to rely on the data scientists who built the model to keep remaking it every time it breaks. Data scientists make lousy and -- more importantly -- unhappy repair people.
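“Keep functioning after it detects an error” usually means wrapping the model in a boundary that logs the failure for the repair crew and returns a safe fallback instead of crashing the pipeline. A hedged sketch, where `predict_churn`, the fallback value, and the deliberately broken model are all hypothetical:

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("churn-model")

FALLBACK_SCORE = 0.5  # neutral default so downstream systems keep working

def predict_churn(model_fn, customer):
    """Serviceability sketch: if the model errors on one customer, log
    enough detail to diagnose it later and return a safe fallback
    instead of letting the whole pipeline die."""
    try:
        return model_fn(customer)
    except Exception:
        log.exception("model failed for customer %s", customer.get("id"))
        return FALLBACK_SCORE

def broken_model(customer):
    return 1 / 0  # simulate a defect in the deployed model

print(predict_churn(broken_model, {"id": "C7"}))  # pipeline survives: prints 0.5
```

The logged traceback is what makes the system serviceable: the repair work is queued for whoever owns operations, not bounced back to the data scientists who built the model.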
Another crucial piece of the feedback puzzle comes from end users, and here’s where AI has an advantage over manufacturing. Manufacturers rely on processing physical returns from retailers and customers. Developers can design AI systems with instant feedback mechanisms as part of the user experience. Amazon’s Alexa sometimes asks whether she’s given you a satisfactory answer. Sales teams with lead scoring embedded in their CRMs can click a thumbs-up or thumbs-down icon to give feedback on recommendations. This kind of feedback can be processed and monitored on an ongoing basis and used to guide improvements.
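Capturing that thumbs-up/thumbs-down signal and turning it into a review queue takes very little machinery. A minimal sketch, assuming a hypothetical in-memory log and illustrative recommendation IDs:

```python
from collections import Counter

class FeedbackLog:
    """Sketch of a thumbs-up/thumbs-down feedback stream for model
    recommendations; aggregate scores guide what to review first."""
    def __init__(self):
        self.votes = []  # (recommendation_id, +1 or -1)

    def thumbs_up(self, rec_id):
        self.votes.append((rec_id, 1))

    def thumbs_down(self, rec_id):
        self.votes.append((rec_id, -1))

    def worst(self, n=3):
        """Recommendations with the lowest net score -- review these first."""
        totals = Counter()
        for rec_id, vote in self.votes:
            totals[rec_id] += vote
        return sorted(totals, key=totals.get)[:n]

fb = FeedbackLog()
fb.thumbs_down("recA")
fb.thumbs_down("recA")
fb.thumbs_up("recB")
fb.thumbs_down("recC")
print(fb.worst(2))  # the recommendations users disliked most
```

Monitored on an ongoing basis, a queue like this closes the loop: user feedback flows straight into the prioritization of model improvements.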
Scaling AI is a big focus for many organizations today, especially as they increase their investments in automation in response to world events. There’s considerable value in standing on the shoulders of the giants in industry who have learned from several decades of experiments in manufacturing. The goal is to standardize the prototype-to-production journey with optimal processes and a continuous improvement mindset. As you navigate your journey toward operationalized AI, don’t rely on prototype designers to manufacture and distribute their inventions. Keep your inventors inventing. Develop the right skillsets and experience -- with guidance from the manufacturing floor -- to take those inventions, scale them, and ensure that their output is consistent, high quality and worth the investment.
Arun Shastri, PhD, leads the AI practice at ZS Associates. He is an expert in analytics organization design, data science and advanced analytics, analytics capability building and analytics process outsourcing.
PKS Prakash, PhD, is an associate principal at ZS Associates; he designs and implements advanced data science and AI techniques across multiple verticals including healthcare, hospitality, retail and manufacturing.