Navigate Turbulence with the Resilience of Responsible AI

Extraordinary economic conditions require brand-new analytic models, right? Not if existing predictive models are built with responsible AI. Here’s how to tell.

Guest Commentary

July 22, 2020

The COVID-19 pandemic has caused data scientists and business leaders alike to scramble for answers to urgent questions about the analytic models they rely on. Financial institutions, companies, and the customers they serve are all grappling with unprecedented conditions and a loss of control that may seem best remedied with completely new decision strategies. If your company is contemplating a rush to crank out brand-new analytic models to guide decisions in this extraordinary environment, wait a moment and look carefully at your existing models first.

Existing models that have been built responsibly -- incorporating artificial intelligence (AI) and machine learning (ML) techniques that are robust, explainable, ethical, and efficient -- have the resilience to be leveraged and trusted in today's turbulent environment. Here’s a checklist to help determine if your company’s models have what it takes. 

Robustness

In an age of cloud services and open source, there are still no “fast and easy” shortcuts to proper model development. AI models produced with the proper data and scientific rigor are robust, and capable of thriving in tough environments like the one we are experiencing now.

A robust AI development practice includes a well-defined development methodology; proper use of historical, training and testing data; a solid performance definition; careful model architecture selection; and processes for model stability testing, simulation and governance. Importantly, all these factors must be adhered to by the entire data science organization. 

Let me emphasize the importance of relevant data, particularly historical data. Data scientists need to assess, as much as possible, all the different customer behaviors that might be encountered in the future: suppressed incomes during a recession and hoarding behaviors during natural disasters, to name just two. Additionally, the models’ assumptions must be tested to make sure they can withstand wide shifts in the production environment.
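
To make this concrete, here is a minimal, purely illustrative sketch of assumption testing: a toy risk model is re-scored under a simulated recession-like shift, and a large unexplained swing in its output flags an assumption that needs review. The features, shift magnitudes, and model are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical training data: two behavioral features (income, monthly spend).
X_train = rng.normal(loc=[50_000, 2_000], scale=[15_000, 600], size=(10_000, 2))
y_train = (X_train[:, 1] / X_train[:, 0] > 0.05).astype(int)  # toy risk flag

model = LogisticRegression().fit(X_train, y_train)

# Simulate a recession-like shift: incomes suppressed 20%, spending down 30%.
X_stressed = X_train * np.array([0.8, 0.7])

baseline = model.predict_proba(X_train)[:, 1].mean()
stressed = model.predict_proba(X_stressed)[:, 1].mean()
print(f"Mean predicted risk: baseline={baseline:.3f}, stressed={stressed:.3f}")
# A large, unexplained swing here means the model's assumptions need review
# before it can be trusted in the shifted environment.
```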

Explainable AI

Neural networks can find complex nonlinear relationships in data, leading to strong predictive power, a key component of AI. But many organizations hesitate to deploy “black box” machine learning algorithms because, while their mathematical equations are often straightforward, deriving a human-understandable interpretation is often difficult. The result is that even ML models with superior business value may be inexplicable -- a quality incompatible with regulated industries -- and thus never deployed into production.

To overcome this challenge, companies can use a machine learning technique called interpretable latent features. It yields an explainable neural network architecture whose behavior can be easily understood by human analysts. Notably, as a key ingredient of responsible AI, model explainability should be the primary goal, followed by predictive power.
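
The details of interpretable latent features are beyond the scope of this article, but the underlying idea -- every hidden node should be explainable in terms of a small number of named inputs -- can be illustrated with a rough, post-hoc approximation. The sketch below trains a small network on synthetic data and reports each latent feature's strongest drivers; the feature names and data are hypothetical, and a production technique would constrain the architecture during training rather than merely inspecting it afterward.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
feature_names = ["utilization", "payment_ratio", "inquiries", "tenure"]  # hypothetical

# Synthetic data driven mostly by the first two features.
X = rng.normal(size=(5_000, 4))
y = (0.8 * X[:, 0] - 0.6 * X[:, 1] + 0.3 * rng.normal(size=5_000) > 0).astype(int)

net = MLPClassifier(hidden_layer_sizes=(3,), max_iter=1000, random_state=0).fit(X, y)

# For each latent feature (hidden node), list its top-2 input weights so an
# analyst can attach a human-readable meaning to it.
W = net.coefs_[0]  # shape: (n_inputs, n_hidden)
for j in range(W.shape[1]):
    top = np.argsort(np.abs(W[:, j]))[::-1][:2]
    drivers = ", ".join(f"{feature_names[i]} ({W[i, j]:+.2f})" for i in top)
    print(f"latent feature {j}: driven by {drivers}")
```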

Ethical AI

ML learns relationships in data to fit a particular objective function (or goal). It will often form proxies for inputs that were deliberately excluded, and these proxies can encode bias. From a data scientist’s point of view, ethical AI is achieved by taking precautions to expose what the underlying machine learning model has learned and to test whether it could impute bias.

These proxies can be activated more by one data class than another, causing the model to produce biased results. For example, if a model includes the brand and version of an individual’s mobile phone, that data can relate to the ability to afford an expensive phone -- a characteristic that can impute income and, in turn, introduce bias.

A rigorous development process, coupled with visibility into latent features, helps ensure that the analytic models your company uses function ethically. Latent features should continually be checked for bias in changing environments.
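
One simple, hypothetical form such a check can take: compare each latent feature's activations across a protected class (held out for offline bias testing only, never used as a model input) and flag any feature that activates differently for one group than another. The data and review threshold below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical latent-feature activations from a scoring model, plus a
# protected-class flag used ONLY for offline bias testing.
activations = rng.normal(size=(8_000, 3))
group = rng.integers(0, 2, size=8_000)

for j in range(activations.shape[1]):
    a = activations[group == 0, j]
    b = activations[group == 1, j]
    # Standardized mean difference: a basic screen for proxy activation.
    smd = (a.mean() - b.mean()) / activations[:, j].std()
    status = "REVIEW" if abs(smd) > 0.1 else "ok"
    print(f"latent feature {j}: SMD={smd:+.3f} [{status}]")
```

In production such a screen would run on live data at regular intervals, since a feature that is unbiased today can become a proxy as the environment shifts.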

Efficient AI

Efficient AI doesn’t refer to building a model quickly; it means building it right the first time. To be truly efficient, models must be designed from inception to run within an operational environment -- one that will change. These models are complicated and cannot be left to each data scientist’s artistic preferences. Rather, to achieve efficient AI, models must be built according to a company-wide model development standard, with shared code repositories, approved model architectures, sanctioned variables, and established standards for bias testing and model stability. This dramatically reduces errors in model development that would otherwise surface in production, cutting into anticipated business value and harming customers.
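
As a hypothetical illustration of one piece of such a standard, a simple automated gate -- the variable list and function below are invented -- can reject any model that uses variables outside the company's sanctioned list before it ever reaches production.

```python
# Hypothetical company-wide standard: models may only use sanctioned variables.
SANCTIONED_VARIABLES = {"utilization", "payment_ratio", "tenure", "inquiries"}

def validate_model_inputs(model_name: str, variables: list[str]) -> None:
    """Reject any model whose inputs fall outside the approved list."""
    unsanctioned = set(variables) - SANCTIONED_VARIABLES
    if unsanctioned:
        raise ValueError(
            f"{model_name} uses unsanctioned variables: {sorted(unsanctioned)}"
        )
    print(f"{model_name}: all {len(variables)} inputs are sanctioned")

validate_model_inputs("risk_model_v3", ["utilization", "tenure"])
```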

As we have seen with the COVID-19 pandemic, when conditions change we must know how the model responds, what it will be sensitive to, how to determine whether it is still unbiased and trustworthy, and whether the strategies that use it should change. Being efficient means having those answers codified through a model development governance blockchain that persists information about the model. This approach puts every development detail about the model at your fingertips -- which is exactly what you’ll need during a crisis.
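
This article doesn’t detail that governance blockchain, but the core idea -- an append-only, hash-chained ledger of development decisions -- can be sketched in a few lines. Every record, field, and event name below is hypothetical.

```python
import hashlib
import json
import time

def add_record(chain: list, record: dict) -> None:
    """Append a governance record, chained to the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev_hash": prev_hash, "timestamp": time.time(), **record}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)

chain = []
add_record(chain, {"model": "risk_model_v3", "event": "architecture approved",
                   "approver": "model_governance_board"})
add_record(chain, {"model": "risk_model_v3", "event": "bias test passed",
                   "detail": "all latent features within tolerance"})

# Tampering with any earlier record breaks every later hash, so the full
# development history remains verifiable during an audit or a crisis.
print(json.dumps(chain, indent=2))
```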

Altogether, achieving responsible AI isn’t easy, but in navigating unpredictable times, responsibly developed analytic models allow your company to adjust decisively, and with confidence.

Scott Zoldi is Chief Analytics Officer of FICO, a Silicon Valley software company. He has authored 110 patent applications, with 56 granted and 54 pending.

