If you want to build trust in machine learning, try treating it like a human and asking it the same types of questions.
During the 2008 financial crisis, the banking industry realized that its machine learning algorithms were based on flawed assumptions. Financial regulators decided that additional controls were needed and introduced regulatory requirements for “model risk” management at banks and insurers.
Banks also had to prove that they understood the models they were using, so, regrettably but understandably, they deliberately limited the complexity of their technology, resorting to generalized linear models that offered simplicity and interpretability above all else.
In the past several years, machine learning and AI have made enormous strides in accuracy. Yet regulated industries (like banking) remain hesitant, often prioritizing regulatory compliance and algorithm interpretability over accuracy and efficiency. Some businesses even consider the technology untrustworthy, or dangerous.
To trust the recommendations AI and machine learning provide, businesses in every industry need to work to better understand the technology. Data scientists and PhDs shouldn’t be the only ones capable of clearly explaining machine learning models, because, as AI theorist Eliezer Yudkowsky states, “By far, the greatest danger of AI is that people conclude too early that they understand it.”
Trust requires a human approach
When data scientists are asked how a machine learning model makes decisions, they tend to rattle off complex mathematical equations, leaving laymen dumbfounded and the question of how one can trust the model unanswered. Wouldn’t it be more productive to approach machine learning decision-making in the same way one would approach human decision-making? As Udacity co-founder Sebastian Thrun once said, “…artificial intelligence is almost a humanities discipline. It's really an attempt to understand human intelligence and human cognition.”
So, rather than using complex mathematical equations to determine how, say, a human loan officer makes their decisions, one would simply ask, "Which information on the loan application form is the most important to your decision?" Or, "What values indicate good or bad risks, and how did you decide to accept or reject some specific examples of loan applications?"
An equally human approach is possible in determining how algorithms make similar decisions. For instance, by using a machine learning technique called feature impact, one could determine that the revolving credit balance, the applicant’s income, and the purpose of the loan are the three most important pieces of information to the lending algorithm.
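As a rough sketch of the idea (not any vendor’s actual implementation), permutation importance is one common way to compute a feature-impact ranking: shuffle one feature at a time and measure how much the model’s accuracy degrades. The loan data below is synthetic and the feature names are illustrative.

```python
# Illustrative sketch: ranking loan-application features by impact
# using permutation importance on a synthetic dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
income = rng.normal(60_000, 15_000, n)
revolving_balance = rng.normal(8_000, 3_000, n)
loan_purpose = rng.integers(0, 4, n)  # encoded category
noise = rng.normal(0, 1, n)

# In this toy data, risk depends mostly on balance and income;
# loan purpose matters less.
risk = (revolving_balance / 4_000) - (income / 30_000) + 0.2 * loan_purpose + noise
y = (risk > risk.mean()).astype(int)
X = np.column_stack([revolving_balance, income, loan_purpose])
names = ["revolving_balance", "income", "loan_purpose"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"{names[i]}: {result.importances_mean[i]:.3f}")
```

The printed ranking is the “which information matters most to your decision?” answer one would ask of a human loan officer, expressed as numbers.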
By using a capability called reason codes, one could see the most important factors behind each individual applicant’s score, and by leveraging a technique called partial dependence, one could see that the algorithm scores higher-income loan applications as lower risk.
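Partial dependence can be sketched concretely: vary one input (here, income) across its range while averaging over the rest of the data, and watch how the model’s average prediction moves. Again, the data is synthetic and built so that higher income means lower default risk.

```python
# Illustrative sketch of partial dependence: average predicted risk
# as income varies, holding the rest of the data distribution fixed.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(1)
n = 1000
income = rng.normal(60_000, 15_000, n)
balance = rng.normal(8_000, 3_000, n)
X = np.column_stack([income, balance])
# Toy ground truth: higher income -> lower default risk.
y = ((balance / 3_000 - income / 15_000 + rng.normal(0, 1, n)) > -2).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
pd_result = partial_dependence(model, X, features=[0], kind="average")
avg = pd_result["average"][0]
print("avg predicted risk at lowest income :", avg[0])
print("avg predicted risk at highest income:", avg[-1])
```

Plotted as a curve, this is the chart a risk officer could hand to a regulator: as income rises, the model’s average risk estimate falls.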
The value of objectivity, scalability, and predictability
In addition to better understanding AI and machine learning by analyzing its decision making as a human would, trust can be obtained by recognizing the unique abilities the technology has to offer, including:
● Solving the problem of credibility and data outliers: Traditional statistical models typically require assumptions about how the data was created, the processes underlying that data, and the credibility of that data. Machine learning, however, removes these restrictive assumptions by using highly flexible algorithms that don’t give the data more credibility than it deserves.
● Supporting modern computers and massive data sets: Unlike manual processes, machine learning doesn’t assume that the world is full of straight lines. Instead, it adjusts equations automatically to pinpoint the best patterns and tests which algorithms and patterns work best against independent validation data (rather than against only the data it was trained on).
● Leveraging missing values to predict the future: Rather than requiring hours of data cleansing, advanced machine learning can build a blueprint that optimizes the data for that specific algorithm, automatically detecting missing values, determining which algorithms don’t work with missing values, finding the optimal value to substitute for missing values, and using the presence of missing values to predict different outcomes.
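The last point is easy to demonstrate. A minimal sketch, using scikit-learn rather than any specific commercial product: impute missing entries and also add an “is missing” indicator column, so the model can use the absence of a value itself as a predictor. In the synthetic data below, applicants who omit their income are deliberately made riskier.

```python
# Illustrative sketch: treating missing values as signal. The pipeline
# imputes missing entries AND adds an indicator column, so missingness
# itself becomes a feature the model can learn from.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
income = rng.normal(60_000, 15_000, n)
# Toy assumption: applicants who omit income are riskier.
missing_mask = rng.random(n) < 0.3
y = ((missing_mask.astype(float) + rng.normal(0, 0.5, n)) > 0.5).astype(int)
X = income.reshape(-1, 1).copy()
X[missing_mask, 0] = np.nan

model = make_pipeline(
    SimpleImputer(strategy="median", add_indicator=True),  # keeps missingness as a feature
    LogisticRegression(),
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```

Without the indicator column, the imputed rows would be indistinguishable from ordinary median-income applicants and the predictive signal in the blank field would be thrown away.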
Instead of doubting AI or machine learning recommendations, let’s work to better understand them by asking the same reasoning questions we’d ask a human. Let’s recognize the technology’s objective power in giving outliers no more credibility than they deserve, and its ability to scale flexibly to the massive amounts of data available today.
Perhaps most importantly, let’s acknowledge AI and machine learning’s capability to better predict future outcomes by leveraging absent information. Because while the technology is certainly powerful enough to warrant vigilance and formal regulation, consumers and businesses alike only stand to benefit if a proper understanding and level of trust can be established.
Colin Priest is the Director of Product Marketing for DataRobot, where he advises businesses on how to build business cases and successfully manage data science projects. Colin has held a number of CEO and general management roles, where he has championed data science initiatives in financial services, healthcare, security, oil and gas, government and marketing.