How We’ll Conduct Algorithmic Audits in the New Economy
Today’s CIOs traverse a minefield of risk, compliance, and cultural sensitivities when it comes to deploying algorithm-driven business processes.
Algorithms are the heartbeat of applications, but they may not be perceived as entirely benign by their intended beneficiaries.
Most educated people know that an algorithm is simply any stepwise computational procedure. Most computer programs are algorithms of one sort or another. Embedded in operational applications, algorithms make decisions, take actions, and deliver results continuously, reliably, and invisibly. But on the odd occasion that an algorithm stings -- encroaching on customers’ privacy, refusing them a home loan, or targeting them with a barrage of objectionable solicitations -- those stakeholders’ understandable reaction may be to swat back in anger, possibly with legal action.
Regulatory mandates are starting to require algorithm auditing
Today’s CIOs traverse a minefield of risk, compliance, and cultural sensitivities when it comes to deploying algorithm-driven business processes, especially those powered by artificial intelligence (AI), deep learning (DL), and machine learning (ML).
Many of these concerns revolve around the possibility that algorithmic processes can unwittingly inflict racial bias, privacy violations, and job-killing automation on society at large, or on vulnerable segments thereof. Surprisingly, some leading tech industry execs even regard algorithmic processes as a potential existential threat to humanity. Other observers see ample potential for algorithmic outcomes to grow increasingly absurd and counterproductive.
Lack of transparent accountability for algorithm-driven decision making tends to raise alarms among impacted parties. Many of the most complex algorithms are authored by an ever-changing, seemingly anonymous cavalcade of programmers over many years. Algorithms’ seeming anonymity -- coupled with their daunting size, complexity, and obscurity -- presents an apparently intractable problem: How can public and private institutions in a democratic society establish procedures for effective oversight of algorithmic decisions?
Much as complex bureaucracies tend to shield the instigators of unwise decisions, convoluted algorithms can obscure the specific factors that drove a specific piece of software to operate in a specific way under specific circumstances. In recent years, popular calls for auditing of enterprises’ algorithm-driven business processes have grown. Regulations such as the European Union (EU) General Data Protection Regulation (GDPR) may force your hand in this regard. GDPR restricts “automated individual decision-making” that “significantly affects” EU citizens.
Specifically, GDPR restricts any algorithmic approach that factors a wide range of personal data -- including behavior, location, movements, health, interests, preferences, economic status, and so on -- into automated decisions. The regulation requires that impacted individuals have the option to review the specific sequence of steps, variables, and data behind a particular algorithmic decision. Meeting that requirement, in turn, means keeping an audit log for review and providing auditing tools that support rollup of algorithmic decision factors.
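To make that concrete, here is a minimal sketch of what a per-decision audit record might look like. It uses only the Python standard library; the loan-approval scenario, field names, and file path are illustrative assumptions rather than a schema prescribed by GDPR.

```python
# A minimal sketch of a per-decision audit record kept for later review.
# The scenario, field names, and file path are illustrative, not GDPR-mandated.
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionAuditRecord:
    model_name: str       # which model produced the decision
    model_version: str    # exact version deployed at decision time
    inputs: dict          # the personal-data factors considered
    decision: str         # the automated outcome
    top_factors: list     # ranked factors behind the outcome, to support rollup
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record: DecisionAuditRecord, path: str = "decision_audit.jsonl") -> None:
    """Append the record to an append-only log that auditors can later review."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionAuditRecord(
    model_name="loan_approval",
    model_version="2.3.1",
    inputs={"income": 52000, "postcode": "EC1A", "employment_years": 4},
    decision="declined",
    top_factors=["income", "employment_years"],
))
```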
Considering how influential GDPR has been on other privacy-focused regulatory initiatives around the world, it wouldn’t be surprising to see these sorts of auditing requirements imposed before long on businesses operating in most industrialized nations.
For example, US federal lawmakers introduced the Algorithmic Accountability Act in 2019, which would require companies to survey and fix algorithms that result in discriminatory or unfair treatment.
Anticipating this trend by a decade, the US Federal Reserve’s SR 11-7 guidance on model risk management, issued in 2011, mandates that banking organizations conduct audits of ML and other statistical models in order to be alert to the possibility of financial loss due to algorithmic decisions. It also spells out the key aspects of an effective model risk management framework, including robust model development, implementation, and use; effective model validation; and sound governance, policies, and controls.
Even if your organization is not responding to any specific legal or regulatory requirement to root out bias, discrimination, and other fairness issues in its algorithms, doing so may be prudent from a public relations standpoint. If nothing else, it would signal enterprise commitment to ethical guidance that encompasses application development and machine learning DevOps practices.
But algorithms can be fearsomely complex entities to audit
CIOs need to get ahead of this trend by establishing internal practices focused on algorithm auditing, accounting, and transparency. Organizations in every industry should be prepared to respond to growing demands that they audit the complete set of business rules and AI/DL/ML models that their developers have encoded into any processes that impact customers, employees, and other stakeholders.
Of course, that can be a tall order to fill. For example, GDPR’s “right to explanation” requires a degree of algorithmic transparency that could be extremely difficult to ensure under many real-world circumstances. As noted above, algorithms’ daunting size, complexity, and obscurity present a thorny accountability problem, and the opacity is compounded by the fact that many algorithms -- be they machine learning models, convolutional neural networks, or whatever -- are authored by an ever-changing, seemingly anonymous cavalcade of programmers over many years.
Most organizations -- even the likes of Amazon, Google, and Facebook -- might find it difficult to keep track of all the variables encoded into their algorithmic business processes. What could prove even trickier is the requirement that they roll up these audits into plain-English narratives that explain to a customer, regulator, or jury why a particular algorithmic process took a specific action under real-world circumstances. Even if the entire fine-grained algorithmic audit trail somehow materializes, you would need to be a master storyteller to net it out in terms simple enough to satisfy all parties to the proceeding.
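To give a feel for what such a rollup might look like in the simplest possible case, here is a hedged sketch that turns precomputed per-factor contribution scores into a one-sentence explanation. It assumes the contributions were already produced upstream by whatever attribution method the team uses; the factor names, weights, and wording are purely illustrative.

```python
# A minimal sketch of "rolling up" fine-grained audit data into plain English.
# It assumes per-factor contribution scores were already computed upstream by
# whatever attribution method the team uses; names and weights are illustrative.
def narrate_decision(decision: str, contributions: dict, top_n: int = 3) -> str:
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    clauses = []
    for factor, weight in ranked[:top_n]:
        direction = "weighed in favor of" if weight > 0 else "weighed against"
        clauses.append(f"'{factor}' {direction} approval")
    return f"The application was {decision} because " + "; ".join(clauses) + "."

print(narrate_decision(
    decision="declined",
    contributions={"income": -0.42, "employment_years": -0.18, "postcode": 0.05},
))
# -> The application was declined because 'income' weighed against approval; ...
```

In practice, the hard part is producing contribution scores and wording that a regulator or jury would actually accept, which is exactly where the master storyteller comes in.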
Throwing more algorithm experts at the problem (even if there were enough of these unicorns to go around) wouldn’t necessarily lighten the burden of assessing algorithmic accountability. Explaining what goes on inside an algorithm is a complicated task even for the experts. These systems operate by analyzing millions of pieces of data, and though they work quite well, it’s difficult to determine exactly why they work so well. One can’t easily trace their precise path to a final answer.
Algorithmic auditing is not for the faint of heart, even among technical professionals who live and breathe this stuff. In many real-world distributed applications, algorithmic decision automation takes place across exceptionally complex environments. These may involve linked algorithmic processes executing on myriad runtime engines, streaming fabrics, database platforms, and middleware.
Most of the people you’ll need to explain this stuff to may not know a machine-learning algorithm from a hole in the ground. More often than we’d like to believe, there will be no single human expert -- or even (irony alert) algorithmic tool -- that can frame a specific decision-automation narrative in simple, but not simplistic, English. Even if you could replay automated decisions in every fine detail and with perfect narrative clarity, you may still be ill-equipped to assess whether the best algorithmic decision was made.
Given the unfathomable number, speed, and complexity of most algorithmic decisions, very few will, in practice, be submitted for post-mortem third-party reassessment. Only some extraordinary future circumstance -- such as a legal proceeding, contractual dispute, or showstopping technical glitch -- will compel impacted parties to revisit those automated decisions.
And there may even be fundamental technical constraints that prevent investigators from determining whether a particular algorithm made the best decision. A particular deployed instance of an algorithm may have been unable to consider all relevant factors at decision time due to lack of sufficient short-term, working, and episodic memory.
Establishing a standard approach to algorithmic auditing
CIOs should recognize that they don’t need to go it alone on algorithm accounting. Enterprises should be able to call on independent third-party algorithm auditors. Auditors may be called on to review algorithms prior to deployment as part of the DevOps process, or post-deployment in response to unexpected legal, regulatory, and other challenges.
Some specialized consultancies offer algorithm auditing services to private and public sector clients. These include:
BNH.ai: This firm describes itself as a “boutique law firm that leverages world-class legal and technical expertise to help our clients avoid, detect, and respond to the liabilities of AI and analytics.” It provides enterprise-wide assessments of AI liabilities and model governance practices; AI incident detection and response; model- and project-specific risk certifications; and regulatory and compliance guidance. It also trains clients’ technical, legal, and risk personnel to perform algorithm audits.
O’Neil Risk Consulting and Algorithmic Auditing: ORCAA describes itself as a “consultancy that helps companies and organizations manage and audit algorithmic risks.” It works with clients to audit the use of a particular algorithm in context, identifying issues of fairness, bias, and discrimination and recommending steps for remediation. It helps clients institute “early warning systems” that flag when a problematic algorithm (ethical, legal, reputational, or otherwise) is in development or in production, so that the matter can be escalated to the relevant parties for remediation. It serves as an expert witness to assist public agencies and law firms in legal actions related to algorithmic discrimination and harm. It helps organizations develop strategies and processes to operationalize fairness as they develop and/or incorporate algorithmic tools. It works with regulators to translate fairness laws and rules into specific standards for algorithm builders. And it trains client personnel on algorithm auditing.
Currently, there are few hard-and-fast standards in algorithm auditing. What gets included in an audit and how the auditing process is conducted are more or less defined by every enterprise that undertakes it, or by the specific consultancy engaged to conduct it. Looking ahead to possible future standards in algorithm auditing, Google Research and OpenAI teamed with a wide range of universities and research institutes in 2020 to publish a research study that recommends third-party auditing of AI systems. The paper also recommends that enterprises:
Develop audit trail requirements for “safety-critical applications” of AI systems;
Conduct regular audits and risk assessments associated with the AI-based algorithmic systems that they develop and manage (a minimal check of this kind is sketched after this list);
Institute bias and safety bounties to strengthen incentives and processes for auditing and remediating issues with AI systems;
Share audit logs and other information about incidents involving AI systems through collaborative processes with peers;
Share best practices and tools for algorithm auditing and risk assessment; and
Conduct research into the interpretability and transparency of AI systems to support more efficient and effective auditing and risk assessment.
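As one concrete illustration of the regular audits and risk assessments recommended above, the sketch below computes a simple demographic-parity gap between two groups of scored applicants and flags it for review when it exceeds a threshold. The sample data, group labels, and 10-percentage-point threshold are assumptions for illustration, not standards drawn from the paper.

```python
# A minimal sketch of one recurring audit check: the demographic-parity gap,
# i.e. the difference in favorable-outcome rates between two groups.
# The data, group labels, and the 0.10 threshold are illustrative assumptions.
def favorable_rate(decisions: list, groups: list, target_group: str) -> float:
    outcomes = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(1 for d in outcomes if d == "approved") / max(len(outcomes), 1)

def demographic_parity_gap(decisions: list, groups: list, a: str, b: str) -> float:
    return abs(favorable_rate(decisions, groups, a) - favorable_rate(decisions, groups, b))

decisions = ["approved", "declined", "approved", "approved", "declined", "declined"]
groups    = ["A",        "A",        "A",        "B",        "B",        "B"]

gap = demographic_parity_gap(decisions, groups, "A", "B")
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # the escalation threshold is a policy decision, not a universal standard
    print("Gap exceeds the audit threshold -- escalate for review and remediation.")
```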
Other recent AI industry initiatives relevant to standardization of algorithm auditing include:
Google published an internal audit framework that is designed to help enterprise engineering teams audit AI systems for privacy, bias, and other ethical issues before deploying them.
AI researchers from Google, Mozilla, and the University of Washington published a paper that outlines improved processes for auditing and data management to ensure that ethical principles are built into DevOps workflows that deploy AI/DL/ML algorithms into applications.
The Partnership on AI published a database to document instances in which AI systems fail to live up to acceptable anti-bias, ethical, and other practices.
Recommendations
CIOs should explore how best to institute algorithmic auditing in their organizations’ DevOps practices.
Whether you choose to train and staff internal personnel to provide algorithmic auditing or engage an external consultancy in this regard, the following recommendations are important to heed:
Professional auditors should receive training and certification according to generally accepted curricula and standards.
Auditors should use robust, well-documented, and ethical best practices based on some professional consensus.
Auditors that take bribes, have conflicts of interest, and/or rubber-stamp algorithms in order to please clients should be forbidden from doing business.
Audit scopes should be clearly and comprehensively stated in order to make clear what aspects of the audited algorithms may have been excluded as well as why they were not addressed (e.g., to protect sensitive corporate intellectual property).
Algorithmic audits should be a continuing process that kicks in periodically, or any time a key model or its underlying data change (a minimal re-audit trigger is sketched after this list).
Audits should dovetail with the requisite remediation processes needed to correct any issues identified with the algorithms under scrutiny.
Last but not least, final algorithmic audit reports should be disclosed to the public in much the same way that publicly traded businesses share financial statements. Likewise, organizations should publish their algorithmic auditing practices just as they publish their privacy practices.
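To illustrate the continuing-process recommendation above, here is a minimal sketch that fingerprints the deployed model artifact and its training data and flags a new audit whenever either has changed since the last one. The file names are hypothetical, and a production pipeline would more likely hook this check into its model registry or CI/CD system.

```python
# A minimal sketch of a re-audit trigger: fingerprint the model artifact and its
# training data, compare against the fingerprints recorded at the last audit,
# and flag a new audit whenever either changes. File names are hypothetical.
import hashlib
import json
from pathlib import Path

def fingerprint(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def needs_reaudit(model_path: str, data_path: str,
                  record_path: str = "last_audit.json") -> bool:
    current = {"model": fingerprint(model_path), "data": fingerprint(data_path)}
    record = Path(record_path)
    if not record.exists():
        return True                                    # never audited: audit now
    return current != json.loads(record.read_text())   # re-audit on any change

def record_audit(model_path: str, data_path: str,
                 record_path: str = "last_audit.json") -> None:
    fingerprints = {"model": fingerprint(model_path), "data": fingerprint(data_path)}
    Path(record_path).write_text(json.dumps(fingerprints))

# Toy artifacts so the sketch runs standalone; in practice these would be the
# deployed model file and its training data.
Path("model.pkl").write_bytes(b"toy model weights")
Path("training_data.csv").write_text("income,employment_years,decision\n52000,4,declined\n")

if needs_reaudit("model.pkl", "training_data.csv"):
    print("Model or data changed since the last audit -- schedule a new algorithmic audit.")
    # ... run the audit here, then record the new fingerprints:
    record_audit("model.pkl", "training_data.csv")
```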
Whether or not these disclosure steps are required by legal or regulatory mandates is beside the point. Algorithm auditors should always consider the reputational impact on their companies, their clients, and themselves if they maintain anything less than the highest professional standards.
Full transparency of auditing practices is essential for maintaining stakeholder trust in your organization’s algorithmic business processes.