AI Ethics Guidelines Every CIO Should Read

You don’t need to come up with an AI ethics framework out of thin air. Here are five of the best resources to get technology and ethics leaders started.

Guest Commentary

August 7, 2019

5 Min Read

Technology experts predict that adoption of artificial intelligence and machine learning will skyrocket over the next two years. These advanced technologies will spark unprecedented business gains, but along the way enterprise leaders will be called on to grapple quickly with a smorgasbord of new ethical dilemmas, from algorithmic bias and data privacy issues to public safety concerns raised by autonomous machines running on AI.

Because AI technology and use cases are changing so rapidly, chief information officers and other executives are going to find it difficult to keep ahead of these ethical concerns without a roadmap. To guide both deep thinking and rapid decision-making about emerging AI technologies, organizations should consider developing an internal AI ethics framework. 

The framework won’t be able to account for every situation an enterprise will encounter on its journey to increased AI adoption. But it can lay the groundwork for future executive discussions. With a framework in hand, executives can confidently chart a sensible path forward that aligns with the company’s culture, risk tolerance, and business objectives.

The good news is that CIOs and executives don’t need to come up with an AI ethics framework out of thin air. Many smart thinkers in the AI world have been mulling over ethics issues for some time and have published several foundational guidelines that an organization can use to draft a framework that makes sense for its business. Here are five of the best resources to get technology and ethics leaders started.

Future of Life Institute

Asilomar AI Principles

Developed in conjunction with the 2017 Asilomar conference, this list of principles has been widely cited as a reference point by the AI ethics frameworks and standards introduced since it was published. Signed by more than 1,200 AI and robotics researchers and over 2,500 other technical luminaries, including the likes of Stephen Hawking, Elon Musk, and Ray Kurzweil, it offers a simple list of foundational principles that should guide business leaders, governmental policymakers, and technologists as we move forward with AI advancement.

IAPP

Building Ethics Into Privacy Frameworks for Big Data and AI

Not a framework itself, per se, this handy document is nevertheless a must-read for enterprise executives trying to get their arms around AI ethics issues. It offers a concise explanation of the ethical concerns at play in applied uses of AI and big data, as well as the consequences of ignoring these issues. It then offers a condensed rundown of the tools available to organizations seeking not only to develop internal frameworks but also to operationalize data ethics policies. It guides AI ethics leaders in considering industry-specific concerns, organizational nuances, and even departmental differences in creating a framework that is as flexible as it is holistic.

According to the IAPP report: “Various data ethics frameworks should have common features to ensure a uniformly high ethical standard of data practices. However, these frameworks will be most effective if they are flexible enough to be tailored for each specific company and organization, adjusting for a company’s size, resources, subject matter area, and impact on data subjects.”

IEEE

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems

Since 2016, IEEE has been taking the lead on organizing discourse among technical thinkers, business leaders, and public policy experts about the ethical design of autonomous and intelligent systems. As part of this work, The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems published Ethically Aligned Design, a veritable bible for addressing “values and intentions as well as implementations” of these systems. This nearly 300-page digital volume offers a wealth of guidance to executives embarking on the journey of developing an internal framework.

In addition, The IEEE Global Initiative spearheads work on the IEEE P7000 series of standards working groups, which offer guidance on addressing ethical concerns in system design, data privacy, algorithmic bias, and other hot topics in AI ethics.

The Public Voice

Universal Guidelines for Artificial Intelligence

Introduced in October 2018, these guidelines were written to be “incorporated into ethical standards, adopted in national law and international agreements, and built into the design of systems.” It’s a human rights-driven document with an emphasis on transparency, fairness and freedom from bias, data accuracy and quality, and an obligation on governments to curtail secret profiling or scoring of citizens.

“Our concern is with those systems that impact the rights of people,” write its creators. “Above all else, these systems should do no harm.”

European Commission

Ethics Guidelines for Trustworthy AI

Drafted by an independent group of AI ethics advisors and revised in light of more than 500 comments received during a five-month feedback period, this is one of the most recent and comprehensive public frameworks on AI ethics to date. It is not an official policy document or regulation from the European Commission, but rather a set of recommendations meant to guide public discourse on what trustworthy AI looks like.

These guidelines are intended to help AI designers and users choose systems that are lawful, ethical, and robust, with seven key requirements at the heart of what it takes to create trustworthy AI:

  • Human agency and oversight: Including fundamental rights, human agency and human oversight

  • Technical robustness and safety: Including resilience to attack and security, fall-back plan and general safety, accuracy, reliability and reproducibility

  • Privacy and data governance: Including respect for privacy, quality and integrity of data, and access to data

  • Transparency: Including traceability, explainability and communication

  • Diversity, non-discrimination and fairness: Including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation

  • Societal and environmental wellbeing: Including sustainability and environmental friendliness, social impact, society and democracy

  • Accountability: Including auditability, minimization and reporting of negative impact, trade-offs and redress.


John McClurg is VP & Ambassador-At-Large at BlackBerry Cylance. He came to BlackBerry Cylance from Dell, where he served as CSO with responsibilities that included the strategic focus and tactical operations of Dell’s internal global security services, both physical and cyber. Before joining Dell, McClurg served at Honeywell International; Lucent Technologies/Bell Laboratories; and in the Federal Bureau of Investigation (FBI), where he held an assignment with the U.S. Department of Energy (DOE) as a Branch Chief charged with establishing a cyber-counterintelligence program within the DOE’s newly created Office of Counterintelligence.

 

