Common AI Ethics Mistakes Companies Are Making

More organizations are embracing the concept of responsible AI, but faulty assumptions can impede success.

Lisa Morgan, Freelance Writer

March 19, 2021


Ethical AI. Responsible AI. Trustworthy AI. More companies are talking about AI ethics and its facets, but can they apply them? Some organizations have articulated responsible AI principles and values, but they're having trouble translating those into something that can be implemented. Other companies are further along because they started earlier, but some of them have faced considerable public backlash for mistakes that could have been avoided.

The reality is that most organizations don't intend to do unethical things with AI; they do them inadvertently. However, when something goes wrong, customers and the public care less about the company's intent than about what happened as a result of the company's actions or failure to act.

Following are a few reasons why companies are struggling to get responsible AI right.

They're focusing on algorithms

Business leaders have become concerned about algorithmic bias because they realize it's become a brand issue. However, responsible AI requires more.

"An AI product is never just an algorithm. It's a full end-to-end system and all the [related] business processes," said Steven Mills, managing director, partner and chief AI ethics officer at Boston Consulting Group (BCG). "You could go to great lengths to ensure that your algorithm is as bias-free as possible but you have to think about the whole end-to-end value chain from data acquisition to algorithms to how the output is being used within the business."

By narrowly focusing on algorithms, organizations miss a lot of sources of potential bias.
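To make the end-to-end point concrete, here is a minimal sketch (not from the article) of what measuring bias at more than one point in the pipeline can look like: the same selection-rate gap is computed on the training labels, on the model's predictions, and on the decisions the business ultimately acts on. All group names, data, and function names are hypothetical.

```python
# Hypothetical illustration: the same disparity metric applied at three
# stages of one pipeline, not just to the model's output.

def selection_rate(records, group):
    """Share of positive outcomes for one group."""
    outcomes = [r["outcome"] for r in records if r["group"] == group]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def disparity(records, group_a, group_b):
    """Absolute gap in selection rates between two groups."""
    return abs(selection_rate(records, group_a) - selection_rate(records, group_b))

# Hypothetical stages of one pipeline for the same applicants.
training_labels = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "B", "outcome": 1}, {"group": "B", "outcome": 0},
]
model_predictions = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
]
final_decisions = [  # after a manual override step downstream
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 1},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 0},
]

for name, stage in [("training data", training_labels),
                    ("model output", model_predictions),
                    ("business decision", final_decisions)]:
    print(f"{name}: selection-rate gap = {disparity(stage, 'A', 'B'):.2f}")
```

A model can look acceptable in isolation while the data feeding it, or the manual overrides applied after it, introduce the disparity -- which is the end-to-end view Mills describes.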

They're expecting too much from principles and values

More organizations have articulated responsible AI principles and values, but in some cases they're little more than marketing veneer. Principles and values reflect the belief system that underpins responsible AI. However, companies aren't necessarily backing up their proclamations with anything real.

"Part of the challenge lies in the way principles get articulated. They're not implementable," said Kjell Carlsson, principal analyst at Forrester Research, who covers data science, machine learning, AI, and advanced analytics. "They're written at such an aspirational level that they often don't have much to do with the topic at hand."


BCG calls the disconnect the "responsible AI gap" because its consultants run across the issue so frequently. To operationalize responsible AI, Mills recommends:

  • Having a responsible AI leader

  • Supplementing principles and values with training

  • Breaking principles and values down into actionable sub-items

  • Putting a governance structure in place

  • Doing responsible AI reviews of products to uncover and mitigate issues

  • Integrating technical tools and methods so outcomes can be measured (a minimal sketch follows this list)

  • Having a plan in place for a responsible AI lapse, including turning the system off, notifying customers, and providing transparency into what went wrong and what was done to rectify it
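On the technical-tools item above, one hedged illustration is a metric check wired into the team's existing release process: compute an outcome measure for each model version and flag it for responsible AI review when it crosses a policy threshold. The metric, threshold, and data shapes here are illustrative assumptions, not BCG's method.

```python
# Hypothetical illustration: a measurable fairness check added to an
# existing release step. The threshold is a made-up policy value.

MAX_SELECTION_RATE_GAP = 0.10  # hypothetical limit set by governance

def selection_rate_gap(decisions):
    """Gap in positive-decision rates across the groups in `decisions`,
    where each item is a (group, approved) pair."""
    rates = {}
    for group in {g for g, _ in decisions}:
        outcomes = [a for g, a in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

def release_gate(decisions):
    """Return True if the model's decisions pass the fairness check."""
    gap = selection_rate_gap(decisions)
    print(f"selection-rate gap: {gap:.2f} (limit {MAX_SELECTION_RATE_GAP})")
    return gap <= MAX_SELECTION_RATE_GAP

sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
if not release_gate(sample):
    print("Flag for responsible AI review before release.")
```

Because the check runs wherever the team already validates models, it turns a principle into something measured on every release rather than a separate, parallel process.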

They've created separate responsible AI processes

Ethical AI is sometimes treated as a separate function, much like privacy and cybersecurity. However, as those two functions have demonstrated, they can't be effective when they operate in a vacuum.

"[Organizations] put a set of parallel processes in place as sort of a responsible AI program. The challenge with that is adding a whole layer on top of what teams are already doing," said BCG's Mills. "Rather than creating a bunch of new stuff, inject it into your existing process so that we can keep the friction as low as possible."

That way, responsible AI becomes a natural part of a product development team's workflow, and there's far less resistance to what would otherwise be perceived as another risk or compliance function that just adds overhead. According to Mills, the companies realizing the greatest success are taking an integrated approach.

They've created a responsible AI board without a broader plan

Ethical AI boards are necessarily cross-functional groups because no one person, regardless of their expertise, can foresee the entire landscape of potential risks. Companies need to understand from legal, business, ethical, technological and other standpoints what could possibly go wrong and what the ramifications could be.

Be mindful of who is selected to serve on the board, however, because their political views, what their company does, or something else in their past could derail the endeavor. For example, Google dissolved its AI ethics board after one week because of complaints about one member's anti-LGBTQ views and the fact that another member was the CEO of a drone company whose AI was being used for military applications.

More fundamentally, these boards may be formed without an adequate understanding of what their role should be.


"You need to think about how to put reviews in place so that we can flag potential issues or potentially risky products," said BCG's Mills. "We may be doing things in the healthcare industry that are inherently riskier than advertising, so we need those processes in place to elevate certain things so the board can discuss them. Just putting a board in place doesn't help."

Companies should have a plan and strategy for how to implement responsible AI within the organization, because that's how they can effect the greatest amount of change as quickly as possible.

"I think people have a tendency to do point things that seem interesting like standing up a board, but they're not weaving it into a comprehensive strategy and approach," said Mills.

Bottom line

There's more to responsible AI than meets the eye, as evidenced by the relatively narrow approaches companies take. It's a comprehensive endeavor that requires planning, effective leadership, implementation, and evaluation, enabled by people, processes, and technology.


About the Author

Lisa Morgan

Freelance Writer

Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.
