As artificial intelligence becomes more commonplace in the enterprise, a growing number of IT leaders are concerned about its ethical implications. In fact, a 2019 Vanson Bourne report sponsored by SnapLogic found that 94% of the 1,000 U.S. and UK IT decision makers surveyed believe people should be paying more attention to corporate responsibility and ethics in AI development.
You don't have to look hard to find reasons for their concern. Several prominent tech companies have become embroiled in scandals after AIs they created failed to behave as intended. In 2015, for example, Google drew criticism after its image recognition software tagged photos of Black people as "gorillas." Although the tech giant promised to fix the problem, three years later the only "fix" in place was removing the software's ability to identify gorillas at all. Microsoft suffered a similar black eye when its AI-based Twitter bot Tay began posting racist messages after just a few hours of public use.
Yesterday, San Francisco became the first major U.S. city to ban most uses of facial recognition software by city agencies in part because of potential bias in the technology. Several smaller municipalities have passed or are considering similar bans.
While these AI missteps were widely reported, many people worry that more pervasive -- and insidious -- AI offenses may be occurring behind the scenes without public knowledge. Customers might never know whether they were denied a loan or flagged for fraud because of an ethically dubious AI algorithm.
Organizations ranging from the AI Now Institute at New York University to the Southern Baptist Convention have called on companies using AI to become more transparent and to commit to ethical principles. In response, some enterprises, including Google and Microsoft, have published internal guidelines governing their AI use.
However, many people feel that this doesn't go far enough. Instead, they want government bodies to get involved and issue regulations. And it's not just consumers who feel that way: in the Vanson Bourne study, 87% of business IT leaders said that AI development should be regulated.
Part of the reason for that desire among IT leaders is that, in the absence of laws, enterprises have no way to know whether they are doing enough to ensure their use of AI is ethical. Regulation would give them a way to reassure customers, because they could point to compliance with all relevant laws. Without those laws, gaining -- and keeping -- customer trust could prove more difficult.
But even without regulation, businesses can -- and should -- take steps to ensure that their use of AI is ethical. The following slides offer nine things companies can do to improve their ethical stance when it comes to AI.
Cynthia Harvey is a freelance writer and editor based in the Detroit area. She has been covering the technology industry for more than fifteen years.