9 Steps Toward Ethical AI
Few current laws address the use of artificial intelligence. That puts companies under greater pressure to reassure the public that their AI applications are ethical and fair.
![](https://eu-images.contentstack.com/v3/assets/blt69509c9116440be8/blted15f271771e865b/64cb4e122a060b2c3247da51/00AIEthics.jpg?width=700&auto=webp&quality=80&disable=upscale)
As artificial intelligence becomes more commonplace in the enterprise, more IT leaders are becoming concerned about the ethical implications of AI. In fact, a 2019 Vanson Bourne report sponsored by SnapLogic found that 94% of the 1,000 U.S. and UK IT decision makers surveyed believe that people should be paying more attention to corporate responsibility and ethics in AI development.
You don't have to look hard to find reasons for their concern. Several prominent tech companies have become embroiled in scandals after AIs they created failed to behave as desired. For example, in 2015, Google drew criticism when users discovered that its image recognition software was tagging photos of Black people as "gorillas." Although the tech giant promised to fix the problem, three years later the only "fix" it had in place was to remove the AI's ability to identify gorillas at all. And Microsoft suffered a black eye when its AI-based Twitter bot Tay began spewing racist messages just a few hours after launch.
In May 2019, San Francisco became the first major U.S. city to ban most uses of facial recognition software by city agencies, in part because of potential bias in the technology. Several smaller municipalities have passed or are considering similar bans.
While these AI missteps were widely reported, many people worry that more pervasive -- and insidious -- AI offenses may be occurring behind the scenes without public knowledge. Customers might never know whether they were denied a loan or flagged for fraud because of an ethically dubious AI algorithm.
Organizations like AI Now Institute at New York University and even the Southern Baptist Convention have called for companies using AI to become more transparent and agree to follow some ethical principles. In response, some enterprises, including both Google and Microsoft, have published their internal guidelines governing AI use.
However, many people feel that this doesn't go far enough. Instead, they want government bodies to get involved and issue regulations. And it's not just consumers who feel that way. In the Vanson Bourne study, 87% of business IT leaders said that AI development should be regulated.
Part of the reason for that desire among IT leaders is that in the absence of laws, enterprises have no way to know if they are doing enough to ensure that their use of AI is ethical. Regulation might give them some ability to reassure customers about their use of artificial intelligence, because they could say that they were in compliance with all relevant laws. Without those laws, gaining -- and keeping -- customer trust could be more difficult.
But even without regulation, businesses can -- and should -- be taking steps to ensure that their use of AI is ethical. The following slides offer nine things companies can do to improve their ethical stance when it comes to AI.
At a minimum, organizations need to make sure that their AI applications comply with data privacy regulations. Most AI software, and machine learning in particular, depends on vast quantities of data for its operation, and enterprises are responsible for ensuring that their handling of that data complies with the law. In the U.S., enterprises might need to comply with the Health Insurance Portability and Accountability Act (HIPAA), the Children's Online Privacy Protection Act (COPPA) or other federal or state laws.
No matter where they are located, if organizations have any European customers or employees, they must also comply with the General Data Protection Regulation (GDPR). This sweeping legislation requires, among other things, that organizations store data for the shortest time possible and that they give individuals a way to view and delete their personal data. Most significantly in terms of AI, GDPR also says that "individuals should not be subject to a decision that is based solely on automated processing (such as algorithms) and that is legally binding or which significantly affects them."
Besides complying with government regulations, organizations also should practice good science. In its report "The Ethics of AI: How to Avoid Harmful Bias and Discrimination," Forrester recommends, "To prevent algorithmic bias in models, you need to adhere to fundamental principles of data mining by ensuring that your training data is representative of the population on which you plan to use the model."
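Forrester's advice about representative training data can be checked mechanically before any model is trained. The plain-Python sketch below (the group labels and population shares are hypothetical, used only for illustration) compares each group's share of a training set against its share of the target population and flags any gap larger than a chosen tolerance:

```python
from collections import Counter

def representation_gaps(train_labels, population_shares, tolerance=0.05):
    """Flag groups whose share of the training data deviates from their
    share of the target population by more than `tolerance`."""
    counts = Counter(train_labels)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            # Positive gap: over-represented; negative: under-represented.
            gaps[group] = round(observed - expected, 3)
    return gaps

# A training set that heavily over-samples group "A" relative to the
# population the model will be used on:
train = ["A"] * 90 + ["B"] * 10
population = {"A": 0.6, "B": 0.4}
print(representation_gaps(train, population))  # {'A': 0.3, 'B': -0.3}
```

The right tolerance depends on the application; the point is that the comparison is cheap enough to run on every new training set.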
Other experts recommend that data scientists repeatedly test and validate their models and that they maintain a way to track data lineage. While few business executives understand the intricacies of machine learning, they have an obligation to make sure that their data scientists are complying with industry best practices.
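One lightweight way to track data lineage, as these experts suggest, is to log a content fingerprint of each dataset alongside its source and the transformations applied to it. The Python sketch below is a minimal illustration of the idea; the source names and field layout are hypothetical, not from any particular tool:

```python
import hashlib
import json

def record_lineage(dataset_rows, source, transform_notes, lineage_log):
    """Append a lineage entry recording where a dataset came from, what
    was done to it, and a fingerprint of its contents, so any model
    trained on it can later be traced back to its exact inputs."""
    fingerprint = hashlib.sha256(
        json.dumps(dataset_rows, sort_keys=True).encode()
    ).hexdigest()
    lineage_log.append({
        "source": source,
        "transform": transform_notes,
        "sha256": fingerprint,
    })
    return fingerprint

lineage_log = []
rows = [{"age": 34, "approved": True}, {"age": 51, "approved": False}]
record_lineage(rows, "crm_export_2019_05", "dropped PII columns", lineage_log)
# If the same rows are ever re-ingested, the fingerprint will match;
# if the data silently changed, it won't.
```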
For decades, science fiction writers have been warning of the potentially apocalyptic dangers of artificial intelligence. Now that AI is becoming widespread, it's important not to dismiss the possibility of AI harming humans just because it's something that has appeared in books and movies. Applications like autonomous vehicles and AI-based weapons systems clearly impact human safety, and it is up to their designers to ensure that these systems are as safe as possible. And while other machine learning systems might not directly affect human beings' physical safety, they could have a big impact on their privacy and online security.
One of the most important steps that enterprises can take with regard to AI ethics is to make sure that they fully understand the operation of any AI systems they use. In an email interview, Ryan Welsh, CEO of AI vendor Kyndi, said, "To realize the full potential of AI, trust is crucial. Trust comes from understanding -- and being able to justify -- the reasoning behind a system’s conclusions and results. AI cannot be a black box, as it so often is today." He added, "For AI to thrive, it needs to be explainable."
AI Now Institute adds, "For meaningful accountability, we need to better understand and track the component parts of an AI system and the full supply chain on which it relies: that means accounting for the origins and use of training data, test data, models, application program interfaces (APIs), and other infrastructural components over a product life cycle."
Machine learning systems are only as good as the data on which they depend. For many organizations, data quality continues to be a major concern. In the worst-case scenario, bad data could lead organizations to make inaccurate and even ethically compromised decisions.
On the other hand, having accurate, up-to-date data increases the likelihood that AI applications will yield positive financial benefits. In an email interview, James Cotton, international director of the Data Management Centre of Excellence at Information Builders, said, "The returns are far greater when analytics is applied to well managed data. Being transparent about knowing what you have, where it came from and how it is being used leads to significantly larger returns."
Data scientists not only need to make sure their data is clean, but they also need to make sure their data and data models don't contain any inherent bias. This problem can inadvertently creep into machine learning models in a couple of different ways.
First, organizations may have incomplete training data. For example, if you train your facial recognition system only with European faces, you shouldn't be surprised when the system has difficulty recognizing African or Asian faces.
Second, many data sets include historical bias. For example, some professions, like nursing or engineering, have traditionally been dominated by one gender. If you train your AI-based HR system to select candidates for interviews based on this historical data, you could accidentally end up perpetuating stereotypes and possibly even violating anti-discrimination laws.
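One common heuristic for catching this kind of historical bias before deployment is the "four-fifths rule" used in U.S. employment-discrimination analysis: if any group's selection rate falls below 80% of the highest group's rate, the outcome deserves review. Below is a minimal Python sketch of that check, with made-up screening numbers for illustration:

```python
def four_fifths_check(outcomes):
    """Apply the 'four-fifths rule' heuristic: flag any group whose
    selection rate is below 80% of the highest group's rate.

    outcomes: dict mapping group -> (selected_count, total_count)
    """
    rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
    best = max(rates.values())
    # Report each flagged group's rate as a fraction of the best rate.
    return {g: round(r / best, 3) for g, r in rates.items() if r / best < 0.8}

# Hypothetical interview-screening results from an AI-based HR system:
screened = {"men": (40, 100), "women": (15, 100)}
print(four_fifths_check(screened))  # {'women': 0.375}
```

A flag from a check like this doesn't prove discrimination, but it tells you exactly where a human needs to look.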
Any decent data scientist will tell you that no data model is perfect. The best they can hope for is to make improvements over time. That means real live human beings need to monitor these systems, watching for potential ethical problems. Many of the organizations that have published AI guidelines say that all AI needs to be accountable to human beings. But some add that mere accountability isn't enough; the human supervisors need to actively track the AI's actions and interactions with humans, making adjustments as necessary to ensure that the technology does not cross any ethical boundaries.
On a related note, you need to be able to "rewind" any decision an AI makes. Forrester recommends that AI systems be fundamentally sound, assessable, inclusive and reversible (which spells FAIR). That means designing systems to be modifiable, and it may also necessitate having human appeal boards for things like credit decisions.
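A "reversible" system of this kind needs, at minimum, a record of every automated decision that a human reviewer can later override. The Python sketch below shows the basic shape of such an audit-and-appeal log; the class and field names are illustrative, not taken from any particular product:

```python
class DecisionLog:
    """Minimal audit trail for automated decisions: every decision is
    recorded, and a human reviewer can later reverse it."""

    def __init__(self):
        self.entries = []

    def record(self, subject_id, decision, model_version):
        self.entries.append({
            "subject": subject_id,
            "decision": decision,
            "model": model_version,
            "overridden_by": None,
        })
        return len(self.entries) - 1  # entry id, usable in an appeal

    def override(self, entry_id, reviewer, new_decision):
        entry = self.entries[entry_id]
        entry["overridden_by"] = reviewer
        entry["decision"] = new_decision

log = DecisionLog()
entry_id = log.record("applicant-42", "deny", "credit-model-v3")
# A human appeals board reverses the automated denial:
log.override(entry_id, "appeals-board", "approve")
print(log.entries[entry_id]["decision"])  # approve
```

Recording the model version with each decision is what makes the trail useful later: an appeal board can see not just what was decided, but which version of the system decided it.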
AI Now takes this idea one step further, saying, "Organizing and resistance by technology workers has emerged as a force for accountability and ethical decision making. Technology companies need to protect workers’ ability to organize, whistleblow, and make ethical choices about what projects they work on. This should include clear policies accommodating and protecting conscientious objectors, ensuring workers the right to know what they are working on, and the ability to abstain from such work without retaliation or retribution."
Several tech companies, including Google, have formed advisory councils to help guide their use of AI. While having a group of outsiders oversee your artificial intelligence efforts may help to build public trust, it also has the potential to backfire. Some companies have come under criticism for appointing AI ethics board members that some people find objectionable.
While these boards may be flawed, in the absence of regulation or industry standards, they may represent companies' best opportunity to convince the public that someone without a vested interest is overseeing AI development.