Why You Should Have an AI & Ethics Board

Guidelines are great -- but they need to be enforced. An ethics board is one way to ensure these principles are woven into product development and uses of internal data.

Jack Berkowitz, Chief Data Officer, ADP

January 24, 2022


Most businesses today have a great deal of data at their fingertips. They also have the tools to mine this information. But with this power comes responsibility. Before using data, technologists need to step back and evaluate the need. In today's data-driven, virtual age, it's not a question of whether you have the information, but whether you should use it and how.

Consider the Implications of Big Data

Artificial intelligence (AI) tools have revolutionized the processing of data, turning huge amounts of information into actionable insights. It's tempting to believe that all data is good, and that AI makes it even better. Spreadsheets, graphs, and visualizations make data "real." But as any good technologist knows, the old computing sentiment, "garbage in, garbage out," still applies. Now more than ever, organizations need to question where the data originates and how the algorithms interpret it. Buried in all those graphs are potential ethical risks, biases, and unintended consequences.

It's easy to ask your technology partners to develop new features or capabilities, but as more and more businesses adopt machine learning (ML) operations and tools to streamline and inform processes, the potential for bias grows. For instance, are the algorithms unknowingly discriminating against people of color or women? What is the source of the data? Is there permission to use the data? All these considerations need to be transparent and closely monitored.
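One concrete check a review team can run on that first question is a disparate-impact comparison of selection rates across groups. The sketch below is a minimal, hypothetical illustration -- the column names, the data, and the four-fifths threshold are assumptions for the example, not a description of any particular vendor's tooling:

```python
# Minimal disparate-impact check (the EEOC "four-fifths" rule of thumb).
# Column names, data, and the 0.8 threshold are illustrative assumptions.
import pandas as pd

def impact_ratios(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Selection rate of each group divided by the highest group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

candidates = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "selected": [1,    1,   1,   1,   0,   1,   1,   1,   0,   0],
})

ratios = impact_ratios(candidates, "group", "selected")
flagged = ratios[ratios < 0.8]  # groups selected at under 80% of the top rate
print(flagged)                  # group B (ratio 0.75) falls below the threshold
```

A ratio below 0.8 is a signal to investigate, not a verdict; the point is to trigger a human review of the model and its underlying data.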

Consider How Existing Law Applies to AI and ML

The first step in this journey is to develop data privacy guidelines. These include, for example, policies and procedures that address notice and transparency about data being used for AI, how information is protected and kept up to date, and how sharing data with third parties is governed. Ideally, these guidelines build on an existing, overarching data privacy framework.

Beyond privacy, other bodies of law may impact your development and deployment of AI. For example, in the HR space, it is critical that you refer to federal, state, and local employment and anti-discrimination laws. Likewise, in the financial sector, there is a range of applicable rules and regulations to take into account. Existing law continues to apply, just as it does outside the AI context.

Staying Ahead While Using New Technologies

Beyond existing law, the acceleration of technology, including AI and ML, makes the considerations more complex. AI and ML introduce new opportunities to discern insights from data that were previously unachievable, and in many cases they can do so better than humans. But AI and ML are ultimately created by humans, and without careful oversight they risk introducing unwanted bias and outcomes. Creating an AI and data ethics board can help businesses anticipate issues in these new technologies.

Begin by establishing guiding principles to govern the use of AI, ML, and automation in your company. The goal is to ensure that your models remain relevant and functional, and do not "drift" from their intended goal unknowingly or inappropriately. Consider these five guidelines:

1. Accountability and transparency. Conduct audit and risk assessments to test your models, and actively monitor and improve your models and systems to ensure that changes in the underlying data or model conditions do not inappropriately affect the desired results (a minimal monitoring sketch follows this list).

2. Privacy by design. Ensure that your enterprise-wide approach incorporates privacy and data security into ML and associated data processing systems. For example, do your ML models seek to minimize access to identifiable information to ensure that you are using only the personal data you need to generate insights? Are you providing individuals with a reasonable opportunity to examine their own personal data and to update it if it's inaccurate? (A data-minimization sketch appears after the list as well.)

3. Clarity. Design AI solutions that are explainable and direct. Are your ML data discovery and data usage models designed with understanding as a key attribute, measured against an expressed desired outcome?

4. Data governance. Understanding how you use data and the sources from which you obtain it should be key to your AI and ML principles. Maintain processes and systems to track and manage data usage and retention. If you use external information in your models, such as government reports or industry terminologies, understand the processes and impact of that information in your models.

5. Ethical and practical use of data. Establish governance to provide guidance and oversight on the development of products, systems and applications that involve AI and data.
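To make the monitoring point in guideline 1 concrete, here is a minimal sketch of what checking for drift can look like in practice. It is a simplified illustration under stated assumptions -- a single accuracy metric and an arbitrary 5% tolerance -- where a real system would track many more signals:

```python
# Minimal drift check: compare a model's live performance against the
# baseline it was validated at, and escalate when it degrades too far.
# The accuracy metric and the 5% tolerance are illustrative assumptions.

def has_drifted(baseline_accuracy: float,
                live_accuracy: float,
                tolerance: float = 0.05) -> bool:
    """True if live performance fell more than `tolerance` below baseline."""
    return (baseline_accuracy - live_accuracy) > tolerance

# Example: a model validated at 91% accuracy now scores 84% on recent data.
if has_drifted(baseline_accuracy=0.91, live_accuracy=0.84):
    print("Drift detected: escalate to the ethics board for review.")
```

The point is less the arithmetic than the process: a documented threshold, checked on a regular schedule, with a defined escalation path.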

Principles like these can both guide discussion about these issues and help to create policies and procedures about how data is handled in your business. More broadly, they will set the tone for the entire organization.
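Guideline 2's minimization question can be made similarly concrete. One common pattern, sketched below with hypothetical field names, is to pass models an explicit allowlist of fields so that identifying data never reaches them:

```python
# Privacy-by-design sketch: models receive only an explicit allowlist of
# non-identifying fields, never the full record. The field names here are
# hypothetical examples, not a real schema.
ALLOWED_FEATURES = {"tenure_months", "role_level", "region"}

def minimize(record: dict) -> dict:
    """Keep only the allowlisted fields from a record."""
    return {k: v for k, v in record.items() if k in ALLOWED_FEATURES}

employee = {
    "name": "Jane Doe",      # identifying -- dropped
    "ssn": "000-00-0000",    # identifying -- dropped
    "tenure_months": 42,
    "role_level": 3,
    "region": "Northeast",
}

model_input = minimize(employee)
# {'tenure_months': 42, 'role_level': 3, 'region': 'Northeast'}
```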

Create an AI & Ethics Board

Guidelines are great -- but they need to be enforced to be effective. An AI and data ethics board is one way to ensure these principles are woven into product development and uses of internal data. But how can companies go about doing this?

Begin by bringing together an interdisciplinary team. Consider including both internal and external experts: IT, product development, legal and compliance, privacy, security, audit, and diversity and inclusion, along with industry analysts, outside counsel, or an expert in consumer affairs. The more diverse and knowledgeable the team, the more effective your discussions can be around potential implications and the viability of different use cases.

Next, spend time discussing the larger issues. It’s important here to step away from process for a minute and immerse yourselves in live, productive discussion. What are your organization’s core values? How should they inform your policies around development and deployment of AI and ML? All this discussion sets the foundation for the procedures and processes you outline.

Setting a regular meeting cadence to review projects can be helpful as well. Again, the bigger issues should drive the discussion. For instance, most product developers will present the technical aspects -- such as how the data is protected or encrypted. The board's role is to analyze the project on a more fundamental level. Some questions to guide the discussion:

  • Do we have the rights to use the data in this way?

  • Should we be sharing this data at all?

  • What is the use case?

  • How does this serve our customers?

  • How does this serve our core business?

  • Is this in line with our values?

  • Could it result in any risks or harms?

Because AI ethics has become an increasingly important issue, there are many resources to help your organization navigate these waters. Reach out to your vendors, consulting firms, or trade groups and consortiums, like the Enterprise Data Management (EDM) Council. Implement the pieces that are appropriate for your business, but remember that tools, checklists, processes, and procedures should not replace the value of the discussion.

The ultimate goal is to make these considerations part of the company culture so that every employee who touches a project, works with a vendor, or consults with a client keeps data privacy front of mind.

About the Author

Jack Berkowitz

Chief Data Officer, ADP

Jack Berkowitz is Chief Data Officer for ADP, where he is responsible for ADP’s vision and approach to Artificial Intelligence, and the development of cloud-native machine learning solutions that span across ADP’s HCM product suites. With data across ADP’s population of nearly 30 million employee records in the U.S., ADP has a unique position in the market to deliver unmatched insights to clients.

Jack Berkowitz joined ADP in August 2018 as the Senior Vice President of Product Development for ADP® DataCloud, ADP's people analytics and compensation benchmarking solution. Jack came to ADP from Oracle, where he was Vice President, Products and Data Science for Oracle's Adaptive Intelligence program. In this role, he oversaw the market, technical, and sales strategy for Oracle's suite of next-generation intelligent applications, combining web data, data science, and platform cloud computing. Previously, he oversaw product strategy for analytics in Oracle's Cloud Applications, and product management and strategy for Oracle's Analytics portfolio.

Prior to Oracle, Jack spent 20 years in both product development and implementation of intelligent information systems, most recently involved in Web-scale search and recommendation systems, data-driven applications, and the Semantic Web. He has been on the executive team of four startups involved in search, reasoning or meta-data driven applications (Attivio, Siderean, Cerebra, and Reef Software) and he co-founded edapta, which enabled dynamic user interfaces and personalization for mobile and web clients. Berkowitz has delivered solutions with a wide range of Global 50 clients from Financial Services, Consumer Web, and Healthcare. Early in his career, he was involved with DARPA and FAA sponsored programs for user-experience and intelligent systems, FAA Aviation Security, and the certification of the B777 flight deck.

Jack has a master's degree in industrial engineering and operations research from Virginia Tech, and a bachelor's degree in psychology from the College of William and Mary.
