AI Investments We Shouldn't Overlook

Amid rapid advancements in AI, experts are placing great emphasis on how ethical the technology is. Here are three AI investments we shouldn’t overlook.

Reggie Townsend, Vice President, Data Ethics Practice

December 28, 2023

The era of AI is no longer the future -- it's the here and now. We’re seeing it on the news, and it’s being incorporated into the technology we have at our fingertips every single day.

These rapid advancements are forcing leaders to ask themselves about the trustworthiness of their technology, raising questions globally about how to integrate these emerging tools responsibly.

What is the role of humans in making these decisions? Who is culpable for these decisions? How are these decisions being monitored?

Trust is central to meaningful relationships and civil societies, and to questions about AI investment. 

If you’ve tuned into the news lately, you’ve likely heard about AI inaccuracies and its potential for bias. Unfortunately, AI also creates opportunities to erode trust in unimaginable ways.

We have a tension -- the need for trust and the possibility of eroding trust at scale. Organizations can’t force people to trust them, but they can behave in trustworthy ways.

To build that trust, providers must put in the work now, as trust begins before the first line of code.

AI Literacy Can Reduce Fear, Increase Adoption

Responsible innovation includes responsible rhetoric. Doomsday predictions of future AI catastrophes distract from the actual risks that exist now. Model bias can perpetuate historic injustices, deprive citizens of social benefits, or lead to unfair lending decisions.

We need an AI-literate public that better understands the real challenges currently facing AI deployment. AI should be done for us, not to us, and that requires people to grasp how the data they produce is used to train models, and how AI is used to make decisions and influence opinion.

Those impacted by AI should understand the basics of the technology in order to ease concerns and get the greatest benefit from its capabilities. As such, investing in education and learning frameworks is important.

The recent Biden Administration Executive Order on Safe, Secure, and Trustworthy Development of AI promotes increasing AI literacy in the federal workforce. That is essential, and ideally there will be efforts to expand this idea to the broader workforce, as well as to citizens not currently in the workforce.

Ideas to consider include a National AI Literacy Campaign, investing in formal education and existing learning frameworks to advance AI literacy, and investing in informal learning opportunities like public sessions, social media campaigns, and public messaging efforts.

A Call for Inclusive Contribution

AI is not a product -- it’s an ever-growing cycle of data usage, and people can be a decisive factor in its failure. This leads us back to trust, because most people don’t trust the technology or the leaders working to regulate it.

According to Pew Research, 52% of Americans say they feel more concerned than excited about the increased use of artificial intelligence. Those concerns are particularly strong in communities historically underrepresented in the design and deployment of technology. Meaningful participation from users, those impacted, and creators alike will improve ethical inquiry, help reduce harmful biases, and build confidence in AI’s fairness.

To help allay these concerns, we need to have “seats at the table” for people with broader domain expertise. This is especially vital in areas such as health, finance, and law enforcement, where bias has existed historically and is still a serious concern.

Additionally, we should consider funding the National Science Foundation’s National AI Research Resource Task Force, and similar efforts, to reduce the economic barriers to entry into AI professions.

AI can’t be the sole domain of data scientists. We must support traditional and non-traditional workforce education paths for technical and non-technical talent in order to create more robust AI systems.

Demonstrating Trust

Just like we have food nutrition labels, we should have comprehensive labels, or model cards, to summarize the properties and performance of AI models. These will give both technical and non-technical users a simple view of a model’s lifecycle, its data sources, key variables, accuracy, and so on.
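
As a rough sketch of what such a label could look like in practice, consider a small, structured record. Every field name and value below is hypothetical, invented for illustration rather than drawn from any published model-card standard:

from dataclasses import dataclass, field

# A minimal, hypothetical model card (Python 3.9+).
# Field names are illustrative, not an industry standard.
@dataclass
class ModelCard:
    name: str
    version: str
    intended_use: str
    data_sources: list[str]        # where the training data came from
    key_variables: list[str]       # inputs the model relies on most
    metrics: dict[str, float] = field(default_factory=dict)
    known_limitations: list[str] = field(default_factory=list)
    lifecycle_stage: str = "development"  # development -> production -> retired

card = ModelCard(
    name="loan-default-risk",
    version="1.2.0",
    intended_use="Rank consumer loan applications by default risk.",
    data_sources=["2019-2023 loan book", "credit bureau features"],
    key_variables=["income", "debt_to_income", "payment_history"],
    metrics={"accuracy": 0.87, "demographic_parity_gap": 0.04},
    known_limitations=["Not validated for small-business lending."],
)
print(card.name, card.lifecycle_stage, card.metrics)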

Trustworthy AI is an end-to-end process that should provide the means to measure and monitor performance, from the idea phase to the end of a model’s usefulness. AI models should be auditable, with easily digestible reports showing whether a model met, exceeded, or underperformed against its intended use. With an emphasis on fairness and explainability, a trustworthy approach to model management will help providers identify potential bias risks throughout the AI lifecycle, right up until a model is retired.
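
To make that concrete, here is a minimal sketch of one check such an audit report might run: comparing accuracy per group against a documented threshold and flagging where the model underperforms its intended use. The groups, labels, and 0.80 threshold are toy values invented for this example:

# Hypothetical audit check: per-group accuracy against a threshold
# documented in the model's intended use. All data here is toy data.
def audit_report(y_true, y_pred, groups, threshold=0.80):
    results = {}
    for g in sorted(set(groups)):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(1 for i in idx if y_true[i] == y_pred[i])
        accuracy = correct / len(idx)
        results[g] = {"accuracy": round(accuracy, 3),
                      "meets_threshold": accuracy >= threshold}
    return results

# Two toy groups: group "B" falls below the threshold -- exactly the
# kind of gap an easily digestible report should surface.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(audit_report(y_true, y_pred, groups))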

Remember the saying -- it’s always easier to keep someone’s trust than to win it back once it’s gone. Government leaders have taken thoughtful, productive steps toward prioritizing and investing in AI.

By building AI literacy and involving more people in the development and deployment of AI, supported by trustworthy AI capabilities, we can increase trust in AI and speed more widespread, responsible adoption.

About the Author

Reggie Townsend

Vice President, Data Ethics Practice, SAS

As Vice President of Data Ethics at global analytics and AI provider SAS, Reggie Townsend leads a global effort to empower employees and customers to deploy data-driven systems that promote human well-being, agency and equity. He has more than 20 years of experience in strategic planning, program management, consulting and business development, with a focus on advanced analytics, cloud computing and AI. He combines this expertise with a passion for equity and human empowerment. Townsend serves on the U.S. National Artificial Intelligence Advisory Committee (NAIAC) and sits on the board of EqualAI.
