AI Isn’t Fully Explainable or Ethical (That’s Our Challenge)

With businesses often confused about what artificial intelligence is and can do for an organization, the role of defining AI falls to IT and data professionals.

Jing Huang, Senior Director of Engineering, Machine Learning

October 2, 2023

4 Min Read

Most new technologies worth their salt will inspire debate and be met with extensive discourse around their merits, morality, and potential impact. Artificial intelligence is no exception.

Fear, uncertainty, and doubt are natural. In fact, it would be strange not to see the extensive AI debate dominating our headlines today. Debate fuels questions that in turn prompt necessary and valuable investigation and verification of the evolving technology. This is a good thing.

The ethical development and use of AI lies in harmonizing human input and validation with machine-based findings. Let’s explore three things AI isn’t, to better understand what AI is.

AI isn’t perfect. There is a common misunderstanding that just because it’s called artificial intelligence, AI must be freakishly smart. The reality is that AI and ML technologies are not perfect; their fallibility means they must learn from mistakes and misinformation in order to advance productively over time.

AI systems are programmed and trained by humans. That fact alone shows how much error and inadvertent bias they can contain from the start. As with any technology development process, checks and balances must be put in place.

Consider useful tools like ChatGPT. It’s not perfect, but if ChatGPT can help humans do something better and more efficiently, we should weigh the risk against the reward. The history of technological evolution has shown us time and again that whenever technology takes over human labor, the freed-up human effort is redirected to other areas.

AI isn’t fully explainable (P.S. It’s not new!) The inner workings of AI algorithms, and how machines learn, are not yet fully explainable. It’s similar to trying to explain how our brain works: the concept is understandable at a fundamental level, but the nuances and complexities are harder to comprehend.

One thing we know for sure is that we will always be learning alongside AI technology itself. And those in the technology business have been learning from AI for decades.

Here’s a fact that surprises many people: AI technology isn’t new. It was largely science fiction until the early 1950s, when the concept of AI took hold among the thought leaders of the day. It has gradually been infiltrating nearly every aspect of our lives ever since.

So, why is AI making headlines everywhere now? Today there is broader, large-scale access to AI and its advancements, lowering the barrier for entry to its use. Now that the technology is mature, there are more opportunities for enterprises -- and even consumers -- to leverage it.

The question on everyone’s lips is: How can we use this “co-pilot” technology to re-evaluate our processes, improve efficiencies, and reimagine the work we’re doing? Let’s face it, some of us are also asking: Am I going to lose my job to AI?

AI isn’t logical or ethical on its own. AI is a great impersonator, but it will never be a person. It only mimics human patterns. Human involvement and validation will always have a place in AI.

AI doesn’t have a true understanding of human logic or ethics, though this is a domain where we should never say never. It can, however, be used to help build stronger connections between people.

As more people interact with AI, the technology will learn and adapt, and we will continue to identify where the algorithms can be trusted and where human validation is still required. However, we must be careful not to inadvertently inject our own biases in ways that distort machine-based findings.
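
To make that balance concrete, one common pattern is a confidence gate that sends low-confidence model outputs to a person instead of acting on them automatically. The sketch below is purely illustrative; the 0.85 threshold, the Prediction class, and the route function are assumptions for this example, not something prescribed in this article.

```python
# Minimal sketch of a human-in-the-loop confidence gate.
# Threshold, class, and function names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float  # model's own probability estimate, between 0 and 1

CONFIDENCE_THRESHOLD = 0.85  # tuned per use case; 0.85 is only an example

def route(prediction: Prediction, human_review_queue: list) -> str:
    """Trust the model when it is confident; otherwise ask a person."""
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        return prediction.label  # automated path
    human_review_queue.append(prediction)  # human validation path
    return "pending_human_review"

# Example usage
queue: list = []
print(route(Prediction("approve", 0.97), queue))  # -> "approve"
print(route(Prediction("approve", 0.55), queue))  # -> "pending_human_review"
```

In practice, the threshold is tuned against the cost of a wrong automated decision versus the cost of human review time.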

As AI matures and the tools it enables evolve, we expect to build even more human validation into AI systems and processes at every step -- not less. Human roles will shift over time as the technology evolves, and as that happens, we will learn how to perform those roles more effectively.

So, What’s Next?

AI isn’t perfect, fully explainable, or capable of understanding human logic or ethics; nor is any technology on the market today. AI’s inherent fallibility is essential to how it learns and improves.

AI relies on reinforcement learning to evolve and progress. The goal (and challenge) for humans in cultivating successful development and use of AI lies in striking the right balance between humanity and technology. Many experts across technology and regulatory bodies are hard at work on this today.
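
As a rough, hedged illustration of that feedback loop, the toy sketch below treats human thumbs-up and thumbs-down signals as the reward in a simple bandit-style update; the variant names, learning rate, and simulated preferences are assumptions made up for this example.

```python
# Toy reinforcement-style loop: human feedback acts as the reward signal.
# Variants, learning rate, and simulated preferences are illustrative assumptions.

import random

variants = {"response_a": 0.0, "response_b": 0.0}  # estimated value of each variant
LEARNING_RATE = 0.1
EPSILON = 0.2  # how often to explore instead of exploiting the current best

def choose_variant() -> str:
    if random.random() < EPSILON:
        return random.choice(list(variants))   # explore
    return max(variants, key=variants.get)     # exploit

def update(variant: str, human_feedback: int) -> None:
    """human_feedback: +1 for thumbs-up, -1 for thumbs-down."""
    variants[variant] += LEARNING_RATE * (human_feedback - variants[variant])

# Simulated interactions: people tend to prefer response_b.
for _ in range(200):
    chosen = choose_variant()
    reward = 1 if (chosen == "response_b" and random.random() < 0.8) else -1
    update(chosen, reward)

print(variants)  # response_b should end up with the higher estimated value
```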

While AI isn’t fully explainable or ethical, let’s remind ourselves that this is a new kind of collective learning process for us all. We, and AI technology, are constantly advancing together.

About the Author

Jing Huang

Senior Director of Engineering, Machine Learning, SurveyMonkey

Jing Huang is Senior Director of Engineering, Machine Learning at SurveyMonkey. She leads the machine learning engineering team, with the vision to empower every product and business function with machine learning. Previously, she was an entrepreneur who devoted her time to building mobile-first solutions and data products for non-tech industries. She also worked at Cisco Systems for six years, where her contributions ranged from security to cloud management to big data infrastructure.
