Generative AI and Building a More Trustworthy Digital Space

In a world already flooded with misinformation, businesses eye ethical use guidelines to implement generative artificial intelligence technologies more responsibly.

Nathan Eddy, Freelance Writer

April 18, 2023

6 Min Read

While generative AI has broken through as a creative technology for content generation, it also presents real questions about ethics and responsible deployment.

In March, a Salesforce survey of more than 500 senior IT leaders found that while a third of respondents named generative AI a top development priority for the next 18 months, almost two-thirds (63%) saw bias in its outputs.

Three in 10 respondents said their businesses must put ethical use guidelines in place to implement generative AI successfully.

Mis- or disinformation at scale, plagiarism and copyright infringement, privacy breaches, fraud and misrepresentation, and an inability to guarantee factual or truthful output are just a few practical ethical issues generative AI technology raises.

“Generative AI, as with any AI, has the potential to unintentionally discriminate or disparage and cause people to feel less valued based on the data it’s trained on,” explains Andy Parsons, senior director of the Content Authenticity Initiative at Adobe. “That’s why it’s critical that AI is developed responsibly, including training on safe and inclusive datasets and conducting rigorous and continuous testing with feedback mechanisms.”

In addition, generative AI raises concerns over its ability to generate convincing synthetic content in a world already flooded with misinformation.

“This is why we must provide transparency about the content that generative AI models produce, to empower creators and provide consumers with information to decide whether they want to trust a piece of content or not,” Parsons adds.
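
The transparency Parsons describes is usually delivered as machine-readable provenance attached to the content itself. As a purely illustrative sketch (the function, field names, and structure below are hypothetical assumptions, not the Content Authenticity Initiative's or C2PA's actual format, which is a signed standard), a minimal provenance record might look like this:

```python
import hashlib
import json
from datetime import datetime, timezone


def build_provenance_record(content_bytes: bytes, model_name: str, prompt_summary: str) -> dict:
    """Build a simple, machine-readable provenance record for AI-generated content.

    Illustrative only: real content credentials (e.g., the C2PA standard the
    Content Authenticity Initiative supports) are cryptographically signed and
    far richer than this plain dictionary.
    """
    return {
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),  # fingerprint of the asset
        "generated_by": model_name,                                   # which model produced it
        "generated_at_utc": datetime.now(timezone.utc).isoformat(),
        "prompt_summary": prompt_summary,
        "ai_generated": True,                                         # the disclosure consumers need
    }


if __name__ == "__main__":
    record = build_provenance_record(
        b"<generated image bytes>", "example-image-model", "city skyline at dusk"
    )
    print(json.dumps(record, indent=2))
```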

He says it’s imperative that platforms, publishers, tool providers, and governments work together to build a more trustworthy digital space.

“We must also continue to educate creators about responsible use and consumers about being discerning of digital content,” he notes. “By sharing best practices and adhering to standards to develop generative AI responsibly, we can unlock the incredible possibilities this technology holds.”

Developing Actionable AI Ethics

Having a set of actionable AI ethics principles and a formal review process embedded in a company’s engineering team can help ensure that AI technologies are developed responsibly and in a way that respects customers and communities.

Kimberly Nevala, strategic advisor and advisory business solution manager at SAS, says there is a steadily growing community of AI advocates, researchers and practitioners, many of whom have been raising the alarm and advocating for meaningful governance for years.

“Organizations deploying these systems do not just highlight potential misuse or harms but are proactive in avoiding or mitigating foreseeable harm or risk by whatever means necessary,” she explains. “Up to and including not deploying systems when their safety or resiliency cannot be reliably guaranteed.”

She adds, however, that this is not likely to occur until it is required to occur, which is why creating and exercising meaningful legal and regulatory guardrails is crucial.

“For better or worse, markets and marketing still drive much of the broad public discussion today,” Nevala says. “However, the tide is slowly shifting.”

Anurag Malik, chief technology officer at ContractPodAI, says that when a company decides to embrace AI technology, it needs to do so under a structured framework with clearly defined, value-based principles. “This is what creates transparency and establishes trust between the customer, end-user and the company,” he says. “These values should encourage teams to follow precedents and standardized processes when integrating AI models into existing processes -- there’s no need to completely reinvent the wheel.”

Malik adds that measurement is also important; for example, tracking the performance of AI across core business functions and using real-time benchmarking analytics. “This can help provide a holistic view of where the technology is making an impact and ensure this technology is being used in an ethical and responsible manner,” he says.
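
Malik's measurement point can be sketched in code. The following is a minimal, hypothetical illustration of per-function tracking with a rolled-up summary; the metric names, the 0.8 accuracy threshold, and the business functions are assumptions made for the example, not ContractPodAI's product or benchmarks:

```python
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class AIUsageTracker:
    """Collects simple per-function quality metrics for AI-assisted tasks.

    Hypothetical sketch: the metric names, the 0.8 review threshold, and the
    business functions used below are assumptions for illustration only.
    """
    records: dict = field(default_factory=dict)

    def log(self, business_function: str, accuracy: float, reviewed_by_human: bool) -> None:
        # Record one AI-assisted task outcome under the business function it served.
        self.records.setdefault(business_function, []).append(
            {"accuracy": accuracy, "reviewed": reviewed_by_human}
        )

    def summary(self) -> dict:
        # Roll up per-function averages so leaders can see where AI helps and where it needs review.
        out = {}
        for fn, rows in self.records.items():
            avg = mean(r["accuracy"] for r in rows)
            out[fn] = {
                "avg_accuracy": round(avg, 3),
                "human_review_rate": sum(r["reviewed"] for r in rows) / len(rows),
                "needs_attention": avg < 0.8,  # flag functions below the assumed quality bar
            }
        return out


if __name__ == "__main__":
    tracker = AIUsageTracker()
    tracker.log("contract_review", accuracy=0.92, reviewed_by_human=True)
    tracker.log("contract_review", accuracy=0.74, reviewed_by_human=False)
    tracker.log("customer_support", accuracy=0.88, reviewed_by_human=True)
    print(tracker.summary())
```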

Brian Platz, CEO and co-founder of Fluree, says on an individual business level, it’s important for executives who make business decisions to clearly understand all concerns over generative AI adoption. “These may come from employees who are unfamiliar with the technology and may not be comfortable with using it, or stakeholders who want to keep it out of their operations,” he explains.

Business leaders must establish boundaries for tools like ChatGPT to keep concerns such as plagiarism and bias from affecting their work. “The fundamental problem is that existing tools like ChatGPT use algorithms that aggregate any data, including potentially untrustworthy information without vetting for accuracy, bias, quality or meaning,” he says.

As a result, ChatGPT output can sound incredibly fluent yet contain serious inaccuracies, which in turn creates further concerns around plagiarism.

“If AI comes up with content that is used verbatim, or even is used as inspiration, the ethics of using that content as if it is original are in question,” Platz says.

As AI Grows More Powerful, New Challenges Arise

A recent open letter signed by Tesla and Twitter owner Elon Musk and a group of AI experts and industry executives called for a six-month pause in developing AI systems more powerful than Microsoft-backed OpenAI's GPT-4, warning of “profound risks” to society and humanity.

Parsons says that, like all powerful technologies, AI comes with its own set of challenges.

“Whether or not a pause on development is realistic, we all need to be thoughtful about the future and ask ourselves – what is the implication of this technology on our customers and society? What can we do now to stay ahead of the challenges?” he asks. “That's the foundation on which we operate, and we believe that’s the foundation on which all companies should operate.”

He adds it is incumbent on all stakeholders to be thoughtful about how AI technologies are brought to market -- from sourcing content properly to providing transparency for consumers and creators.

“Ultimately, we believe that AI done right will amplify human creativity and intelligence to new levels – but the human creator should remain at the core of everything we do,” Parsons says.

From Nevala's perspective, it's not realistic to believe development of AI -- generative or otherwise -- can be halted full stop. “That does not mean the horse is out of the barn and we have no choice but to hang onto the reins as best we can,” she adds.

She says substantive progress will require a spectrum of actions, including but not limited to meaningful regulatory and legal governance; more research into creating transparent, reliable, and controllable AI systems; and restricting certain usages.

“This is along with, I hope, an increased focus on discussing the philosophical, social, and political ramifications of these sociotechnical systems,” Nevala says.

About the Author

Nathan Eddy

Freelance Writer

Nathan Eddy is a freelance writer for InformationWeek. He has written for Popular Mechanics, Sales & Marketing Management Magazine, FierceMarkets, and CRN, among others. In 2012 he made his first documentary film, The Absent Column. He currently lives in Berlin.
