ChatGPT: An Author Without Ethics
If you're offended by AI-generated content, who should you blame? That's just one of many questions surrounding ChatGPT.
Controversial authors such as Orwell, Nabokov, Swift, and Rushdie, among many others, have shouldered substantial criticism over the years. Opposing prevailing opinion certainly isn't a game for weaklings. But who should take the blame when a computer ticks people off?
That's one of the many questions that remain to be answered as ChatGPT enters the business mainstream.
The Technology
ChatGPT is a generative AI capability, trained by OpenAI, which can be used to create unique and custom written content based on user prompts, usually written as text-based requests, explains Wayne Butterfield, a partner with technology research and advisory firm ISG. He notes that the technology marks a huge leap forward in natural language generation (NLG), the process of transforming data into natural language via AI. “From writing rap verses in the style of your favorite artist about your favorite food to 2,000-word articles on a scientific topic ... this has generated more buzz and widespread usage -- even outside the AI community -- than anything else I’ve seen to date.”
ChatGPT is an impressive standalone product, but businesses should focus on the underlying technology, generative AI, suggests François Candelon, global director of the BCG Henderson Institute, Boston Consulting Group's research think tank. “Generative AI refers to programs that use foundation models, trained on massive amounts of broad data, to generate new content that mimics human-like responses,” he says. ChatGPT, for instance, was trained almost entirely with publicly available data on the Internet. This broad data set allows for transfer learning, through which the model can acquire hidden patterns and use that knowledge to take on unrelated downstream tasks. “For example, ChatGPT learns the concepts of ‘marketing’ and ‘pizza’ to create a marketing slogan for a pizza shop when prompted,” Candelon says.
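Candelon's pizza-slogan example boils down to sending a prompt that combines two learned concepts to a hosted model. A minimal sketch of that interaction, using the OpenAI Python client, is below; the helper function name, the prompt wording, and the model name are illustrative assumptions, not details from the article, and the network call is gated on an API key so the sketch runs without one.

```python
# Sketch of prompting a generative model, per Candelon's example of
# combining the concepts "marketing" and "pizza" into one request.
import os


def build_slogan_prompt(product: str, task: str = "marketing slogan") -> str:
    """Combine a task concept and a product concept into a single prompt."""
    return f"Write a short {task} for a {product} shop."


prompt = build_slogan_prompt("pizza")
print(prompt)  # Write a short marketing slogan for a pizza shop.

# Only attempt the API call when a key is configured (assumed setup).
if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI  # requires the `openai` package, v1.0+

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
```

The point of the sketch is that the caller supplies no marketing copy and no pizza data; the model's broad pretraining is what lets a generic prompt yield a usable slogan.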
Beyond marketing, generative AI is attracting the attention of organizations in numerous creative industries, which are turning to the technology for various tasks, such as story generation, animation, and speech dubbing. “However, as the technology matures, all industries in which knowledge can be digitized, such as biopharma, engineering, manufacturing, or even consulting, will be transformed by generative AI,” Candelon predicts. “Organizations -- regardless of size and complexity -- within these industries will develop novel applications of generative AI, thanks to its democratizing nature.”
Lucky Gunasekara, founder and CEO of Miso.ai, a search personalization service provider, reports that his firm is already using ChatGPT to help write blog posts, landing-page copy, e-mail campaigns, and sales outreach content. “In essence, ChatGPT is a great tool for mere humans to use to accelerate their work and broaden their reach creatively,” he notes. “Writer’s block is tough when you’re on a deadline, and ChatGPT can help massively in what I’ve found to be fun and pleasant ways.”
Ethical Concerns
ChatGPT's biggest drawback is that it frequently makes mistakes or arrives at incorrect conclusions. “Any business that experiments with ChatGPT must understand what it gets wrong, as well as what it gets right,” says Mike Loukides, vice president of content strategy at IT training and publishing firm O’Reilly Media.
Another limitation is that ChatGPT is not inherently creative. “It's excellent at digesting information that's already out there,” Loukides says. “I've seen scientists discussing its ability to summarize research, but it doesn’t yet have the imagination to create genuinely new ideas.”
Gunasekara believes that while ChatGPT is a compelling tool for individuals, it also poses a threat to newsrooms, publishers, and independent authors, as well as anyone trying to make a living by providing expert, curated, and accurate writing. “If OpenAI presented a model where they would cite and even compensate the original source creators and publishers, I’d be less disturbed, but I can’t see that happening,” he notes.
On the positive side, compared to other AI-driven chat systems, ChatGPT tends to shy away from spewing hate speech and grotesquely bad advice, Loukides says. Yet it's still far from perfect. “For example, it's become something of a game to get ChatGPT to tell you how to commit crimes,” he says. “It will refuse to tell you how to do something that's against the law, but if you tell it you're writing a novel, and need a convincing way to commit a crime, it will comply.”
Researchers at security vendor Check Point reported this week that they have spotted cyber attackers already using ChatGPT to help them write malicious code.
Content Rights and Responsibilities
The issue of ChatGPT content ownership is complicated and contentious. ISG's Butterfield believes that the potential for trained language models to infringe on intellectual property (IP) rights is strong, since in many cases copyrighted content was used to train the model in the first place. “As these systems become more widely used, there's a risk that they may inadvertently or intentionally generate content that infringes on the IP rights of others, or that is duplicative of other AI-generated content,” he explains. “This really could become a minefield and is why we won’t have many enterprises adopting the technology for their own use until all of this is resolved.”
The issue of who owns ChatGPT output and who economically benefits from it hasn't yet been resolved, BCG Henderson Institute's Candelon says. Similar questions surround content moderation. “Who is responsible if generative AI exponentially increases misinformation or generates inappropriate content?” he asks. Another dilemma: who holds responsibility if generative AI makes a mistake in a high-stakes field such as medicine? “These and other issues will be debated by society in the coming years,” Candelon predicts.
Into the Future
ChatGPT is currently based on the GPT-3.5 language model. Meanwhile, GPT-4 is on the way, promising even stronger capabilities. “Facebook/Meta and Google have built their own chat engines with capabilities similar to ChatGPT,” Loukides says. “At the same time, training these extremely large models and operating them after they've been built is extremely expensive and beyond the capabilities of most companies.”
Loukides believes there will soon be smaller models that are equally effective but focused on a particular domain or market. “Such models will ... run on hardware that a typical company can afford, with response times that are acceptable to their users,” he says.