
ChatGPT: Enterprises Eye Use Cases, Ethicists Remain Concerned

Everyone seems to be testing OpenAI's new AI chatbot with commands and questions. So InformationWeek asked ChatGPT to write a news story about itself. Enterprises foresee a number of use cases, but ethicists are concerned about how real the responses seem.

Everybody seems to be testing ChatGPT right now, whether they are asking it to write a sonnet or a short piece of fiction, posing scientific questions such as the distance between the Earth and the Sun, telling it to write a short JavaScript program, or having it compare and contrast the Hebrew and Yiddish languages.

OpenAI released the AI chatbot publicly on November 30 and invited the public to try it out and provide feedback so the lab could learn about its strengths and weaknesses. And the public has accepted the invitation. Ask ChatGPT a question, and the responses often sound frighteningly real, even if they aren't always completely accurate.

We were curious about ChatGPT, too, but we're also super busy getting ready for the holidays. So on December 13 we asked ChatGPT to write a technology news story about its introduction, enterprise use cases, and the ethical concerns that have been raised about it. Right away, it was clear there were some issues: it said ChatGPT was released on December 13, not the correct date of November 30.

ChatGPT's News Story

    "On December 13, 2022, the artificial intelligence research lab OpenAI announced the release of Chat GPT, a powerful language model that is capable of generating human-like text in real-time. The technology has already sparked excitement among enterprise businesses, who see a wide range of potential use cases for Chat GPT. However, ethicists have raised concerns about the potential misuse of the technology.

    One potential use case for Chat GPT is in customer service. With the ability to generate human-like text, Chat GPT could be used to automate responses to customer inquiries, freeing up human customer service agents to focus on more complex tasks. This could improve the customer experience, as well as reduce the cost of customer service for businesses.

    However, ethicists have raised concerns about the use of Chat GPT in customer service. They argue that it could lead to a loss of jobs for human customer service agents, and could potentially result in a worse customer experience if the technology is not able to adequately handle complex or sensitive inquiries.

    Another potential use case for Chat GPT is in journalism. The technology could be used to automatically generate news articles, saving journalists time and allowing them to focus on more in-depth reporting and analysis. However, ethicists have raised concerns that this could result in the spread of fake news, as Chat GPT could potentially be used to generate false or misleading information.

    Overall, Chat GPT is a powerful technology that has the potential to transform many industries. However, its use also raises significant ethical concerns that need to be carefully considered before it is widely adopted."

ChatGPT certainly was faster at writing a story about itself than I ever would have been, clocking in at just about a minute. However, as an editor I'm sorry to see a lack of original quotes from enterprise users or analysts about how this might be applied in businesses. This story could also use some original quotes from the ethicists who have expressed concern about ChatGPT and a deeper look at how those ethical questions might come into play in the enterprise. I'd also like this story to provide some perspective for CIOs on the benefits and pitfalls of this type of technology. 

To wrap this up with a bit of humor, I asked ChatGPT to tell me a joke about an IT professional. Here's what it said: "Why was the computer cold? Because it left its Windows open."

