ChatGPT Year One: The Drama & Disruption
As OpenAI's ChatGPT celebrates its first birthday, we look back at how it spurred stunning progress in generative artificial intelligence for consumers and businesses... and how it caused massive upheaval.
![ChatGPT concept ChatGPT concept](https://eu-images.contentstack.com/v3/assets/blt69509c9116440be8/blt0565bdb53ea3891a/656637d70c5c67040afc84c9/chat_gpt_birthday_lead_slide_2RF6W97.jpg?width=700&auto=webp&quality=80&disable=upscale)
Bildagentur-online via Ohde / Alamy Stock
Just five months after ChatGPT’s release, more than 1,000 tech leaders signed an open letter calling for a pause on AI research, warning of generative AI’s “profound risks to society and humanity” and calling for regulation.
Released by the nonprofit Future of Life Institute, the letter was signed by, among many others, Apple co-founder Steve Wozniak, Tesla CEO Elon Musk (who would go on to release his competing GenAI chatbot, Grok, just a few months later), 2020 presidential candidate Andrew Yang, and Turing Award winner Yoshua Bengio, along with an extensive list of CEOs and researchers in the field.
According to the letter, titled “Pause Giant AI Experiments: An Open Letter,” the lightning-fast pace of GenAI adoption had “AI labs locked in an out-of-control race to develop and deploy even more powerful digital minds that no one -- not even their creators -- can understand, predict, or reliably control.”
While the letter captured headlines around the world, a pause for GenAI development never materialized. But days later, Italy became the first nation to ban ChatGPT (temporarily) and other nations threatened to follow suit. Apple and many other firms banned internal use of ChatGPT over safety and security concerns. The coming weeks saw increased efforts calling for immediate regulations and safeguards for AI.
Against the backdrop of increasing concerns about generative AI dangers, OpenAI CEO Sam Altman appeared before the US Congress in May to discuss those risks and possible regulations. He said lawmakers should intervene to create parameters to prevent AI creators from causing “significant harm to the world.”
He added, “I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening.”
Altman also pointed to AI’s tremendous potential to benefit humanity. “We think it can be a printing press moment,” Altman told lawmakers. “We have to work together to make it so.”
While Altman is known as the driving force behind OpenAI’s commercial success, he was the one pleading for government intervention. “We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models,” he said, suggesting that licensing and testing requirements for AI should be considered. “US leadership is critical.”
Sen. Dick Durbin (D-Ill.) said major tech companies coming to Congress to ask to be regulated was a “historic” moment.
Two days later, OpenAI released a new ChatGPT app for the iPhone.
In June, a pair of lawsuits aimed at OpenAI hit the courts, with one claiming the company secretly used stolen data from unsuspecting users, and another accusing the company of stealing from copyrighted books.
The first lawsuit blamed OpenAI’s alleged data misuse on the 2019 restructuring that opened ChatGPT up to for-profit ventures. “As a result of the restructuring, OpenAI abandoned its original goals and principles, electing instead to pursue profit at the expense of privacy, security, and ethics. It doubled down on a strategy to secretly harvest massive amounts of personal data from the internet, including private information and private conversations, medical data, information about children, essentially every piece of data exchanged on the internet it could take -- without notice to the owners or users of such data, much less with any permission,” according to the lawsuit.
The first lawsuit went as far as to call for a temporary freeze on commercial use and development of OpenAI’s products until the company implemented more regulations.
The second lawsuit claimed ChatGPT was trained on books without permission from their authors, saying its machine learning dataset drew on books and other texts that were “copied by OpenAI without consent, without credit, and without compensation.”
In November, OpenAI and Microsoft were hit with another ChatGPT-related lawsuit over alleged misuse of nonfiction authors’ works. The lawsuit was led by Hollywood Reporter editor Julian Sancton, who said OpenAI copied tens of thousands of nonfiction books without permission. It was the first time Microsoft had been named in a ChatGPT lawsuit.
“While OpenAI and Microsoft refuse to pay nonfiction authors, their AI platform is worth a fortune,” Sancton’s attorney Justin Nelson said in a statement. “The basis of OpenAI is nothing less than the rampant theft of copyrighted works.”
In late October, President Joe Biden signed an executive order on AI, a landmark attempt at regulation that seeks to erect guardrails against harm while still allowing companies to reap rewards. With no AI bill on US lawmakers’ radar, the executive order provides the first regulations with teeth.
The order will use the Defense Production Act to require AI developers to share safety test results and other crucial information with the federal government. The National Institute of Standards and Technology (NIST) will create standards to ensure the safety of AI tools before they are released.
“AI is all around us,” Biden told reporters during the signing. “To realize the promise of AI and avoid the risk, we need to govern this technology.”
Liz Fong-Jones, field CTO at software company Honeycomb, told InformationWeek the executive order will provide needed guidelines for organizations. “I think that this is going to be a major step towards ensuring … we’re not accidentally or deliberately using models for discriminatory purposes,” she said.
In one of the most dramatic developments in the tech industry in 2023, OpenAI’s board of directors on Nov. 17 fired CEO Sam Altman, only to reinstate him days later after intense pressure from main investor Microsoft and most of OpenAI’s employees.
With twists and turns playing out very publicly, OpenAI cycled through two interim CEOs within 48 hours and explored bringing Altman back, before Microsoft announced it would hire Altman and other OpenAI leaders for its own in-house AI group.
It took an open letter signed by more than 700 of OpenAI’s 750 employees demanding Altman’s reinstatement -- or else they would all quit -- before the board finally caved and brought Altman back as CEO.
“The process through which you terminated Sam Altman and removed [OpenAI president and board chair] Greg Brockman from the board has jeopardized all this work and undermined our mission and company,” the letter said. “Your conduct has made it clear you did not have the competence to oversee OpenAI.”
The failed coup came to an end on Nov. 22, when the board announced it would bring Altman back. The deal saw the ouster of three board members, including OpenAI co-founder Ilya Sutskever, and the installation of new board chair Bret Taylor, the former Salesforce co-CEO, along with former US Treasury Secretary Larry Summers.
“I love OpenAI, and everything I’ve done over the past few days has been in service of keeping this team and its mission together… I’m looking forward to returning to OpenAI, and building on our strong partnership with Microsoft,” Altman wrote in a post on X.
The OpenAI board members who ousted Altman never gave a specific reason for the move but said the board had concluded “he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”
But a Reuters story on Nov. 22 cited unnamed sources who said several researchers had written a letter to the board prior to Altman’s ouster that warned of a potent artificial intelligence discovery they said could pose a threat to humanity.
Reuters reported that after its story broke, an internal message to OpenAI staff confirmed a project named “Q*” and confirmed the existence of the letter to the board written before the Altman firing.
Altman alluded to a recent breakthrough at OpenAI in remarks at the Asia-Pacific Economic Cooperation summit a day before the board fired him. “Four times now in the history of OpenAI, the most recent time was just in the last couple of weeks, I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime,” he told the audience.
While there are still lingering questions surrounding OpenAI's recent drama, one thing is clear: There will certainly be plenty of intrigue left to explore in ChatGPT's second year.
When the history books are written, ChatGPT’s public unveiling on Nov. 30, 2022 could either mark the beginning of a brave new world filled with promising artificial intelligence (AI) breakthroughs impacting science, medicine, work, and virtually every aspect of life… or the end.
AI is certainly not a new concept, and businesses have been using existing technologies for years. But ChatGPT’s launch enabled the public, for the first time, to generate human-like text with simple prompts, and a collective proverbial lightbulb turned on. Soon, businesses started conjuring real-world use cases that would give them a competitive advantage -- the seemingly limitless possibilities across multiple industries created excitement, as well as fear for the workers the technology could impact.
The quickly emerging technology spawned two camps: Those who wanted to test and advance GenAI for the benefit of humanity, and those who saw an existential threat to humanity. The latter were soon dubbed “doomers” as the GenAI hype cycle gathered ferocious momentum.
ChatGPT itself had reservations about rapid widespread use. When InformationWeek prompted the bot to write an article about itself, it wrote, "Overall, Chat GPT is a powerful technology that has the potential to transform many industries. However, its use also raises significant ethical concerns that need to be carefully considered before it is widely adopted."
And this hype cycle proved different from previously hyped advancements (such as the unfulfilled promise of the metaverse, or widespread augmented reality, which never really advanced beyond home-gadget novelty status). ChatGPT, with its mix of promises and potential dangers, captured the public imagination.
In just a year, OpenAI’s flagship product helped drive the company to an estimated valuation of $86 billion, and ChatGPT reached $1 billion in annual revenue through its paid commercial and business products.
Ron Guerrier, former HP CIO, tells InformationWeek that CIOs and other IT leaders will keep a close watch on further ChatGPT developments. “The saga happening at OpenAI is of great interest to many and especially the critical CIO role,” he says. “GenAI will undoubtedly change the landscape for all organizations and society at large, so the direction and pace are of utmost importance.”
In the following slides, InformationWeek takes a look at the first year of ChatGPT and the rise of GenAI for enterprise: