The Future of Life Institute sparked debate with its AI moratorium letter. What would be a good premise for reviewing AI usage in the world?

Pierre DeBois, Founder, Zimana

May 1, 2023


Managers are exploring the capabilities of AI platforms at a rapid pace, but using generative artificial intelligence can feel like that mystery holiday or birthday gift that has no clear purpose. What should we, the world, do with it? The Future of Life Institute (FLI) shared its suggestion via a well-publicized open letter to artificial intelligence labs calling for an immediate pause of at least six months on training AI systems “more powerful than GPT-4.” But it may be IT professionals across the globe who are best positioned to rally the right people and discover how to make the most of generative AI’s gifts.

A Moratorium for Generative AI

More than a month has passed since the FLI published its letter, which has gathered over 26,000 signatures to date. It was written to arrest unfettered progress so that meaningful guidelines, established through government regulation or cooperative agreement, can determine which tasks generative AI should manage across every aspect of life.

The moratorium was meant to address what the FLI perceives as a failure by platform leaders to orchestrate responsible development. The FLI believes people are applying Dunning-Kruger-style overconfidence to AI-managed tasks without truly understanding how to manage the underlying programming ethically.

The FLI further calls for a body of independent experts to jointly establish and routinely audit a set of shared safety protocols for advanced AI design and development. They believe “these protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.”

But the moratorium's design overlooks real-world challenges, raising two critical questions.

First, is six months enough time to unilaterally form safety protocols that are safe beyond a reasonable doubt?

Second, and more critical in the long term: Who determines what qualifies as “risky,” and what yardstick measures the degree of risk?

The FLI places the emphasis on AI labs and independent experts leading through their technology. Yet technical expertise is not the challenge; the real need is broader insight into the outcomes AI usage produces. An emphasis on technology solving public problems overlooks other important domains where AI usage should be questioned.

To craft effective responses to non-technical and social issues, an advisory group must incorporate stakeholders with diverse expertise from public policy and other impacted domains. Insights from these domains make it easier to identify what constitutes risky research and harmful consequences of usage, and they yield guidelines more sophisticated than the FLI letter proposes.

Data ethicists have been raising similar suggestions for a long time. Brandeis Marshall, author of the book Data Conscience, noted what decision makers are recognizing -- that technologists will not be the sole source of solutions. She says in her Medium post Lost in AI: “There’s an implicit and explicit trust given to the tech ‘leaders,’ which amount to tech solutionism. Tech solutionism is the idea that tech can and will solve all problems.” 

Notable tech figures with hands-on experience in large language model (LLM) development have offered similar cautions. Timnit Gebru, former Google lead data scientist, was vocal about the limitations of LLMs long before her well-publicized ouster from Google. She co-wrote an op-ed in the Washington Post and co-authored a paper on the topic.

Calling for varied domain viewpoints implicitly asks the rapidly emerging AI era to move beyond its software roots. AI development is, after all, another form of software development, and for years developers applied a beta-level philosophy to launching software-based products and services -- “move fast and break things.” Amid rapid AI adoption, people must now reevaluate that beta philosophy at all levels of operations. Myopic tech solutionism can break services that government agencies -- and everyday citizens -- rely upon.

Where Should IT Professionals Get Involved?

IT professionals should leverage their roles to guide organizations away from overinvesting in tech adoption and toward outreach for domain expertise. Doing so fosters technology viewpoints that acknowledge where tech adoption, including generative AI, falls short and where other solutions should be applied. Organizations facing new tech and industry trends typically turn to their IT teams for immediate answers. IT professionals, who usually vet technological answers against management and workforce concerns, are well equipped to know when outreach to other experts is needed.

Tech adoption differs by organizational structure and industry -- B2B firms and government agencies have traditionally adopted new technology slowly. But AI is quickly being infused into workflows of all kinds. AI and data ethics expert Ravit Dotan noted in a recent AI ethics presentation that AI spending has increased across organizations of all types, including government agencies.

This level of global investment has prompted governments to respond with regulation. Italy, for example, temporarily banned ChatGPT in response to a data privacy investigation into whether the use of Italian users’ data breached privacy regulations. ChatGPT creator OpenAI is expected to work with regulators to resolve the concerns. Meanwhile, broader regulation is on its way: the European Union has already circulated its draft AI legislation.

In the United States, Senate Majority Leader Chuck Schumer (D-N.Y.) is seeking government and corporate feedback on a four-point legislative framework covering transparency of training data, model development, and ethical decision making. Senator Schumer is not the only source of US legislation. Brandeis Marshall compiled a terrific list of several US bills already in development, explaining that the groundwork laid for data privacy has informed the groundwork for AI legislation. IT professionals should pay attention to how the latest legislation is taking shape around their technology.

The widespread adoption of generative AI among millions of people and businesses represents a tremendous technological shift: the benefits of new technology now flow from specialists to society at large. But that paradigm shift also ushers in an urgent need to establish shared responsibility among all users.

Thus, a moratorium must do more than declare a period for drafting guidelines. It must encourage technologists such as IT professionals to act as agents who gather a blend of experiences into ethics and usage initiatives. Doing so can move people across domains beyond philosophical debates toward real considerations for their organizations and the world around them. If AI is poised to occupy an even more central place in global infrastructure, then the true gift will be bringing together the right resources to think hard about who will control it.


About the Author(s)

Pierre DeBois

Founder, Zimana

Pierre DeBois is the founder of Zimana, a small business analytics consultancy that reviews data from Web analytics and social media dashboard solutions, then provides recommendations and Web development action that improves marketing strategy and business profitability. He has conducted analysis for various small businesses and has also provided his business and engineering acumen at various corporations such as Ford Motor Co. He writes analytics articles for AllBusiness.com and Pitney Bowes Smart Essentials and contributes business book reviews for Small Business Trends. Pierre looks forward to providing All Analytics readers tips and insights tailored to small businesses as well as new insights from Web analytics practitioners around the world.
