AI: Friend or Foe?

AI adoption continues, further fueled by generative AI. As with all things tech, the hype needs to be tempered with realistic expectations of results.

Lisa Morgan, Freelance Writer

April 12, 2024


Prior to the pandemic, large consulting firms said their clients’ goal in adopting artificial intelligence was to replace some percentage of their workforce with AI and intelligent automation. Later, particularly given AI’s limitations, the message morphed into one of assistance: AI freeing workers from the shackles of boring, repetitive work and improving productivity. Now, some white-collar jobs are being replaced by AI for real. 

“AI is going to have unpredictable impacts and will undoubtedly replace some job functions. But in 2024, AI-related job cuts may be quite small,” says Brent Field, principal -- business consulting, artificial intelligence and automation practice at Infosys Consulting, in an email interview. “For the next few years, AI’s impact will be more like that of a team member with a unique specialty that boosts overall team productivity. AI is advancing quickly, but it is nowhere close to having some key elements of human intelligence that employees use every day.” 

The Generative AI Effect 

The explosive experimentation with ChatGPT, Bard and other large language models (LLMs) added another dimension seemingly overnight. Suddenly, average individuals could interact with AI to generate code, prose, and images. Meanwhile, some software vendors have been adding generative AI capabilities to their products, making the technology even more pervasive.

“We’re definitely not in the automation phase yet with generative AI. At the moment, it’s really a first draft creator, nothing more,” says Akhil Seth, head of open talent center of excellence at digital transformation solutions company UST. “Traditional AI has scaled in amazing ways, but with generative AI, it’s still very incremental -- a 2X increase in productivity at most and maybe less, like 50%. It still needs humans to moderate it, and it’s definitely not at the point where it can make decisions.”

UST is using generative AI to convert COBOL code to Java faster.
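As a rough illustration of what that kind of workflow can look like -- not UST’s actual pipeline; the model name, prompt, and helper function below are assumptions -- a team might wrap an LLM behind a small helper that produces a first-draft translation for a human engineer to review:

    # Hypothetical sketch of LLM-assisted COBOL-to-Java translation.
    # The model name and prompt wording are illustrative assumptions, not UST's tooling.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    def draft_java_from_cobol(cobol_source: str) -> str:
        """Return a first-draft Java translation that a human engineer still reviews."""
        response = client.chat.completions.create(
            model="gpt-4o",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "Translate the following COBOL program into idiomatic Java. "
                            "Preserve behavior and flag anything uncertain in comments."},
                {"role": "user", "content": cobol_source},
            ],
        )
        return response.choices[0].message.content

Treating the output as a draft that still goes through normal code review reflects Seth’s point that generative AI is, for now, “a first draft creator, nothing more.”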

“I think a massive gray area that’s going to become black and white soon is government regulation,” says Seth. “It’s important for legal teams, corporations and CIOs to keep their finger on the pulse of [that] because it will shape what can and can’t be done using a template, especially the consumer-facing journey to AI. In every implementation team, there needs to be someone who understands how [generative AI] fits with regulation and legalities.”

AI Means Different Things to Different Entities 

Global consulting firm Protiviti divides AI into three areas to address the full scope of AI capabilities: users who consume AI in the form of tools, data scientists and others who create AI capabilities, and those who control it -- the people responsible for governance, compliance, and risk management.

“The key thing to remember with responsible AI is establishing a culture of innovation with the guardrails of governance,” says Scott Laliberte, global leader of emerging technology at Protiviti. “One of the first things to do is establish a governance committee and a steering committee with the right players at the table -- business and IT folks who can start to think about the use cases.”

The governance team should include legal, compliance, security and other functions because they can help think through the risks, competitive advantages and growth opportunities. 

“One of the challenges is applying the framework controls to generative AI models. For instance, how do you [minimize] bias and [ensure] transparency when you really have no insight into that? How do you ensure governance over the training data of a gen AI LLM when you have no control over that, because that’s done by Microsoft or whoever?” Laliberte says. “You really need that steering committee looking at it holistically and considering all the aspects of it for the business.”  

Many businesses have AI training in place for all employees that covers AI basics, how to use AI in their jobs, acceptable use, and confidentiality. And while many policies are already in place, employees need to be reminded of them from time to time to help reduce risk.

According to a recent Deloitte survey, current generative AI efforts remain more focused on efficiency, productivity, and cost reduction than on innovation and growth. Those reporting “high levels of AI expertise” were more likely to say they are using AI strategically. 

While the report includes several insights, the one best summarizing the market today is this: “Deloitte’s experience suggests that most organizations have yet to substantially address the talent and workforce challenges likely to arise from large-scale generative AI adoption. A likely reason for this is that many leaders don’t yet know what generative AI’s talent impacts will be, particularly with regard to which skills and roles will be needed most.” 

About the Author

Lisa Morgan

Freelance Writer

Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other content to many technology, business, and mainstream publications and sites, including The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.
