ChatGPT Use Sparks Code Development Risks

While AI-generated code offers substantial efficiency benefits in the development process, it must be tested to minimize risk.

Nathan Eddy, Freelance Writer

November 24, 2023

4 Min Read

At a Glance

  • IT leaders should prioritize human oversight and continuous monitoring, and combine these measures with testing and code reviews.
  • Organizations should leverage AI-generated code as a starting point but tap human developers to review and refine the code.
  • A survey of 500 US developers found more than two-thirds (67%) of respondents admitted to pushing code to production without testing.

Pushing AI-generated code without proper testing can lead to several risks and consequences for organizations.

Untested code may introduce bugs and errors that lead to crashes and poor performance. It may also fall short of quality standards and best practices, or fail to adhere to regulatory and compliance requirements, potentially causing legal and financial repercussions.

To ensure high-quality code, IT leaders should prioritize human oversight and continuous monitoring, and combine these measures with thorough testing and code reviews so that AI-generated code aligns with security protocols.

“Conduct testing when, where, and how your developers work,” Scott Gerlach, co-founder and CSO of StackHawk, says. “Consider testing requirements early on and involve all key stakeholders in the process design to ensure buy-in.”
He recommends making testing an integral part of the software development lifecycle by automating testing in continuous integration and continuous delivery (CI/CD) while developers are working on the code.
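
In practice, that can be as simple as a gate step in the pipeline that runs the test suite on every change and blocks the merge when anything fails. The sketch below shows one illustrative way to wire such a gate for a Python codebase tested with pytest; the specific checks are assumptions for illustration, not StackHawk's tooling.

    # ci_gate.py -- illustrative CI gate, assuming a Python codebase tested with pytest.
    # Each check runs as part of the pipeline; a non-zero exit code fails the build
    # and blocks the merge, so untested code never reaches production unnoticed.
    import subprocess
    import sys

    CHECKS = [
        ["pytest", "--maxfail=1", "-q"],    # unit and integration tests
        ["python", "-m", "pip", "check"],   # verify installed dependencies are consistent
    ]

    def main() -> int:
        for cmd in CHECKS:
            print(f"Running: {' '.join(cmd)}")
            if subprocess.run(cmd).returncode != 0:
                print(f"Check failed: {' '.join(cmd)}", file=sys.stderr)
                return 1
        print("All checks passed.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())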

“Educate your developers through targeted training based on patterns within the context of their code and importance to the business,” he adds. “You also need to provide self-service tooling that helps developers understand the issues that arise, why they’re important, and how to recreate the problem so they can fix it and make and document decisions.”


Jim Scheibmeir, Gartner senior director analyst, explains via email that using code from AI coding assistants carries similar risks to developers copying and pasting code from Stack Overflow or other internet resources.

“We need to use AI coding assistants to generate code documentation, so understanding and knowledge of the solution is improved as well as accelerated,” Scheibmeir says.

Human-Centric Code Review Processes

Randy Watkins, CTO at Critical Start, advises organizations to build their own policies and methodology for incorporating AI-generated code into their software development practices.

“In addition to some of the standard coding best practices and technologies like static and dynamic-code analysis and secure CI/CD practices, organizations should continue to monitor the software development and security space for advancements in the space,” he told InformationWeek via email.

He says organizations should leverage AI-generated code as a starting point but tap human developers to review and refine the code to ensure it meets standards.

John Bambenek, principal threat hunter at Netenrich, adds that leadership needs to “value secure code” and make sure that, at a minimum, automated testing is part of all code going to production.


“Ultimately, many of the risks of generative AI code can be solved with effective and thorough mandatory testing,” he noted in an email. 

He explains that, as part of the CI/CD pipeline, organizations should ensure mandatory testing is performed on all production commits and that routine, comprehensive assessments are run against the entire codebase.

“Maintain an inventory of used software libraries to enable checking for updates or inclusion of typosquatted packages, and secrets management to keep keys and credentials out of code repositories,” Bambenek says.
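
The article doesn't name specific tools, but a minimal version of that inventory-and-secrets check might look like the sketch below, which assumes a hypothetical approved-package list and a simple regex for credential-like strings; production setups would typically lean on dedicated software composition analysis and secrets-scanning tools instead.

    # supply_chain_check.py -- illustrative sketch, not Netenrich's method: compare
    # declared dependencies against an approved inventory (to catch unexpected or
    # typosquatted packages) and flag credential-like strings committed to the repo.
    import pathlib
    import re
    import sys

    APPROVED = {"requests", "flask", "sqlalchemy"}  # hypothetical approved inventory
    SECRET_PATTERN = re.compile(
        r"(api[_-]?key|secret|password|token)\s*=\s*['\"][^'\"]+['\"]", re.I
    )

    def check_dependencies(requirements: pathlib.Path) -> list[str]:
        problems = []
        for raw in requirements.read_text().splitlines():
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            name = re.split(r"[=<>~\[; ]", line, maxsplit=1)[0].lower()
            if name and name not in APPROVED:
                problems.append(f"Unapproved or possibly typosquatted package: {name}")
        return problems

    def check_secrets(root: pathlib.Path) -> list[str]:
        problems = []
        for path in root.rglob("*.py"):
            for lineno, text in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if SECRET_PATTERN.search(text):
                    problems.append(f"Possible hardcoded credential: {path}:{lineno}")
        return problems

    if __name__ == "__main__":
        issues = check_dependencies(pathlib.Path("requirements.txt")) + check_secrets(pathlib.Path("."))
        for issue in issues:
            print(issue)
        sys.exit(1 if issues else 0)  # non-zero exit fails the pipeline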

Carving Out a Path of Clarity

A recent Sauce Labs survey of 500 US developers found more than two-thirds (67%) of respondents admitted to pushing code to production without testing, and six in 10 developers surveyed admitted to using untested code generated by ChatGPT.

Jason Baum, director of community at Sauce Labs, says it’s about leadership stepping up and carving out a path of clarity amidst the rush.

“With AI-generated code, we’re often flying blind on context and functionality, making thorough testing not just prudent, but essential to dodge financial and reputational bullets,” he explains. “When we set crystal-clear expectations, we’re not just fast-tracking code to market, we’re championing a culture where quality and security are revered, not compromised.”


Baum says balancing AI efficiency with code quality is like expecting a fresh brew to be served straight from a coffee bean -- skipping the grind and brew is a no-go.

“Just as journalists wouldn’t let ChatGPT publish an article without a review, we can’t let AI-generated code slide into production without a thorough check,” he explains. “It’s about tutoring our developers and having a stout review net to catch the unseen, ensuring our code races to the finish line both swiftly and securely.”

Josh Thorngren, head of developer advocacy for ForAllSecure, agrees that quality and security testing should be made as frictionless as possible and should avoid taking developers out of the code/build/ship workflow.

For example, if an organization runs a security testing tool during the CI process, developers should get the results of that tool via their issue tracker or CI tool -- they shouldn’t have to log into the security product to see results.
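
One illustrative way to do that is a small pipeline step that translates the scanner's report into annotations developers already see on the pull request. The sketch below assumes the scanner emits a SARIF-format report and that the CI system understands GitHub-style error annotations; neither detail comes from the article.

    # annotate_findings.py -- illustrative sketch: surface security findings in the
    # CI tool and pull request instead of making developers log in to the scanner.
    # Assumes a SARIF report (a common scanner output format) and a CI system that
    # renders GitHub-style "::error" annotations inline on the changed files.
    import json
    import sys

    def main(report_path: str) -> int:
        with open(report_path) as f:
            sarif = json.load(f)

        findings = 0
        for run in sarif.get("runs", []):
            for result in run.get("results", []):
                message = result.get("message", {}).get("text", "Security finding")
                location = (result.get("locations") or [{}])[0].get("physicalLocation", {})
                file_path = location.get("artifactLocation", {}).get("uri", "unknown")
                line = location.get("region", {}).get("startLine", 1)
                # Emit an annotation the CI tool renders inline on the pull request.
                print(f"::error file={file_path},line={line}::{message}")
                findings += 1

        return 1 if findings else 0  # fail the job when findings exist

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "scan-results.sarif"))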

“We also must create a culture where the balance between quality and speed doesn’t always lean towards speed,” he adds. “These aren’t new challenges, but the speed of AI code generation magnifies their impact on security, stability and quality, increasing the challenge of each.”

About the Author(s)

Nathan Eddy

Freelance Writer

Nathan Eddy is a freelance writer for InformationWeek. He has written for Popular Mechanics, Sales & Marketing Management Magazine, FierceMarkets, and CRN, among others. In 2012 he made his first documentary film, The Absent Column. He currently lives in Berlin.

