The Intellectual Property Risks of GenAI

GenAI use is rampant, and enterprises are trying to scale it, but in the race to become more competitive, there are important IP risks to consider.

Lisa Morgan, Freelance Writer

November 1, 2024

[Image: wildfire illustration. Credit: Imar de Waard via Alamy Stock]

Generative AI’s wildfire adoption is both a blessing and a curse. On one hand, many people are using GenAI to work more efficiently, and businesses are trying to scale it in an enterprise-class way. Meanwhile, the courts and regulators aren’t moving at warp speed, so companies need to be very smart about what they’re doing or risk intellectual property (IP) infringement, leakage, misuse and abuse.  

“The law is certainly behind the business and technology adoptions right now, so a lot of our clients are entering into the space, adopting AI, and creating their own AI tools without a lot of guidance from the courts, in particular around copyright law,” says Sarah Bro, a partner at law firm McDermott Will & Emery. “I’ve been really encouraged to see business and legal directives help mitigate risks or manage relationships around the technology and use, and parties really trying to be proactively thinking about how to address things when we don’t have clear-cut legal guidance on every issue at this point.” 

Why C-Suites and Boards Need to Get Ahead of This Now 

GenAI can lead to four types of IP problems: copyright, trademark, and patent infringement, and trade secret misappropriation. Thus far, more attention has been paid to the business competitiveness GenAI promises than to the potential risks of its usage, which means that companies are not managing those risks as adeptly as they should. 


“The C-suite needs to think about how employees are using confidential and proprietary data. What gives us a competitive advantage?” says Brad Chin, IP partner at the Bracewell law firm. “Are they using it in marketing for branding a new product or process? Are they using generative AI to create reports? Is the accounting department using generative AI to analyze data they might get from a third party?” 

Historically, intellectual property protection has involved non-disclosure agreements (NDAs), and that has not changed. In fact, NDAs should cover GenAI. However, according to Chin, using the company’s data, and perhaps others’ data, in a GenAI tool raises the question of whether the company’s trade secrets are still protected. 

“We don’t have a lot of court precedent on that yet, but that’s one of the considerations courts look at in a company’s management of its trade secrets: what procedures, protocols, and practices they put in place. So it’s important for C-suite executives to understand that the risk is not only the information their employees are putting into AI, but also the AI tools that their employees may be using with respect to someone else’s information or data,” says Chin. “Most company NDAs and general corporate agreements don’t have provisions that account for the use of generative AI or AI tools.” 


Some features of AI development make GenAI a risk from a copyright and confidentiality standpoint. 

“To train machine learning models properly, you need a lot of data. Most savvy AI developers cut their teeth in academic environments, where they weren’t trained to consider copyright or privacy. They were simply provided public datasets to play with,” says Kirk Sigmon, an intellectual property lawyer and partner at the Banner Witcoff law firm, in an email interview. “As a result, AI developers inside and outside the company aren’t being limited in terms of what they can use to train and test models, and they’re very tempted to grab whatever they can to improve their models. This can be dangerous: It means that, perhaps more than other developers, they might be tempted to overlook or not even think about copyright or confidentiality issues.” 

Similarly, the art and other visual elements used in generative AI tools such as Gemini and DALL-E may be copyright protected, and logos may be trademark protected. GenAI could also result in patent-related issues, according to Bracewell’s Chin. 


“A third party could get access to information inputted into generative AI, which comes up with five different solutions,” says Chin. “If the company that has the information then files patents on that technology, it could exclude or preclude that original company from getting that part of the market.” 

Boards and C-Suites Need to Prioritize GenAI Discussions 

Boards and C-suites that have not yet had discussions about the potential risks of GenAI need to start now. 

“Employees can use and abuse generative AI even when it is not available to them as an official company tool. It can be really tempting for a junior employee to rely on ChatGPT to help them draft formal-sounding emails, generate creative art for a PowerPoint presentation and the like. Similarly, some employees might find it too tempting to use their phone to query a chatbot regarding questions that would otherwise require intense research,” says Banner Witcoff’s Sigmon. “Since such uses don’t necessarily make themselves obvious, you can’t really figure out if, for example, an employee used generative AI to write an email, much less if they provided confidential information when doing so. This means that companies can be exposed to AI-related risk even when, on an official level, they may not have adopted any AI.” 

Emily Poler, founding partner at Poler Legal, wonders what would happen if the GenAI platform a company uses becomes unavailable. 

“Nobody knows what’s going to happen in the various cases that have been brought against companies offering AI platforms, but one possible scenario is that OpenAI and other companies in the space have to destroy the LLM they’ve created because the LLMs and/or the output from those LLMs amounts to copyright infringement on a massive scale,” says Poler in an email interview. “Relatedly, what happens to your company’s data if the generative AI platform you’re using goes bankrupt? Another company could buy up this data in a bankruptcy proceeding and your company might not have a say.” 

Another point to consider is whether the generative AI platform can use a company’s data to refine its LLMs, and if so, whether there are any protections against the company’s confidential information being leaked to a third party. There’s also the question of how organizations will ensure employees don’t rely on AI-generated hallucinations in their work, she says. 

Time to Update Policies 

Bracewell’s Chin recommends doing an audit before creating or updating a policy so it’s clear how and why employees are using GenAI and what they are trying to achieve. 

“The audit should help you understand the who, what, why, when, and where questions and then putting best practices [in place] -- you can use it, you can’t use it, you can use it with these certain restrictions,” says Chin. “Education is also really important.” 
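For illustration, here is a minimal sketch of what one record from such an audit might look like in code. The field names, the three-tier policy outcome, and the example entry are assumptions for demonstration, not a standard Chin or anyone else prescribes:

```python
# A minimal sketch of the "who, what, why, when, where" audit record
# described above. All field names and the policy tiers are illustrative
# assumptions a company would adapt to its own audit process.
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class PolicyDecision(Enum):
    ALLOWED = "allowed"          # approved for this use case
    RESTRICTED = "restricted"    # allowed only with stated controls
    PROHIBITED = "prohibited"    # not allowed for business use

@dataclass
class GenAIUsageRecord:
    who: str             # employee or team using the tool
    what: str            # tool and task, e.g. "chatbot: draft marketing copy"
    why: str             # business purpose or outcome sought
    when: datetime       # date of observed or reported use
    where: str           # department or system where the tool is used
    data_sensitivity: str = "unknown"  # e.g. public / internal / confidential
    decision: PolicyDecision = PolicyDecision.RESTRICTED
    restrictions: list[str] = field(default_factory=list)

# Hypothetical entry gathered during the audit:
record = GenAIUsageRecord(
    who="accounting analyst",
    what="GenAI chatbot: summarize third-party financial data",
    why="speed up quarterly reporting",
    when=datetime(2024, 10, 15),
    where="finance department",
    data_sensitivity="confidential",
    restrictions=["no client identifiers in prompts", "human review of output"],
)
```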

Jason Raeburn, a partner in the litigation department of law firm Paul Hastings, says the key point is for CIOs and the C-suite to really engage with and understand the specific use cases for GenAI within their particular industry to assess what risks, if any, arise for their organization. 

“As is the case with the use of technology within any large organization, successful implementation involves a careful and specific evaluation of the tech, the context of use, and its wider implications including intellectual property frameworks, regulatory frameworks, trust, ethics and compliance,” says Raeburn in an email interview. “Policies really need to be tailored to the needs of the organization, but at a minimum, they should include a ‘GenAI in the workplace’ policy so there is clarity as to what the employer considers to be appropriate and inappropriate use for business purposes.” 

Zara Watson Young, co-founder and CEO at the Watson & Young IP law firm, says the board, CEO and C-suite should regularly discuss how GenAI affects their IP strategies.  

“These conversations should identify potential gaps in current policies, keep everyone informed about shifts in the legal landscape and ensure that the team understands the nuances of AI’s impact on copyright and trademark laws,” says Watson in an email interview. “Equally important are discussions with counsel, focusing on developing robust IP policies for AI usage, ensuring compliance and implementing enforcement strategies to protect the company’s rights.” 

In the absence of concrete regulations and standards of practice, companies should develop their own policies based on how they use generative AI. According to Poler Legal’s Poler, these policies should be split into two types so they address both sides of the generative AI process: data gathering and training, and output generation. 

“Policies for data gathering and training need to be clear on how and what data is used, whether any third-party involvement is part of that process, the vetting and monitoring process for the data, how the data is stored, and how the company is protecting and securing that data,” says Poler. “The biggest concerns are privacy, security and infringement. These policies need to be up to date with all regulations, especially for international usage.” 

Companies using their own datasets and models can better vet, monitor, and control them. Companies using third-party datasets and models, however, need to do due diligence and ensure transparency, security, legal compliance, and ethical usage, such as removing bias. 
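To make that record-keeping concrete, here is a minimal sketch of a dataset provenance record that blocks training until vetting is complete. All field names and the approval rule are illustrative assumptions, not a statement of any firm’s practice:

```python
# A minimal sketch of the data-gathering side of the two policy types
# described above: a provenance record that must be complete before a
# dataset is cleared for model training. Fields and rules are hypothetical.
from dataclasses import dataclass

@dataclass
class DatasetProvenance:
    name: str
    source: str                   # where the data came from
    license_terms: str            # e.g. "company-owned", "vendor contract", "unknown"
    contains_personal_data: bool
    third_party_involved: bool
    vetted_by: str | None         # reviewer who checked rights and privacy
    storage_location: str         # where and how the data is held

    def cleared_for_training(self) -> bool:
        # Block training on data with unknown rights or no completed review.
        if self.license_terms.lower() == "unknown":
            return False
        if self.vetted_by is None:
            return False
        # Personal data needs extra handling (consent, regulation) before use.
        return not self.contains_personal_data

ds = DatasetProvenance(
    name="support-tickets-2023",
    source="internal CRM export",
    license_terms="company-owned",
    contains_personal_data=True,
    third_party_involved=False,
    vetted_by=None,
    storage_location="encrypted object store, access-controlled",
)
assert not ds.cleared_for_training()  # fails: unvetted and contains personal data
```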

“Policies for output generation should be centered around monitoring. Companies should develop policies that [specify] how the monitoring is done for privacy and intellectual property concerns,” says Poler. “These policies need to contain instructions and procedures on how outputs are [reviewed] before they are ultimately used, with checklists of important criteria to detect confidential information and protect its intellectual property.” 
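A hedged sketch of what such an output-review checklist could look like in practice follows; the checklist items and patterns are placeholders a company would replace with its own confidential markers:

```python
# A minimal sketch of an output-review step for the monitoring policies
# described above: run each GenAI output through a checklist before use.
# The patterns below are hypothetical examples, not a complete screen.
import re

CHECKLIST = {
    "internal project codename": re.compile(r"\bproject\s+(nightingale|orion)\b", re.I),
    "possible SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal-only marking": re.compile(r"\b(confidential|internal use only)\b", re.I),
}

def review_output(text: str) -> list[str]:
    """Return the checklist items an output fails; an empty list means it passes."""
    return [item for item, pattern in CHECKLIST.items() if pattern.search(text)]

draft = "Per Project Orion, revenue grew 12% (internal use only)."
failures = review_output(draft)
if failures:
    print("Hold for human review:", ", ".join(failures))
```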

Banner Witcoff’s Sigmon says companies should establish policies that strike a careful balance between the usefulness of AI-enabled tools and the liability risks they pose. For instance, employees should be strongly discouraged from using any external AI tools that have not been fully tested and approved by their employer. 

“Such tools pose both the risk of copyright infringement if, for example, they generate infringing content, and a risk of confidential information loss, such as if the employee discloses confidential information to the AI and that information is stored, used for future training, or the like,” says Sigmon. “In turn, this means that if a company decides to use an AI tool, it should understand that tool deeply: how it operates, what data sets were used to train it, who assumes liability if copyright infringement occurs and/or if sensitive data is exfiltrated, and [more].” 
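One way to operationalize that approval requirement is an allowlist of vetted AI endpoints, checked at an egress proxy or similar control point. Here is a minimal sketch; the hostnames and the enforcement hook are hypothetical assumptions:

```python
# A minimal sketch of enforcing "approved tools only": an egress check
# against an allowlist of vetted AI endpoints. The hostnames below are
# placeholders, and where this runs (proxy, DLP gateway) is an assumption.
from urllib.parse import urlparse

APPROVED_AI_HOSTS = {
    "genai.internal.example.com",   # company-hosted, reviewed deployment
    "api.approved-vendor.example",  # vendor with a signed data-use agreement
}

def is_approved_ai_endpoint(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_HOSTS

for url in ("https://genai.internal.example.com/v1/chat",
            "https://chat.unvetted-tool.example/ask"):
    verdict = "allow" if is_approved_ai_endpoint(url) else "block and log"
    print(f"{verdict}: {url}")
```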

Bottom Line 

The wildfire adoption and use of GenAI has outpaced sound risk management. Organizational leaders need to work cohesively to ensure that GenAI usage is in the company’s best interests and that the potential risks and liabilities are understood and managed accordingly.  

Check to see whether your company’s policies are up to date. If not, the time to start talking internally and with counsel is now. 

About the Author

Lisa Morgan

Freelance Writer

Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.
