Complying With GDPR in the Age of AI

The rise of artificial intelligence brings both opportunity and risk. Yet many of AI’s challenges can be mitigated if businesses simply leverage the GDPR as a framework.

Nicky Watson, Founder and Chief Architect, Cassie

September 13, 2024


Meta’s been making headlines since its inception -- and this year is no exception. From accusations of massive, illegal data collection of EU users to the company suspending the use of generative artificial intelligence (GenAI) in Brazil after the country's data protection authority issued a preliminary ban objecting to its new privacy policy, there’s been quite a lot of negative press surrounding the technology giant.  

When I listen to or read the news, there is often a shrill tone around AI. If I had to pinpoint the reason, I think it’s the juxtaposition we face when considering this still-nascent technology.  

On the one hand, AI enables us to imagine a world of miraculous medical and scientific achievement. A world where speech-disabled individuals can talk, and cancer research is catapulted light years into the future. But, on the other hand, we intuitively fear increasingly high-stakes environments, where AI gone wrong has the potential to widen discrimination gaps, create safety issues, and misdiagnose serious illnesses.  

With everyday life hanging so delicately in the balance, where do we even begin to consider how to control this technological innovation with such an enormous impact? For me, it’s the European Union’s General Data Protection Regulation (GDPR).  


One reason the GDPR took so long to draft and finalize is that its authors were quite clever, thinking hard about how to make sure the language would hold true as technology continued to develop. There is often a real tension between legislation and technology, because one always and inevitably develops faster than the other. But the genius of the GDPR’s authors was that they anticipated this. While that foresight has left some of the regulation’s language open to interpretation, it also means the regulation applies readily to AI -- despite the technology only surging into popularity in the last 18 months, whereas the GDPR took effect six years ago.  

We all know it to be true: AI has grown exponentially -- to the point where AI now writes other AI. But as the technology has scaled at this breakneck pace, human knowledge and understanding have not kept up. So much so that hardly anybody at a company can explain how the algorithms work, because they have become so complicated and are trained on such massive amounts of information. Nobody can answer the question: “Well, why was that automated decision made?” 

Herein lies the problem: Businesses leveraging AI and building their own sophisticated chatbots (like Meta) can say they trained their AI and that the technology can now make automated decisions. But a fundamental piece of the GDPR is a user’s right to challenge an automated decision, because the information that decision is based on could be wrong. And if companies can’t properly explain how a decision was made, they cannot fulfill their legal obligation under the GDPR.  


The reality that businesses the world over face is this: The combination of operational pressure to move fast and the uncertainties inherent in any new entry into an evolving field has the potential to increase internal friction between privacy professionals who urge caution and business drivers who want quick AI wins.  

Meta has a real opportunity at this juncture to lead by example and prioritize its customers. By working toward better compliance with the GDPR in the age of AI, it can change the narrative and set the tone for big tech by prioritizing consumer privacy in a way that largely hasn’t been done so far. I implore Meta -- and every other multinational corporation: Prioritize your customers.   

And on that note, I leave you with this: Compliance is good -- ethics are better. Do it for yourself (prioritizing your customers is proven to create stronger relationships, increased brand loyalty, and higher sales). But on a moral level, do it for the millions of people placing their trust in you.  



About the Author

Nicky Watson

Founder and Chief Architect, Cassie

Nicky Watson is the founder and chief architect of Cassie, a consent and preference management platform. After a career spanning software design, data mining, and digital marketing, and having pioneered the use of several marketing technologies for multiple enterprise clients, Nicky built and brought Cassie to market. She retains direction of all development work on the product, offering expert guidance that ensures Cassie stays ahead of the technological, business, and legislative challenges its clients face in navigating data privacy and consumer preferences. 

