How to Innovate in a Privacy-Protective Way

Here’s some guidance for businesses looking to prioritize responsible data stewardship while navigating the rapidly evolving digital landscape.

Nubiaa Shabaka, Vice President, Chief Cybersecurity Legal and Privacy Officer

March 5, 2024


Artificial intelligence is proving to be a transformational technology, and its speed and efficiency are already changing the way we work and live. While there is still much to learn about how and where people will want to integrate this technology into their lives, it is clear that advances in AI can bring real benefits for businesses and for society at large, provided customers and other stakeholders trust that data is being used responsibly.

As chief cybersecurity legal and privacy officer at Adobe, I believe that with the right combination of technical and legal expertise, a global view on policy, and bold aspirations, businesses can think big and take on new challenges in the evolving AI and privacy landscape.

That said, I understand the pressure businesses are under to innovate faster than ever, and it can be challenging to strike the right balance between technological innovation and data privacy when moving quickly.  

Striking the Balance Between Innovation and Data Privacy 

Privacy and security are critical considerations in any choice about how we develop, use, and regulate the latest technologies, and the lessons learned in those disciplines transfer directly to AI.

Applying a privacy-by-design approach throughout the AI lifecycle, from development through use, while investing in emerging privacy-enhancing technologies such as federated learning and differential privacy, can be achieved by shifting how we think about data privacy within existing workflows.


To innovate in a privacy-conscious manner, organizations should leverage existing processes for evaluating privacy, such as privacy impact assessments and data protection impact assessments, and reinforce them to account for the new and increased risks posed by AI. Practicing data minimization and purpose limitation helps maintain data quality while respecting consumers' rights over their personal data.

For businesses working with AI vendors, conducting due diligence on those vendors' privacy and security practices is paramount. It is key to understand clearly whether the vendor is acting as a service provider/processor or as a business/controller, and to implement use limitations accordingly.

Businesses should employ people with relevant expertise and train their staff on the potential harms associated with the use of AI. It is also important to build cross-disciplinary, diverse teams to develop and review AI use cases. For example, at our company, AI-powered features with the highest potential ethical impact are reviewed by a diverse, cross-functional AI Ethics Review Board, on which I sit. Diversity of personal and professional backgrounds and experiences is essential to identifying potential issues from perspectives that a less diverse team might not see.


Privacy is never one-and-done. Companies should test AI systems regularly, assess them on an ongoing basis, keep abreast of new privacy-enhancing technologies, regulations, and best practices, and continuously update their privacy, security, and related AI practices accordingly.

There are several technologies that will further enhance privacy for businesses. Some of these include: 

  • Federated learning, which is an approach to machine learning that allows data to remain on local devices while still benefiting from collective intelligence. The models learn from decentralized data across devices and only the learned patterns are shared, offering a boost to privacy.  

  • Differential privacy, as applied to AI, introduces "statistical noise" (controlled randomness) into data or query results, allowing overall trends to be analyzed without compromising individual data points. 

  • Homomorphic encryption, a form of encryption that enables computations to be performed on encrypted data without decrypting it first. Advances in this area could enable AI models to learn from encrypted data, making it a powerful tool for privacy-preserving data analysis. 
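To make the federated learning idea concrete, here is a minimal sketch of federated averaging on a toy one-parameter model. The per-client datasets and hyperparameters are invented for illustration; the point is that each "device" trains locally and only the learned weight, never the raw data, is sent to the server for averaging.

```python
# Toy federated averaging: clients train locally on data drawn from
# y = 2x; only the learned weight leaves each device, never raw data.
def local_train(w, data, lr=0.1, epochs=20):
    """One client's on-device training via gradient descent."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad
    return w

clients = [  # hypothetical per-device datasets (never shared)
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (0.5, 1.0)],
    [(1.5, 3.0), (2.5, 5.0)],
]

global_w = 0.0
for _ in range(5):  # federated rounds
    local_ws = [local_train(global_w, d) for d in clients]  # on-device
    global_w = sum(local_ws) / len(local_ws)  # server sees weights only

print(round(global_w, 3))  # converges toward the true slope, 2.0
```

Real systems (and frameworks such as TensorFlow Federated) add secure aggregation, client sampling, and weighting by dataset size, but the privacy property is the same: the server aggregates model updates, not records.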
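The differential privacy bullet can likewise be sketched in a few lines. This toy example adds Laplace noise to a count query; the dataset, predicate, and epsilon value are invented for illustration. A count has sensitivity 1, so Laplace noise with scale 1/epsilon satisfies epsilon-differential privacy.

```python
import math
import random

def dp_count(values, predicate, epsilon):
    """Answer a count query with epsilon-differential privacy.

    A count changes by at most 1 when one record is added or removed
    (sensitivity 1), so Laplace noise of scale 1/epsilon suffices.
    """
    true_count = sum(1 for v in values if predicate(v))
    # Sample Laplace(0, 1/epsilon) via inverse-CDF of a uniform draw.
    u = random.random() - 0.5
    noise = -(1 / epsilon) * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

ages = [23, 35, 41, 29, 52, 38, 27, 44, 31, 60]  # hypothetical records
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
print(noisy)  # near the true count of 4, but perturbed
```

Smaller epsilon means more noise and stronger privacy; production systems also track a cumulative "privacy budget" across repeated queries.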


AI innovation should be approached with thoughtfulness and grounded in the principles of accountability, responsibility, and transparency (ART). I know from personal experience that the ART of innovation can be achieved in a privacy-conscious manner. Doing so will allow organizations to not only take advantage of transformational technology, but also to create and retain trust with customers and employees, ultimately giving those who do it well a competitive advantage. 


About the Author(s)

 Nubiaa Shabaka

Vice President, Chief Cybersecurity Legal and Privacy Officer, Adobe

Nubiaa Shabaka oversees Adobe’s global data protection and privacy programs and the legal aspects of Adobe’s global cybersecurity, information security, and information technology programs on an enterprise-wide scale. Ms. Shabaka also provides strategy for Adobe’s Data Governance program and sits on Adobe AI leadership groups. 
