One small step for tech, one giant leap for privacy? It is up to our society to create clear boundaries that will enhance the positive and control the negative.

Guest Commentary

January 20, 2020

Image: metamorworks - stock.adobe.com

Last May, San Francisco became the first major city in the U.S. to ban the use of facial recognition software by the government, police and public sector agencies. Since then, we've seen activists scan lawmakers' faces in protest in Washington D.C. to get the attention of Congress, and other cities taking action, such as Portland, Oregon, which now proposes a ban that would extend to private businesses -- making it the strictest regulation in the country.

Yet facial recognition innovation is all around us. Casinos have been using facial recognition tools for several years to spot high rollers and card counters. On a friendlier note, Facebook's DeepFace enables better image-based features, like tagging, and can identify people with near-human accuracy based on no fewer than 120 million parameters. Google Photos also demonstrates impressive results, and Apple's newest iPhones (X or newer) feature Face ID, which, despite needing some improvements, is still considered a strong implementation of this technology.

As the world marches forward, it seems that facial recognition innovation is met with growing suspicion from users, driven by a fear of mass surveillance and a Big-Brother-like dystopia coming to fruition. But in the burgeoning fight for privacy, why the focus on facial recognition technology specifically?

Our face plays a crucial part in our sense of self, and perhaps that's why we are more alarmed at the idea of technology capturing our facial features and using them for unknown purposes or in a poorly secured manner. But the reality is that a face is no different from any other form of personal information we expose to the world. For example, do we consider our face more important than our bank account or Social Security number? Probably not.

At the end of the day, responsibly handled face recognition technologies are much safer than an unsecured online registration form. So how can we embrace a healthy attitude towards one of the more exciting innovations of our time? Here are a few thoughts on the issue.

Always a critic: How Black Mirror and harsh criticism can stifle innovation

Today, the famously dark show 'Black Mirror' highlights facial recognition technology in numerous episodes, and words like "creepy" and "scary" are often used to describe even the most useful features involving this advanced technology. This part of our lives is so personal that it's impossible to get these ideas across without making potential users feel uncomfortable. For example, the AI-generated faces website "This Person Does Not Exist" demonstrates this well through the chilling effect it has on those who see the realistic yet completely fake, technology-generated faces on the site.

Every new technology has a market to educate and its share of critics to face (no pun intended). But it seems as though facial recognition tools are often met with harsher criticism than others due to the very personal nature of the face. For example, DeepFace was dubbed "creepy," both Google and Apple had to fix bugs over racist algorithms, and Amazon's software falsely identified members of Congress as criminals. It seems that when our face is involved, the potential risk of leaked or manipulated data becomes clearer and more frightening to users.

While these criticisms are important to address, let's not lose sight of the potential impact facial recognition can have. For example, facial recognition is being used to diagnose patients with rare genetic diseases such as DiGeorge syndrome; it's being used to help blind people communicate better by detecting when others are smiling; not to mention its use in increasing security and convenience at airports, in retail, in schools, at ATMs and so on. It's important to familiarize ourselves with the many positive influences of this innovative new frontier; perhaps then we'll become less afraid and more excited.

Saving face: The dangers are real -- so now what?

While the facial recognition debate may be negatively skewed, it's not without good reason. DeepFake technologies pose a long line of legal, political, ethical and social threats. Just a few months ago, New York legislators updated local privacy laws to prohibit the "use of a digital replica to create sexually explicit material in an expressive audiovisual work."

Additionally, even the world’s biggest tech companies cannot promise that our face-related data will be 100% secure. While tech giants such as Google have made attempts to fight the spread of DeepFake videos, the industry as a whole needs to put more resources toward these efforts and step up the security game around facial recognition technology solutions. We can be sure, however, that as facial recognition technology becomes an inseparable part of our daily routine, more advanced security solutions will enter the market to address this critical need.

While facial recognition has many benefits, it also has the destructive power to change the game for the worse. It is up to our society to create clear boundaries that will enhance the positive and control the negative.

Thinking bigger: Why the facial recognition debate matters

The dangers of facial recognition discussed here go well beyond this one technology. The same is true for many other pieces of personal information that fly below the radar and have been manipulated, traded and used without consent for years. If facial recognition is the technology that wakes up the world to these considerations, we should take that as a positive for the wider debate on online privacy. The recent changes in public attitude, discussion and legislation have been a long time coming, and the emergence of any technology that can shift things for the better is a sign of progress.

Facial recognition technology can be a powerful tool when used responsibly. Yet with so much recent backlash, its full potential may never be realized. As a society, we can't allow fear to hinder innovation. Instead, we need to embrace this opportunity and work together to address the very real concerns with stronger security measures and regulations that still allow the technology to be put to use in the best way possible. At the same time, we can harness the discussion around the topic to spark a healthy debate about other violations of our privacy that frequently take place elsewhere. After all, we often tend to ignore real dangers that have been staring us in the face all along.


Gal Ringel is the co-founder and CEO of Mine, a company focused on empowering Internet users to know who holds their data and decide how it's used. Prior to founding Mine, Ringel was a venture capital investor for Verizon Ventures and Nielsen, where he deployed over $50M in 20+ startups. He is also a veteran of the Israel Defense Forces' elite cyber Unit 8200.


