As some thought leaders advocate for the recognition of access to AI as a human right, the question comes up: How could we achieve that?

Guest Commentary

March 8, 2019

6 Min Read

Life, liberty, the pursuit of happiness and access to artificial intelligence. Might this be the updated list of basic unalienable rights as we march into the AI-powered world of tomorrow?

Theoretical neuroscientist, technologist, entrepreneur and AI specialist Vivienne Ming said last December we need to be thinking of AI as a human right. More recently, Salesforce CEO Marc Benioff made a similar statement at the World Economic Forum in Davos, saying that AI is becoming a new human right and that everyone will need access to it. "Those who have the artificial intelligence will be smarter, will be healthier, will be richer, and of course, you've seen their warfare will be significantly more advanced," he said.

This raises the question of what exactly constitutes a human right. According to the United Nations, human rights are rights inherent to all human beings, regardless of nationality, place of residence, sex, national or ethnic origin, color, religion, language, or any other status. Examples of rights and freedoms often thought of as human rights include civil and political rights, such as the right to life and liberty, freedom of expression, and equality before the law.

What’s at stake?

To state that anything is a human right is one thing; making it so is quite another. We are a long way from ensuring a right to AI. In fact, AI can threaten existing rights. As a recent Ericsson company blog notes, the misuse of AI poses numerous challenges, including intrusion on the right to privacy, curtailment of freedom of expression and thought, unfair treatment and unequal opportunity, discrimination arising from bias, uneven distribution of benefits, and arbitrary interference in individuals' lives.

As Microsoft Research’s Kate Crawford warned in a Guardian article, AI may be used, blindly or purposefully, to drive inequality, divide people and communities, suppress dissent, and deny human rights, or more simply to reinforce the profits of a relative few corporations.

Even without such dramatic negative impacts, those without AI access will be "weaker and poorer, less educated, and sicker,” Benioff predicted. Ming said that almost every technology increases inequality because those who can afford it tend to be those who need it the least: “99.999% of the world’s population has no say in how any of this is used.”

As New York University professor Amy Webb noted in a recent Business Insider article, there are nine companies that control the future of AI: Google, Microsoft, Amazon, Facebook, IBM, and Apple as well as Chinese Internet leaders Baidu, Alibaba, and Tencent. This control is possible because, today and for the foreseeable future, AI – deep learning in particular – requires both vast amounts of data and computing power to improve its accuracy and overall effectiveness. These companies are uniquely positioned to dominate on both dimensions, at least for commercial uses.

All the American companies are publicly traded, meaning they must answer the concerns of Wall Street, which is likely more focused on profits than human rights. And as Webb notes, the Chinese Internet leaders need to answer to the Chinese government. As depicted by Amnesty International and others, China does not have an exemplary record for promoting human rights.

The issues are not purely commercial, as AI is increasingly being used to advance warfare – what the U.S. defense department is calling “algorithmic warfare.” Warfare here doesn’t just mean shooting bullets; the term applies equally to facial recognition used to suppress dissent or simple freedom of expression, and to bots used to spread disinformation. Already, China is using mass surveillance – including facial recognition software – to crack down on dissent, particularly in its ethnic Uighur Muslim region.

Is treating AI as a human right only a sci-fi fantasy?

Yoshua Bengio, a Canadian computer scientist considered one of the founders of deep learning technology, recently expressed concerns that the technology he helped create was being used to control people’s behavior and influence their minds. From trends evident now, it appears we are very far from treating AI as a human right.

If AI were somehow realized as a human right, it could be more readily harnessed to solve difficult endemic problems, generate economic growth with widely shared prosperity, and lead to the broader fulfillment of a multitude of human hopes. While this is challenging at present, there are some positive indicators. For example, Microsoft recently urged lawmakers to regulate the use of AI-powered facial recognition software to prevent abuse. In a thoughtful blog post, the company makes the point that “advanced technology no longer stands apart from society; it is becoming deeply infused in our personal and professional lives,” and called for government regulation aided by bipartisan, expert commissions to prevent abuse. Similarly, Google has released a set of guiding ethical principles for AI applications.

Significantly, employees are calling on their companies to act ethically in the application of AI. A PC Magazine commentary speaks to the AI industry’s year of ethical reckoning and highlights several instances of employees holding their companies to account. In one case, in the face of substantial employee opposition, Google announced it would not renew a Department of Defense contract when it expires in 2019. Google CEO Sundar Pichai then explicitly stated that his company will not work on technologies that violate human rights norms.

Public pressure and the new discussion about AI as a human right might already be having an impact. For example, Cisco just published a series of Human Rights Position Statements for several technologies. About AI, the company says “rapid advancements in AI technology require close attention to issues of safety, trustworthiness, transparency, fairness, ethics, and equity. These issues can manifest as risks to fundamental human rights, particularly when the consequences of such new technology can’t all be anticipated.” The company has committed to positive steps, including support for re-skilling workers in its supply chain and improved dialogue among stakeholders “to improve our collective understanding of the societal and human rights impact of AI and machine learning.”

The Business Insider article states that it can be hard to distinguish sci-fi fantasies from projections of where artificial intelligence is headed. Achieving the positive scenarios for AI, let alone anointing it as a human right, will require our noblest angels and the collaboration of broad majorities among governments, businesses, and the general population. It may be that employee pressure and citizen action will offer the best avenue for ensuring the ethical application of AI and the best hope for treating the technology as a human right.


Gary Grossman is Senior Vice President and Technology Practice Lead, Edelman AI Center of Expertise.

About the Author(s)

Guest Commentary


The InformationWeek community brings together IT practitioners and industry experts with IT advice, education, and opinions. We strive to highlight technology executives and subject matter experts and use their knowledge and experiences to help our audience of IT professionals in a meaningful way. We publish Guest Commentaries from IT practitioners, industry analysts, technology evangelists, and researchers in the field. We are focusing on four main topics: cloud computing; DevOps; data and analytics; and IT leadership and career development. We aim to offer objective, practical advice to our audience on those topics from people who have deep experience in these topics and know the ropes. Guest Commentaries must be vendor neutral. We don't publish articles that promote the writer's company or product.
