AI Is Deepening the Digital Divide

AI is exacerbating a growing digital divide, excluding billions of people from the benefits of technological progress. Here are six ways to address the crisis.

UN Secretary-General António Guterres says this about AI: “[It] must benefit everyone, including the one third of humanity who are still offline. Human rights, transparency and accountability must light the way.”

A generational event is unfolding. The digital divide between the Global North and South (as imperfect as these designations may be) is widening, and the consequences are dire. A pervasive issue has emerged: the widespread absence of participation, voice, skills, literacy, and access bars most of the world’s population from reaping the rewards of technological progress.

This pattern has held for most technological advances; AI, however, adds a unique dimension to the problem.

The billions of people in developing regions who have historically not had a voice in discussions around technology face even further alienation as AI innovation speeds up. People in these regions often serve as outsourced, cheap labor tasked with roles such as labeling data and training models for the benefit of developed economies and their consumers. What’s more, the environmental toll of these models -- mineral extraction, energy use, and water consumption -- tends to hit these regions the hardest.

Those with the power and resources to experiment with and apply AI -- be they corporations, governments, academic institutions, or nonprofits -- must invest in closing this gap or risk an insurmountable chasm.

Here’s how we all can step up:

Challenge the status quo. A necessary first step is to agree on fundamental standards for AI, such as safety, transparency, accountability, and oversight. Organizations such as UNESCO and the International Organization for Standardization have made progress here. Other organizations working with AI should develop and adopt their own responsible AI frameworks, with added consideration for their industry, regulatory environment, risk profile, and corporate values.

Empower the underrepresented. Regions and communities within the Global South have historically been left out of technology discussions, so early AI and digital literacy education is important. However, education delivered from a Western perspective won’t always be appropriate for parts of the world that hold different values and priorities; any program must be grounded in the context of the affected population rather than wrapped in a stereotypical North American or Western European framing. We’ve seen locally targeted nonprofits focusing on digital literacy for minorities, and we hope to see more nonprofits with international reach and resources support digital literacy initiatives globally.

Bring all stakeholders to the AI discussion. The technology industry needs to understand which technologies people in developing regions actually want, rather than imposing on them what we think they need. We’ll miss the mark every time if we fail to give affected stakeholders power and representation. That means respecting their authority to make decisions, pursue outcomes they choose, and build solutions with and for their own communities.

Train the models for the region. We should continually ask how the underlying data in AI systems was collected and extracted and how it’s being used. How are the models being trained, and for what purposes? If AI systems are biased in favor of a particular population -- for example, the Global North -- rather than the people using or affected by those systems, they will likely advance values and priorities that are misaligned with local needs.
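
To make the kind of data audit this step calls for concrete, here is a minimal sketch in Python. Everything in it is illustrative: the record fields ("region", "language") and the population shares are hypothetical stand-ins, not a prescription from the authors.

```python
# Minimal sketch of a training-data representation audit.
# Assumes a hypothetical dataset where each record carries
# "region" and "language" metadata; all names are illustrative.
from collections import Counter

def representation_report(records, field, population_shares):
    """Compare a dataset's composition on `field` against the
    real-world population shares the system is meant to serve."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    report = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        # A ratio above 1 means the group is over-represented in the data.
        report[group] = {
            "observed_share": round(observed, 3),
            "expected_share": expected,
            "ratio": round(observed / expected, 2) if expected else None,
        }
    return report

# Illustrative data: three records, two regions.
records = [
    {"region": "Global North", "language": "en"},
    {"region": "Global North", "language": "en"},
    {"region": "Global South", "language": "sw"},
]
# Hypothetical target shares for the populations the system serves.
shares = {"Global North": 0.17, "Global South": 0.83}

for group, stats in representation_report(records, "region", shares).items():
    print(group, stats)
```

A ratio well above 1 for one group flags exactly the skew this step warns about: a system trained mostly on Global North data that is then deployed on everyone else.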

Have candid conversations about impact. AI strategy should start with purpose and intention, with questions about what impact the organization wants to have on the world, why that matters, and how technology can advance the cause. The foundation of digital ethics is to consider the impact of technology through its entire life cycle, from conception to implementation. The goal should be to translate the organization’s values and intended impacts into the technology, which means working to mitigate potential harm to people, society, and the environment and to steer the impacts of technology towards ethically positive outcomes.

Expand our definition of AI professionals. Finally, we’ll have to think differently about what it means to be an AI professional. There’s currently a heavy emphasis on technical skills, which are certainly important. However, focusing on data science and software engineering leaves out an important segment of professionals who have expertise in areas like social services, social justice, and education. Disciplines such as philosophy, ethics, sociology, and law should also play a critical role in understanding and steering how AI works in the context of social institutions as well as in our personal and professional lives.

People around the world face vastly different challenges with respect to laws, political power, environmental conditions, physical health, and financial opportunity. The clear and accelerating opportunities promised by AI could benefit everyone, regardless of where they live. But on our current trajectory, that is unlikely to be the case. The concentration of power and the desire to keep amassing wealth and power regardless of impact are already steering AI in a troubling direction, and the digital divide between the Global North and Global South is widening. Those with power and decision-making authority must address this divide with resolve, compassion, and innovation.

About the Authors

Chris McClean

Global Lead for Digital Ethics, Avanade

As global lead for digital ethics at Avanade, Chris McClean drives the company’s internal responsible tech and responsible AI efforts, and he leads the company’s digital ethics advisory practice. Prior to Avanade, Chris was a research director and industry analyst at Forrester Research, leading the company’s analysis and advisory for risk management, compliance, corporate values, and ethics. Chris earned a master’s degree in Business Ethics and Compliance in 2010, and he’s currently a PhD candidate working on applied ethics with a focus on risk and trust relationships.

Almin Surani

Global Nonprofit Digital Transformation Lead, Avanade

Almin Surani is the Global Nonprofit Digital Transformation Lead for Avanade. In this role, he works with nonprofits around the world to ensure effective and efficient digital transformations that enable them to have a greater impact on their constituents. Almin has more than 15 years of experience in the nonprofit sector, including 10 years as CIO of the Canadian Red Cross. He also has more than 20 years of experience in technology, ranging from enterprise software to consumer software to consulting in both private and nonprofit organizations.
