AI Is Deepening the Digital Divide
A growing digital divide is being exacerbated by AI, excluding billions from the benefits of technological progress. Here are six ways to address the crisis.
UN Secretary-General António Guterres says this about AI: “[It] must benefit everyone, including the one third of humanity who are still offline. Human rights, transparency and accountability must light the way.”
As this generational event unfolds, a digital divide between the Global North and South (as imperfect as these designations may be) is growing, and dire consequences are revealing themselves. A pervasive issue has emerged: the widespread absence of participation, voice, skills, literacy, and access is barring most of the world’s population from reaping the rewards of technological progress.
This historic pattern holds true for most technological advancements; however, AI introduces a unique dimension to this problem.
The billions of people in developing regions who have historically not had a voice in discussions around technology face even further alienation as AI innovation speeds up. People in these regions often serve as outsourced, cheap labor tasked with roles such as labeling data and training models for the benefit of developed economies and their consumers. What’s more, the environmental impacts from mineral extraction, energy use, and water consumption from these models tend to hit these regions the hardest.
Those with the power and resources to experiment with and apply AI -- be they corporations, governments, academic institutions, or nonprofits -- must invest in closing this gap or risk an insurmountable chasm.
Here’s how we all can step up:
Challenge the status quo. A necessary first step is to agree on fundamental standards for AI, such as safety, transparency, accountability, and oversight. Organizations such as UNESCO and the International Organization for Standardization have made progress here. Other organizations working with AI should develop and adopt their own responsible AI frameworks, with added consideration for their industry, regulatory environment, risk profile, and corporate values.
Empower the underrepresented. Regions and communities within the Global South have historically been left out of technology discussions, so early AI and digital literacy education is important. However, education from a western perspective won’t always be appropriate for parts of the world that hold different sets of values and priorities. And any program must be put in the context of the impacted population rather than wrapped in the stereotypical North American or Western European perspective. We’ve seen some locally targeted nonprofits focusing on digital literacy for minorities, and we hope to see more nonprofits with international reach and resources support digital literacy initiatives globally.
Bring all stakeholders to the AI discussion. The technology industry needs to understand what technologies people in developing regions find most desirable, rather than imposing on them what we think they need. We’ll miss the mark every time by not giving affected stakeholders power and representation. That means respecting their authority to make their own decisions, pursue the outcomes they choose, and build solutions with and for their own communities.
Train the data for the region. We should continually ask how the underlying data in AI systems was collected and extracted and how it’s being used. How are the models being trained, and for what purposes? If AI systems are biased in favor of a particular population -- for example, the Global North -- rather than the people using or affected by those systems, then it’s likely that the systems will advance values and priorities misaligned with local needs.
Have candid conversations about impact. AI strategy should start with purpose and intention, with questions about what impact the organization wants to have on the world, why that matters, and how technology can advance the cause. The foundation of digital ethics is to consider the impact of technology through its entire life cycle, from conception to implementation. The goal should be to translate the organization’s values and intended impacts into the technology, which means working to mitigate potential harm to people, society, and the environment and to steer the impacts of technology towards ethically positive outcomes.
Expand our definition of AI professionals. Finally, we’ll have to think differently about what it means to be an AI professional. There’s currently a heavy emphasis on technical skills, which are certainly important. However, focusing on data science and software engineering leaves out an important segment of professionals who have expertise in areas like social services, social justice, and education. Disciplines such as philosophy, ethics, sociology, and law should also play a critical role in understanding and steering how AI works in the context of social institutions as well as in our personal and professional lives.
People around the world face vastly different challenges with respect to laws, political power, environmental challenges, physical health, and financial opportunity. The clear and accelerating opportunities promised by AI could serve to benefit everyone, regardless of where they exist in the world. But on our current trajectory, that is unlikely to be the case. The current concentration of power and the desire to continue amassing wealth and power regardless of impact is already steering AI in a troubling direction. The digital divide between the Global North and Global South is accelerating. Those with power and decision-making authority must address this divide with resolve, compassion, and innovation.