No Fear, AI is Here: How to Harness AI for Social Good
As tech leaders, we have a choice: Embrace AI to solve critical problems or remain fearful of its capacity to divide us. It’s our duty to champion the former.
Looking back on the past decade, I’m blown away by the number of positive advancements the world has achieved through technology. We’ve developed COVID-19 vaccines in record time, probed deeper into the solar system, advanced fuel-efficient technologies, and built tools to aid in earlier disease detection.
The one thing these breakthroughs all have in common? AI.
Long before ChatGPT debuted in 2022 and the meteoric rise of generative AI, scientists and technologists were quietly leveraging AI to move faster and make bigger leaps in scientific progress. But over the past year, conversations surrounding AI have become increasingly controversial. According to the 2023 MITRE Report, only 39% of US adults believe current AI technology is safe and secure. This reservation isn’t surprising: plenty of airtime has been devoted to the myriad ways AI can go wrong -- and has already gone wrong. These are significant issues, and we should not ignore them.
However, my biggest concern is that these high-profile issues mean we’ll overlook or even completely abandon AI’s power and potential to do good. The world faces an alarming number of critical unsolved problems, from climate change to racial inequity to entrenched poverty. Solving these issues before it’s too late requires collective action. And technology -- especially AI -- can and should play a central role in coordinating our efforts and driving progress.
So, as tech leaders, what’s our next move?
We must proactively think about how our organizations can responsibly leverage AI for good. Our role is to offer our teams the support and guidance required to harness AI’s full power in ways big and small to inspire positive change, ensuring fear doesn’t override optimism. While AI can undeniably outperform humans at certain tasks, it cannot replace the power of human creativity, perspectives, and deep insight.
The Next Generation Needs Your Support
It won’t be CTOs like me who come up with the next great idea for solving the world’s most critical problems.
It’s going to be the people on my team -- and your team -- who are in the trenches writing the code and building new solutions. And they’re going to use AI to get there. It can speed up the trial-and-error phase of every project -- generating code quicker, debugging faster, automating documentation, suggesting alternatives.
These advancements mean we can arrive at meaningful solutions faster, but only if we aren’t scared to dive in. The best step we can take as leaders is to encourage the healthy exploration, experimentation, and critical thinking necessary to solve problems using today’s best technologies.
I want my team to learn by doing and look for strategic ways to embrace this technology. To iterate until they arrive at their desired outcomes. That type of working environment requires building teams willing to take risks, with the grit and resilience to take failures in stride and adapt to changes in technology and ways of working.
However, our job as tech leaders is not only to motivate capable teams, but also to provide them with frameworks for finding new ways of responsibly leveraging technology for good. To do so, we’ll need to stay grounded in the fundamentals of science -- facts, data, and measurement.
I constantly push my product and technology teams to measure the value or incremental usage of their developments. For example, if my team is working on a platform to support volunteering, that means asking questions like:
Are we attracting more volunteers?
Is there alignment between the volunteers’ skills/interests and the needs of the organizations they serve?
Are the volunteers using our platform more satisfied with the experience?
Setting goals and deciding how to measure outcomes offers teams the guardrails and grounding they need to produce tangible results. Every time we make a change or add something to our platform, we should be able to track the impact on user experience. However, aligning on measurement tactics is difficult, and many companies simply skip this step and dive into experimentation. But without clear goals and key performance indicators, it’s easy to veer off course toward flawed outcomes.
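To make this concrete, here is a minimal sketch of the kind of before-and-after KPI comparison I’m describing. Every metric name and number below is invented for illustration; a real team would pull these figures from its own product analytics.

```python
# Hypothetical sketch: comparing volunteering-platform KPIs before and
# after a feature change. All data is invented for illustration.

def percent_change(before: float, after: float) -> float:
    """Relative change of a metric, expressed as a percentage."""
    return (after - before) / before * 100

# Invented baseline vs. post-launch values for one release.
kpis = {
    "active_volunteers": (1200, 1380),   # monthly active volunteers
    "skill_match_rate": (0.62, 0.71),    # share of matches aligned to skills
    "satisfaction_score": (4.1, 4.3),    # avg. post-event survey (1-5 scale)
}

for name, (before, after) in kpis.items():
    delta = percent_change(before, after)
    status = "improved" if delta > 0 else "regressed"
    print(f"{name}: {before} -> {after} ({delta:+.1f}%, {status})")
```

The point is not the arithmetic, which is trivial, but the discipline: every change ships with a named metric and a baseline, so "did this help?" has a measurable answer.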
This example of building a volunteering platform may sound small in the grand scheme of things, but it’s the same process we’ll need to follow as we ramp up our use of AI to solve the world’s biggest challenges. The key is to stay grounded in responsible frameworks that keep us honest about the real impact of our work.