How CEOs and IT Leaders Can Take the Wheel on Responsible AI Adoption

Leaders expect AI to reshape business, but readiness varies. Here’s why it's crucial for CEOs, CIOs, and CTOs to develop responsible AI safety and innovation strategies now.

Rebecca Finlay, CEO, Partnership on AI

April 15, 2024

A recent Deloitte report revealed that 79% of corporate leaders anticipate substantial transformation due to generative AI over the next three years. At the same time, IT and corporate leaders regularly report that their organizations are not yet ready to fully integrate AI into their workflows, let alone deploy generative AI to advance strategic transformation. Whatever the pacing challenges, there is clearly a mix of excitement and uncertainty among corporate leaders who need AI to drive innovation and productivity responsibly, both today and tomorrow. 

Perhaps that's because we, as consumers, have already experienced the power of AI. From quick queries to voice-activated assistants like Siri and Alexa, to finding a meetup with friends at a hidden new bistro using Google Maps, AI delivers results and has been a staple of our daily lives for quite some time.  

AI for business is quite another thing. Concerns over the pace of adoption and governance have yet to be quelled (is it too fast or too slow?), and the global AI research community has yet to agree on a common set of standards for AI safety. Progress is being made, especially on policy, but there is no black-and-white answer. 

Rather than sitting idly by, CEOs, CTOs, CIOs, and other leaders should begin establishing their own comprehensive plans for AI safety now. In partnership with IT, legal, and other teams, forward-leaning stakeholders are already taking matters into their own hands, defining ways to manage AI risk while meeting the needs of their workforce. 

Each organization will be different, and there are no right or wrong answers, but there are a few basic steps leaders can take today to put an AI safety roadmap in place. 

Know the Partners in Your Pipeline  

A good start is mapping out your AI ecosystem, outlining all the partners involved: hardware suppliers; cloud, data, and model providers; application developers; and, ultimately, consumers. This map is key to understanding where senior-level intervention and oversight may be required. 

Build Your Responsible AI Leadership

Appoint a senior-level internal responsible AI champion, akin to a chief privacy officer or chief data analytics officer, to oversee your AI efforts. Create an internal, company-wide group that works with your AI champion to manage key organization-wide deliverables and ensure coordination across departmental teams. Consult workers who may be affected by AI systems to ensure their needs are met. Then test, experiment, and iterate to get started.

Involve the Board of Directors  

Leaders have the authority to enforce responsible innovation practices, and it's imperative that they maintain frequent dialogue with the board about technology strategies like AI. An AI safety roadmap that is clearly communicated, consistently updated, agreed upon by all stakeholders, and backed by the board is paramount to instilling confidence in any organization's approach to AI safety and accountability. 

Safeguard Information Ecosystems  

In the middle of a global election year, the potential for disruptive, high-quality deepfakes has never been greater. Leaders have a substantial role to play in ensuring the safe and responsible use of information systems, from deploying clear technical standards and proactively disclosing the use of AI to responding quickly and clearly to malicious acts. 

Document, Document, Document  

Effective documentation is non-negotiable. By taking proactive steps throughout the entire lifecycle of AI systems, from ensuring data security and conducting research before deployment to continuously monitoring post-deployment performance and promptly reporting any incidents that arise, CEOs and IT leaders can enhance organizational transparency and ensure adherence to best practices in AI governance and risk management. 

Get Help: Don’t Start from Scratch  

With mounting evidence of the societal risks and harms associated with AI, CEOs and IT leaders have their work cut out for them just staying up to date. Non-profit organizations like Partnership on AI have built communities of knowledge and action that can help leaders assess their organization's risk tolerance and meet today's evolving challenges.  

A good resource on AI safety, PAI’s Guidance for Safe Foundation Model Deployment, offers a thorough framework for scaling oversight and adopting a holistic approach to safety, encompassing issues like bias, excessive reliance on AI systems, privacy concerns, and fair treatment of workers. 

On the fast-paced highway of AI, business leaders must proceed with caution to navigate the road safely. That starts with a defensive driving strategy: responsibly accelerating AI endeavors while exploring new markets. Just as a driver must anticipate curves in the road, CEOs and IT leaders must expect challenges, manage mistakes proactively, and respond adeptly to evolving conditions. 

If 2023 was the wake-up call regarding AI's societal risks, then let 2024 be the call to action, compelling businesses to buckle up, set high expectations, and accelerate efforts toward responsible and innovative AI deployment, starting with a clear and regularly updated AI safety roadmap. 

About the Author(s)

Rebecca Finlay

CEO, Partnership on AI

As CEO of Partnership on AI, Rebecca Finlay brings together an international community of over 100 partners so that developments in AI benefit everyone, everywhere. With an influential career at the intersection of technology and society, Rebecca has held leadership roles in civil society organizations, research companies, and industry. Prior to PAI, Rebecca was Vice President of Engagement and Public Policy at the Canadian Institute for Advanced Research (CIFAR), where she founded one of the first international, multistakeholder initiatives on the impact of AI in society. She holds degrees from McGill and the University of Cambridge.
