The sheer scope of AI’s impact on society does require some regulation. It will be essential to ensure that AI regulation is fair and ethical while not impeding innovation.

Guest Commentary

April 27, 2020


There are heated debates across business and government sectors about the profound impact of artificial intelligence on the world. Many AI use cases have significant societal impacts that require regulatory shifts to properly address their fair and ethical use. These regulations need to be infused into business environments, particularly when the decisions an AI system makes have a direct impact on human life or personal privacy. Examples range from the appropriate use of facial recognition technology, to how autonomous vehicle algorithms make probabilistic choices to minimize fatalities in a car accident, all the way to AI being used for targeting and lethality in weapons systems.

Currently, federal and international governmental bodies, as well as industry and consumer organizations, are all considering various levels of regulatory oversight for AI use cases. However, it will be very difficult to institute national standards, and even more difficult to institute global standards, on the fair and ethical use of AI-based systems. Current policy perspectives range from the White House’s recent guidance, which favors case-by-case AI regulation, to the more comprehensive approach advocated by the Brookings Institution in its just-released report “AI needs more regulation, not less.”

My perspective is that this emerging policy discussion must also consider the enormous societal benefits of emerging AI use cases. We need to ensure AI innovation can thrive as organizations ethically harness the use cases that make society more productive and safer. For instance, in the industrial sector, AI-based deep learning is being applied to computer vision to create “digital twins” of physical assets. These deep-learning models are designed to generate better insights into the physical world, such as detecting hail damage on roofs after a storm, spotting corrosion during energy pipeline inspections, and monitoring safety conditions on industrial work sites. These are examples of AI innovations that shouldn’t fall under the same regulatory framework needed for algorithms used in facial recognition or autonomous vehicle operations.

This is why I’m advocating for AI regulation to be pursued and deployed on a specific, case-by-case basis. The AI systems used for radiological analysis and health outcomes need regulation -- or at the very least medical practitioner oversight. The AI algorithms that determine how an autonomous vehicle minimizes harm in an impending car accident should be similarly regulated. In that example, deep ethical questions remain about how the AI weighs possible outcomes for the passengers in the vehicle against those for pedestrians on the street. These are the big questions that the respective regulatory agencies need to engage with early and often as profound AI use cases continue to come to market.

Brave new world

As we accelerate into this brave new world, it is essential that the leaders of the various regulatory agencies have a strong conceptual understanding of both the AI methods and the underlying ethical and societal implications of these emerging use cases. Deep industry expertise will be required among regulators as they collaborate with companies and citizens to shape this inevitable future.

The most fruitful approach to AI regulation requires industry and federal government working groups to collaborate on use-case-specific regulation. Broader international mandates will likely be less effective at this early juncture. Every country, and nearly every industry, is thinking about its AI use cases strategically, and increasingly from a geopolitical perspective. We are entering a period in which the commanding heights of geopolitics will be defined not by nuclear proliferation, but by AI proliferation.

In these uncertain times, businesses are being hit by unprecedented, exogenous forces, including the current COVID-19 global pandemic. It is precisely in these moments that society needs technological innovation to move forward. The sheer scope of AI’s impact on society does require some regulation. It will be essential to ensure that AI regulation is fair and ethical while not impeding innovation or our geopolitical ambitions. The various regulatory agencies will need to bring expertise, flexibility, and a learning mindset to approach AI as the most significant opportunity of the 21st century.


George Mathew is the chairman and CEO of Kespry, an aerial intelligence company based in Menlo Park, CA. His focus is on leading the company’s mission to transform how people capture, analyze, and share insights about their asset-intensive businesses.


