In the first real test of AI in a crisis, the results are mixed. While many individual applications are helping, the tech remains immature and unable to address complex public policy issues.

Guest Commentary

May 12, 2020


One common belief about AI is that it will help solve the world's most complex problems. A great example is nuclear fusion, a longtime holy grail because it would offer virtually unlimited power with little or no pollution. Despite recurring episodes of hope and hype, the goal has remained elusive. Now it appears that AI may be the key, with a working demonstration expected within five years. If true, commercial viability could follow within a couple of decades.

Given the promise of AI, it is not surprising that in a pandemic all eyes have turned to this technology for a vaccine or effective treatment. However, most AI applications learn from large amounts of data, and with a “novel” virus, that information is in short supply. What data is available may not provide an accurate picture, and that picture may change over time as more information is collected and correlated. For instance, months into the pandemic, the Centers for Disease Control and Prevention added six new symptoms for the disease.

The downstream results are inaccurate models of virus spread and mortality, along with questions about AI's diagnostic contributions, leaving humanity to fly mostly blind. Basically, garbage in, garbage out. While AI has assisted in fighting the pandemic with disinfecting robots and the delivery of supplies within hospitals, AI leader Kai-Fu Lee gives the technology a B- so far for its contribution to fighting the virus.

What does this say about the future, where there is so much expectation that AI will be a panacea? In an Edelman AI survey last year, an oft-appearing phrase in the verbatim responses was roughly: “Given the state of the world, looking forward to our AI overlords.” Much as benign superintelligent beings show up in Childhood’s End by Arthur C. Clarke, could AI be put in charge of the pandemic response?

The ethics of societal AI

In a pandemic, decisions are often a matter of life and death.

Autonomous vehicles face just such life-and-death decisions, and they highlight the challenge. With vehicle-to-vehicle and vehicle-to-infrastructure communications, an automated system would have highly accurate data such as the speed and position of nearby vehicles and current road conditions. This would enable the system to make a rapid decision in response to an unexpected event, such as a child running in front of the vehicle. The idea is that, with complete data, errors of human judgment would be eliminated, greatly reducing fatalities.

Yet, the decision making for these vehicles is fraught with moral implications. A recent article poses one such issue -- being forced to choose between killing a car's passengers by hitting a tree or veering into a nearby group of pedestrians. How does an algorithm make that choice? This requires an ethical code of conduct, a behavioral guide for autonomous vehicles that codifies how choices are to be made. Germany is the first country to have established such a model, delineating the decision tradeoffs. For example, survival priority would be determined based on the vulnerability of road users -- pedestrians first, then cyclists, followed by cars carrying passengers and then commercial vehicles.

If an autonomous vehicle from Germany crossed into France, would different ethical rules then apply beginning at the border? In the US, driving laws vary somewhat state-to-state. For example, several states including Oregon allow a left turn during a red light under certain conditions. But neighboring California does not. An autonomous vehicle would seemingly need to use differing rules depending upon location.
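
To make the point concrete, one could imagine a vehicle's planner consulting a jurisdiction-keyed rule table. The sketch below is purely illustrative: the jurisdiction codes, field names, and rule values are simplified assumptions based on the descriptions above, not the actual German ethics model or any state's vehicle code.

```python
# Purely illustrative sketch of a jurisdiction-keyed rule table an autonomous
# vehicle's planner might consult. Codes, fields and values are assumptions
# for demonstration only.
from dataclasses import dataclass
from typing import Tuple


@dataclass(frozen=True)
class DrivingRules:
    # Hypothetical encoding of a vulnerability-based survival priority,
    # most-protected road user first.
    survival_priority: Tuple[str, ...]
    left_turn_on_red_allowed: bool


VULNERABILITY_ORDER = ("pedestrian", "cyclist", "passenger_car", "commercial_vehicle")

RULES_BY_JURISDICTION = {
    "DE": DrivingRules(VULNERABILITY_ORDER, left_turn_on_red_allowed=False),
    "US-OR": DrivingRules(VULNERABILITY_ORDER, left_turn_on_red_allowed=True),  # only in limited situations
    "US-CA": DrivingRules(VULNERABILITY_ORDER, left_turn_on_red_allowed=False),
}


def rules_for(jurisdiction: str) -> DrivingRules:
    """Return the rule set the vehicle should apply at its current position."""
    return RULES_BY_JURISDICTION[jurisdiction]


# Crossing a border amounts to swapping rule sets mid-trip.
print(rules_for("DE").left_turn_on_red_allowed)     # False
print(rules_for("US-OR").left_turn_on_red_allowed)  # True
```

Even in this toy form, crossing a border means swapping rule sets mid-trip, which is exactly the discontinuity the question raises.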

AI leading the task force

Managing decisions in a pandemic would be harder still. For example, COVID-19 disproportionately impacts older people and those with “underlying medical conditions.” Should they be prioritized more or less highly for protection and care than others, and at what cost to the individuals involved and to society?

A recent Harvard Business Review article outlines how an AI system -- by using machine-learning models based on multiple sources of data including a person’s medical history -- could evaluate the probability that someone would fall into a high-risk category for the virus. The output from the system would determine who would need to be sequestered and who would be able to live normally.
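
A minimal sketch of what such a risk-scoring model could look like follows. It is an assumption-laden illustration rather than the system the article describes: the features, the synthetic training data, and the 0.5 sequester threshold are all fabricated for demonstration.

```python
# Illustrative sketch only: a toy risk model of the kind described above.
# Features, data and threshold are fabricated; a real system would need
# consented, representative medical data and clinical validation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per person: age, number of underlying conditions,
# and a prior-hospitalization flag, drawn at random for demonstration.
n = 1_000
X = np.column_stack([
    rng.integers(20, 90, n),   # age
    rng.integers(0, 4, n),     # underlying conditions
    rng.integers(0, 2, n),     # prior hospitalization
])

# Synthetic labels: older people with more conditions are more often "high risk".
logits = 0.08 * (X[:, 0] - 55) + 0.9 * X[:, 1] + 0.7 * X[:, 2] - 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new individual and apply an (arbitrary) policy threshold.
person = np.array([[72, 2, 1]])
p_high_risk = model.predict_proba(person)[0, 1]
print(f"estimated high-risk probability: {p_high_risk:.2f}")
print("sequester" if p_high_risk > 0.5 else "no restriction")
```

Even this toy version makes the policy problem visible: the threshold separating “sequester” from “live normally” is a societal choice the model cannot supply on its own.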

While there is logic to the proposition, it would also require suspending HIPAA rights in the US and other forms of privacy protection elsewhere. Achieving such a system would require either a massive amount of individual consent or government intrusion on privacy. Short of that, the authors argue for a declaration by the WHO or UN that a pandemic serves as a trigger to suspend normal privacy laws and enable the sharing of anonymized data. Human rights and civil liberties could be damaged in the process, and once that is done there may be no turning back. An even bigger question: Is it even possible for an AI system to effectively model the tradeoffs among public health, medical ethics, economics, stakeholder groups and politics?

Beyond the thorny policy questions, there is a more basic issue. Much of the AI technology is relatively immature and unproven, more nuanced than generally thought, and will probably be more helpful in ensuring we can beat the next epidemic than in fighting this one. It also remains an open question whether narrow AI tools for spotting epidemics, modeling their spread and impact, developing treatments and vaccines, performing individual diagnoses, and undertaking surveillance and contact tracing can ever be knitted into an effective system for combatting a pandemic.

Where next

Many of these systems and implementations are country- or vendor-specific, meaning they are not always designed for universal integration and could lead to isolated islands of AI automation. Perhaps recognizing the limitations of current AI technologies and collaboration methodologies, a multi-disciplinary group is forming “to support a data-driven approach to learning from the different approaches countries are taking to managing the pandemic.”

The pandemic has effectively shown where current AI can help, such as identifying vaccine and therapeutic drug candidates, but it has also exposed the technology's limitations. One thing we have learned is that any problem that is “novel” suffers from a lack of data, so unless there is a clear, pre-existing parallel, the contribution of data-hungry AI technologies will be less than hoped. Our AI overlords will have to wait for further technology advances, leaving us to carry on as best we can with our human limitations.


Gary Grossman is Senior Vice President and Technology Practice Lead, Edelman AI Center of Excellence.

