There are more discussions about AI ethics and responsible AI these days, but companies need to be clear about potential AI liability issues.

Lisa Morgan, Freelance Writer

February 22, 2021

7 Min Read

As artificial intelligence moves deeper into enterprises, companies have been responding with AI ethics principles, values statements, and responsible AI initiatives. However, translating lofty ideals into something practical is difficult, mainly because it's new territory that needs to be built into DataOps, MLOps, AIOps and DevOps pipelines.

There's a lot of talk about the need for transparent or explainable AI. However, less discussed is accountability, which is another ethical consideration. When something goes wrong with AI, who's to blame? Its creators, users, or those who authorized its use?

"I think people who deploy AI are going to use their imaginations in terms of what could go wrong with this and have we done enough to prevent this," said Sean Griffin, a member of the Commercial Litigation Team and the Privacy and Data Security Group at law firm Dykema. "Murphy's Law is undefeated. At the very least you want to have a plan for what happened."


Actual liability would depend on proof and on the facts of the case. For example, did the user use the product for its intended purpose(s), or did the user modify the product?

Might digital marketing provide a clue?

In some ways, AI liability resembles the multichannel attribution concepts used in digital marketing. Multichannel attribution arose out of an oversimplification known as "last-click attribution." For example, if someone searched for a product online, navigated a few sites, and later responded to a pay-per-click ad or an email, the last click leading to the sale received 100% of the credit, even though the transaction was more complicated than that. But how does one attribute a percentage of the sale to the various channels that contributed to it?
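To make the last-click oversimplification concrete, here is a minimal Python sketch, purely illustrative, that contrasts last-click attribution with a simple equal-credit (linear) model. The channel names and sale amount are hypothetical:

```python
# Illustrative sketch only: contrasts last-click attribution with a
# simple equal-credit (linear) model. Channel names and the sale
# amount are hypothetical.

def last_click(path: list[str], sale: float) -> dict[str, float]:
    """Give 100% of the credit to the final touchpoint."""
    credit = {channel: 0.0 for channel in path}
    credit[path[-1]] = sale
    return credit

def linear(path: list[str], sale: float) -> dict[str, float]:
    """Split the credit equally across every touchpoint."""
    share = sale / len(path)
    credit = {channel: 0.0 for channel in path}
    for channel in path:
        credit[channel] += share
    return credit

journey = ["organic_search", "site_visit", "email", "ppc_ad"]
print(last_click(journey, 100.0))  # ppc_ad gets the full $100
print(linear(journey, 100.0))      # each channel gets $25
```

The open question for AI liability is the analogous one: how to apportion responsibility across the model builder, the data provider, the party that deployed the system, and the user.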

Similar discussions are happening in AI circles now, particularly those focused on AI law and potential liability. Frameworks are now being created to help organizations translate their principles and values into risk management practices that can be integrated into processes and workflows.

HR bots

More HR departments are using AI-powered chatbots as the first line of candidate screening. After all, who wants to read through a sea of resumes and interview candidates who aren't really a fit for the position?

"It's something I'm seeing as an employment lawyer. It's becoming used more in all phases of employment from job interviews through onboarding, training, employee engagement, security and attendance, said Paul Starkman a leader in the Labor & Employment Practice Group at law firm Clark Hill. "I've got cases now where people in Illinois are being sued based on the use of this technology, and they're trying to figure out who's responsible for the legal liability and whether you can get insurance coverage for it."


Illinois is the only state in the US with a statute that deals with AI in video interviews. It requires companies to provide notice and get the interviewee's express consent.

Another risk is that there still may be inherent biases in the training data of the system used to identify likely "successful" candidates.

Then there's employee monitoring. Some fleet managers are monitoring drivers' behavior and their temperatures.

"If you suspect someone of drug use, you've got to watch yourself because otherwise you've singled me out," said Peter Cassat, a partner at law firm Culhane Meadows.

Of course, one of the biggest concerns about HR automation is discrimination.

"How do you mitigate that risk of potential disparate impact when you don't know what factors to include besides to include or exclude candidates??" said Mickey Chichester Jr., shareholder and chair of the robotics, AI and automotive practice group at law firm Littler. "Involve the right stakeholders when you're adopting technology."


Biometrics

No data is more personal than biometrics. Illinois has a law specific to this called the Biometric Information Privacy Act (BIPA), which requires notice and consent.

A famous BIPA case involves Facebook, which agreed to pay $650 million in a class action settlement over collecting the facial recognition data of 1.6 million Illinois residents.

"You can always change your driver's license or social security number, but you can't change your fingerprint or facial analysis data," said Clark Hill's Starkman. "[BIPA] is a trap for unwary employers who operate in many states and use this kind of technology. They can get hit with class actions and hundreds of thousands of dollars in statutory penalties for not following the dictates of BIPA."

Autonomous cars

Autonomous cars involve all kinds of legal issues, ranging from IP and product liability to regulatory non-compliance. Clearly, one of the key concerns is safety, but if an autonomous vehicle runs over a pedestrian, who should be liable? Even if the car manufacturer were found solely responsible for an outcome, it might not be the only party bearing the burden of the liability.

"From a practical standpoint, a lot of times a car manufacturer will tell the component manufacturers, 'We're only going to pay this amount and you guys have to pay the rest,' even though everybody recognizes that it was the car manufacturer that screwed up," said David Greenberg, a partner at law firm Greenberg & Ruby. "No matter how smart these manufacturers are, no matter how many engineers they have, they're constantly being sued, and I don't see that being any different when the products are even more sophisticated. I think this is going to be a huge field for personal injury [and] product liability lawyers with these various products, even though it might not be a product that can cause catastrophic injuries."


Intellectual property

IP law covers four basic areas: patents, trademarks, copyrights, and trade secrets. AI touches all of those areas, depending on whether the issue is functional design or use (patents), branding (trademarks), content protection (copyrights) or a company's secret sauce (trade secrets). While there isn't space to cover all the issues in this piece, one thing to think about is AI-related patent and copyright licensing.

"There's a lot of IP work around licensing data. For example, universities have a lot of data and so they think about the ways they can license the data which respects the rights of those from which the data was obtained with its consent, privacy, but it also has to have some value to the licensee," said Dan Rudoy, a shareholder at IP law firm Wolf Greenfield. "AI includes a whole set of things which you don't normally think about when you think of software generally. There's this whole data side where you have to procure data for training, you have to contract around it, you have to make sure you've satisfied the many privacy laws."

As has been historically true, the pace of technology innovation outpaces the rate at which governmental entities, lawmakers and courts move. In fact, Rudoy said a company may decide against patenting an algorithm if it's going to be obsolete in six months.

Bottom line

Companies are thinking more about the risks of AI than they have in the past, and the discussions necessarily need to be cross-functional: technologists don't understand all the potential risks, and non-technologists don't understand the technical details of AI.

"You need to bring in legal, risk management, and the people who are building the AI systems, put them in the same room and help them speak the same language," said Rudoy. "Do I see that happening everywhere? No. Are the larger [companies] doing it? Yes."

 

Follow up with these articles about AI ethics and accountability:

AI Accountability: Proceed at Your Own Risk

Why AI Ethics Is Even More Important Now

Establish AI Governance, Not Best Intentions, to Keep Companies Honest   

 

About the Author(s)

Lisa Morgan

Freelance Writer

Lisa Morgan is a freelance writer who covers big data and BI for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit. Frequent areas of coverage include big data, mobility, enterprise software, the cloud, software development, and emerging cultural issues affecting the C-suite.
