AWS, Anthropic, and the DTCC Talk the Labors of Responsible AI

At the AWS Financial Services Symposium, Michael Kearns, Michael Gerstenhaber, and Johnna Powell discussed approaches to keep AI operating fairly and on the level.

Joao-Pierre S. Ruth, Senior Editor

June 7, 2024

3 Min Read
Michael Kearns, Michael Gerstenhaber, and Johnna Powell discuss responsible AI at the AWS Financial Services Symposium. (Photo by Joao-Pierre S. Ruth)

On Thursday in New York, the AWS Financial Services Symposium hosted a panel on “Responsible AI” where a trio of stakeholders discussed how to inject the technology with some fair-mindedness.  

Michael Kearns, Amazon scholar and professor in the computer and information science department at the University of Pennsylvania, moderated the chat with Michael Gerstenhaber, vice president of product with Anthropic, and Johnna Powell, managing director of technology research and innovation with the Depository Trust & Clearing Corporation (DTCC).

In their conversation about challenges and opportunities in deploying AI responsibly in the financial services world, Kearns said his role at AWS includes overseeing operational work around responsible AI. He and his team put technical and operational procedures in place, he said, to audit models during training for concerns such as demographic bias or privacy exposure. “Now with generative AI, there’s a whole new host of possible concerns such as hallucinations and things that aren’t exactly privacy concerns but are adjacent, like intellectual property,” Kearns said.

Finding a path to responsible AI can be a very collaborative effort within organizations. Powell said DTCC started its GenAI journey about a year ago, tasked with defining a strategy to go forward. “We launched a massive survey across the DTCC; we did a bunch of internal research, external research, surveys, and so on,” she said. “We came up with about 400 use cases and had to sift through all of them.” After combing through those use cases, the team gauged them against criteria such as feasibility, then winnowed the list down to what she called a few powerful use cases.

“Optimizing productivity was the one we focused on most,” Powell said. That included developer productivity and legacy code modernization. A lot of the use cases, she said, focused on the key theme of synthesis -- of taking in data, summarizing it, and putting it into digestible formats.

Anthropic, whose founders include expatriates of OpenAI, is a San Francisco-based startup focused on AI safety. That includes conducting research on AI -- its risks and opportunities -- in order to build reliable AI systems. “My primary goal is to enable engineers to use generative AI safely,” Gerstenhaber said.

Policymakers and regulation could play a part in shaping responsible AI; however, the policy landscape is far from uniform when it comes to AI. “I come from the digital assets world where regulation, or the lack of regulation, can really screw you up in terms of innovation progression,” Powell said. “Japan is well-advanced in the regulatory landscape and digital assets.” She said Japan has already made moves to draft guidelines about AI and copyright issues. “We’re still struggling in the US,” Powell said. She hopes to see clarity in domestic regulations in these areas, though without policies becoming so restrictive that they stall progress.

Kearns asked about guardrails versus training for AI, in particular as ways to clamp down on biases that can get baked in during model training. Many models have gotten better about bias, he said, though not necessarily because of how they were trained.
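To make the distinction concrete: a guardrail, in its simplest form, is a check applied to model output after generation, outside the model itself, whereas training-time approaches bake the behavior into the model. The following is a minimal, hypothetical Python sketch of a post-generation guardrail; the patterns and function names are illustrative assumptions, not any vendor's actual filtering logic:

```python
import re

# Hypothetical post-generation guardrail: screen the model's output
# against blocked patterns after the fact, rather than relying on the
# model's training. The patterns below are purely illustrative.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # US SSN-like format
    re.compile(r"guaranteed returns", re.IGNORECASE),   # risky financial claim
]

def apply_guardrail(model_output: str) -> str:
    """Return the output unchanged unless a blocked pattern appears."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(model_output):
            return "[response withheld: output failed a post-generation check]"
    return model_output

print(apply_guardrail("Our fund offers guaranteed returns of 12%."))
```

A runtime check like this is easy to layer on but only catches what its patterns anticipate, which is part of why the panel's attention turned to enforcing behavior in training instead.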

Gerstenhaber touted the value of Constitutional AI, which is Anthropic’s method that gets models to adhere to a written list of principles -- a constitution to follow for its responses. He said that while steps can be taken when gathering what is believed to be “safe” training data, Constitutional AI can quickly run automated assessments to get AI to behave responsibly. “I’m extremely bullish on the idea that we can enforce these things in training, that we can provide that level of safety as a service,” he said.
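The core mechanism behind Constitutional AI is a self-critique loop: the model drafts a response, critiques that draft against the written principles, and revises. Here is a minimal Python sketch of that loop; the sample principles and the `model_complete()` stub are assumptions for illustration, not Anthropic's actual constitution or API:

```python
# Illustrative sketch of a Constitutional-AI-style critique-and-revise loop.
# The principles and model_complete() stub are hypothetical placeholders.

CONSTITUTION = [
    "Avoid responses that reinforce demographic stereotypes or bias.",
    "Do not reveal private or personally identifiable information.",
    "Flag uncertainty rather than stating unverified claims as fact.",
]

def model_complete(prompt: str) -> str:
    """Placeholder for a language model call; swap in a real client."""
    return f"[model response to: {prompt[:60]}...]"

def critique_and_revise(user_prompt: str, max_rounds: int = 2) -> str:
    """Draft a response, then repeatedly critique it against the
    constitution and revise -- the self-supervision idea the method uses."""
    draft = model_complete(user_prompt)
    for _ in range(max_rounds):
        critique = model_complete(
            "Critique the response below against these principles:\n"
            + "\n".join(f"- {p}" for p in CONSTITUTION)
            + f"\n\nResponse:\n{draft}"
        )
        draft = model_complete(
            f"Revise the response to address this critique:\n{critique}\n\n"
            f"Original response:\n{draft}"
        )
    return draft

if __name__ == "__main__":
    print(critique_and_revise("Summarize today's trading anomalies."))
```

Because the critique step is itself a model call, these assessments can be automated and run at scale, which is the property Gerstenhaber pointed to when describing safety enforced in training.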

Powell’s closing thoughts on the responsible deployment of AI included a bit of a reality check that might dispel some assumptions surrounding the technology, though she remained grounded in its inevitable presence within organizations. “I like to say to people ... AI and these training models are not coming to take over your job, but someone who uses AI might,” she said. “It is important to get the right tools in the right hands and democratize AI across the company.”

About the Author

Joao-Pierre S. Ruth

Senior Editor

Joao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight.

