How TD Bank Is Transforming Its Data Infrastructure
TD Bank has embarked on an effort to transform traditional banking data infrastructure into a more modern system built on a new application framework and APIs. This system leverages TD Bank's data to deliver insights on-demand.
Banking can be a conservative industry with many regulations and reporting requirements to satisfy, and yet it is also the hub of business transactions. Banks have the potential to monitor and analyze transactional data, and even provide advice on how to optimize operations for businesses that do a high volume of transactions.
TD Bank's data potential helped bring Mok Choe to the firm two years ago from his previous position at Commonwealth Bank of Australia. Choe is SVP and chief data architect at TD Bank. He has spent the last 24 months modernizing the organization's data infrastructure to take advantage of the opportunities of a new age.
TD Bank has more than 1,300 branches in the US, and nearly 1,200 branches in Canada, where it is based and operates as Toronto-Dominion Bank. TD Bank in the US is a subsidiary of the Canadian company.
Choe told InformationWeek in an interview that he calls this new era Banking 3.0. It goes well beyond the old days, when consumers would come into the bank branch office (Banking 1.0), or would interact with the company via a website (Banking 2.0). The bank of the future will go where the consumer is, he said.
His goal is to create data-as-a-service for internal users, such as the marketing organization within TD Bank. That's because TD Bank, like so many other organizations across traditional industries, is facing new competition from digital-native startups.
To compete, the banking company's marketing department needs better customer data, and it needs that data much faster than ever before. The Holy Grail, he said, is to eventually offer such services to external customers as well. For now, Choe's job is focused on creating the next-generation data infrastructure for fast, accurate delivery of data.
Getting from here to there is no easy task. Many banks today are still operating on systems built for those earlier generations of banking, and the conservative nature of the industry and its regulations can be a drag on progress.
Moving and Transforming Data
By Choe's estimate, the banking industry today spends about 50% of its time and money getting data where it needs to be and gaining access to that data.
The data in banking is relational, and it's all about transactions. Banks "flatten" the data overnight, and then they unflatten it in the morning and put it into an ETL (extract, transform, and load) environment, which is relational, he said. Then they put it into a data warehouse or a data mart.
Each of these cycles represents an enormous investment of time and money, he said. The mission, then, is to transform how banks work with their data to streamline those cycles and make the data and analysis available to those who need it, when they need it.
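The nightly cycle Choe describes can be sketched as a classic extract-transform-load batch job. The records, field names, and in-memory "warehouse" below are hypothetical, purely to illustrate the flatten-and-load pattern, not TD Bank's actual pipeline:

```python
# Sketch of a nightly ETL cycle: nested raw transaction records are
# "flattened" into uniform relational rows, then loaded into a simple
# in-memory stand-in for a data warehouse table.

raw_transactions = [
    {"account": "A-100", "details": {"type": "debit", "amount": 42.50}},
    {"account": "A-200", "details": {"type": "credit", "amount": 310.00}},
]

def extract():
    """Extract: pull the day's raw records from the source system."""
    return raw_transactions

def transform(records):
    """Transform: flatten nested records into uniform rows."""
    return [
        {"account": r["account"],
         "type": r["details"]["type"],
         "amount": r["details"]["amount"]}
        for r in records
    ]

def load(rows, warehouse):
    """Load: append the flattened rows to a warehouse table."""
    warehouse.setdefault("transactions", []).extend(rows)

warehouse = {}
load(transform(extract()), warehouse)
```

Each run of this cycle touches every record, which is why streamlining it, or replacing it with on-demand access, saves so much time and money.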
Technologies for a New Era
Choe's group focused on a handful of technologies, including:
a cloud computing architecture for the data
an application framework (rather than a monolithic code base)
a set of APIs to build an on-demand architecture for banking data
The application framework and API strategy are designed to simplify development and maintenance of the code base. The framework enables the creation of "micro apps." These are small, single function apps. For instance, on the consumer side there may be one for bill pay, or one that offers a transaction history. These apps are independent of one another, but they can be combined to compose a more complex app that offers multiple functions.
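The micro-app idea can be sketched as small, independent, single-function components that a framework combines into a composite app. The function and field names below are hypothetical illustrations, not TD Bank's actual framework:

```python
# Each micro app does one job; a composite app is just an ordered
# combination of micro apps operating on a shared customer context.

def bill_pay(ctx):
    """Single-function micro app: pay the next pending bill."""
    bill = ctx["pending_bills"].pop(0)
    ctx["balance"] -= bill
    return f"paid {bill:.2f}"

def transaction_history(ctx):
    """Single-function micro app: summarize recent transactions."""
    return f"{len(ctx['history'])} transactions on record"

def compose(*micro_apps):
    """Combine independent micro apps into one multi-function app."""
    def app(ctx):
        return [micro(ctx) for micro in micro_apps]
    return app

banking_app = compose(bill_pay, transaction_history)
ctx = {"balance": 500.0, "pending_bills": [120.0], "history": ["t1", "t2"]}
result = banking_app(ctx)
```

Because each micro app stands alone, one can be fixed or replaced without touching the others, which is the maintenance advantage the framework is designed for.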
The data itself is really the biggest part of TD Bank's overall transformation strategy to provide data-as-a-service to internal users.
"You get the data that you need, self-service, on-demand," Choe said.
Vendor Partners
TD Bank is operating much of this in a private cloud, using technologies from vendors including Hadoop distribution company Cloudera, Podium (for its data lake), and data warehouse company Teradata. It is also using Angular, JavaScript, and Hadoop, among other technologies.
Choe said the cloud transition has been completed, and his group continues to explore additional capabilities. Now that data-as-a-service is in production, his organization is working on a project to offer reports-as-a-service or analytics-as-a-service to internal customers looking for business intelligence insights.
The process has been a pioneering effort, because the infrastructure TD Bank created wasn't available off the shelf.
"What is interesting about all these things is that very few of them were in existence before we started," Choe said. "They didn't exist in some big technology vendor's catalog. We had to tell these vendors that this is what we wanted to do."
Future Directions
That's likely the approach that Choe's team will end up taking in the future as it explores additional ways for banks to leverage data to create value in the broader marketplace.
For instance, one project might look at harvesting much more data, including SKU data, from retail transactions at smaller retailers, and providing analytics and prescriptive insights to merchants participating in the service. While big merchants guard such data as proprietary, smaller merchants could regard the process as a value-added service.
"It's time to start leveraging what we have," Choe said. "It's time to shift to value creation."