Dashboard: IBM Taps Silos with Master Data Management

IBM unveils new Master Data Management (MDM) tools that aim to solve the old problem of poor data quality and consistency.

InformationWeek Staff, Contributor

May 24, 2006


You may never actually achieve a "single version of the truth," but that hasn't kept vendors from dreaming up new ways to attack the problem. In May, IBM unveiled new Master Data Management (MDM) tools that aim to solve the old problem of poor data quality and consistency.

IBM's two major MDM offerings, WebSphere Product Center and WebSphere Customer Center, are the latest versions of technology gained when the company acquired Trigo (two years ago) and DWL (last August), respectively. Both products operate under the same MDM rubric: Rid businesses of their disparate, application-specific data silos and thus eliminate information irregularities. MDM would replace those silos with an enterprise-wide repository, horizontally layered, that cleans incoming data and acts almost like a vast company dictionary. All the attributes that describe particular products or customers--the rules, semantics and contexts surrounding each core business object--would be managed by the repository and presented consistently across the company and to trading partners.
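The "company dictionary" idea can be illustrated with a small sketch: records for the same customer arrive from separate application silos, and a master data layer cleans each one and merges them into a single consistent "golden record." The field names and the survivorship rule (most recently updated silo wins per attribute) are invented for illustration, not IBM's actual schema or behavior.

```python
from datetime import date

def clean(record):
    """Normalize a silo record: collapse whitespace, standardize casing."""
    cleaned = {}
    for key, value in record.items():
        if isinstance(value, str):
            value = " ".join(value.split())
            if key in ("name", "city"):
                value = value.title()
        cleaned[key] = value
    return cleaned

def merge(records):
    """Build one master record: for each attribute, keep the value from
    the most recently updated silo record that actually supplies it."""
    master = {}
    for record in sorted(records, key=lambda r: r["updated"]):
        for key, value in clean(record).items():
            if key != "updated" and value not in (None, ""):
                master[key] = value
    return master

# Two silo views of the same customer, with conflicting and missing fields.
billing = {"name": "  acme corp ", "city": "dayton",
           "updated": date(2006, 1, 5)}
crm = {"name": "Acme Corp", "phone": "555-0100", "city": "",
       "updated": date(2006, 4, 2)}

golden = merge([billing, crm])
```

Here the newer CRM record supplies the name and phone number, while the city survives from the billing system because the CRM record left it blank; every downstream application would then read the same merged record instead of its own silo copy.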

The WebSphere MDM tools are linked to a company's disparate applications through service-oriented architecture (SOA). The knock on SOA has been the difficulty of getting at clean, consistent data. IBM is among the vendors that have "realized that you need an MDM layer to achieve the success promised by SOA," says Gartner analyst Andrew White, since MDM accomplishes the cleaning.

Many of WebSphere Product Center's early adopters have been retailers and other companies that deal with masses of product information and complicated supply chains. Among them is auto parts retailer Carquest, which is currently negotiating licenses for WebSphere Product Center. It plans to use the software to standardize parts information and synchronize that data with vendors and suppliers. The company currently relies on a homegrown system, with information scattered around the organization, often locked in Excel spreadsheets and sometimes on individual employees' PCs, according to Joe Zucchero, Carquest's chief information officer. Furthermore, product data coming from vendors has to be cleaned by a firm that specializes in auto parts data.

Zucchero says Carquest chose IBM's software in part because it requires limited IT support and because many of the retailer's suppliers and vendors are using the same tools.

Each part that Carquest handles has up to 120 attributes or data points--from weights, dimensions and functions to images of the equipment. In contrast to some businesses, the auto parts industry already has a set of data standards, so Carquest won't need to build a set of rules governing the data from scratch.
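Those industry standards boil down to rules the repository can check each part record against. The sketch below is a hypothetical illustration of that kind of attribute validation; the attribute names and rules are invented here, and a real standard would cover far more fields (up to 120 per part, per the article).

```python
# Required attributes and per-attribute validation rules for a parts record.
REQUIRED = {"part_number", "description", "weight_kg"}
RULES = {
    "part_number": lambda v: isinstance(v, str) and v.strip() != "",
    "weight_kg": lambda v: isinstance(v, (int, float)) and v > 0,
}

def validate(part):
    """Return a list of problems; an empty list means the record passes."""
    problems = [f"missing attribute: {a}" for a in sorted(REQUIRED - part.keys())]
    for attr, rule in RULES.items():
        if attr in part and not rule(part[attr]):
            problems.append(f"invalid value for {attr}: {part[attr]!r}")
    return problems

good = {"part_number": "BRK-1042", "description": "Brake pad set",
        "weight_kg": 1.8}
bad = {"part_number": "", "description": "Oil filter"}
```

A record that fails validation would be held for cleaning rather than synchronized out to trading partners.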

Zucchero expects to roll out a pilot project built on WebSphere Product Center by the end of the year and to go live with its trading partners in early 2007. Assessing potential pain points, Zucchero says he'll be keeping an eye on how the software handles unstructured data. To process that information, Carquest had to purchase DB2 Content Manager software to supplement WebSphere Product Center. "It wasn't a huge additional cost," he says. "But it wasn't free, either." --Scott Eden

