For data deduplication to work well, it must be tightly integrated into the storage array's operating system. If that operating system's source code is more than three years old, integrating a dramatically different way of placing data on disk is going to be complex. The workaround is post-process deduplication, which analyzes each file after it has been written, comparing its blocks with data the system already stores to identify redundancy--a time-consuming process (see story, "With Data Deduplication, Less Is More"). Another drawback of this method is that it creates two storage areas to manage: one holding data waiting to be examined for duplicates, and one holding data after it's been examined.
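For readers who want the mechanics, here's a minimal sketch of that block-level comparison, assuming fixed-size blocks and SHA-256 fingerprints (production systems typically use variable-size chunking); the names block_index and dedupe_file are illustrative, not any vendor's actual API:

    import hashlib

    BLOCK_SIZE = 4096  # fixed-size blocks for simplicity; real systems often chunk variably

    block_index = {}   # fingerprint -> stored block; stands in for the on-disk block store

    def dedupe_file(path):
        """Post-process pass: split an already-stored file into blocks,
        keeping only blocks the system hasn't seen before."""
        recipe = []  # fingerprints from which the file can be rebuilt
        with open(path, "rb") as f:
            while block := f.read(BLOCK_SIZE):
                fingerprint = hashlib.sha256(block).hexdigest()
                if fingerprint not in block_index:
                    block_index[fingerprint] = block  # new, unique segment: store it
                recipe.append(fingerprint)            # duplicates become mere references
        return recipe

Walking every stored file through a loop like this after the fact is what makes the method slow, and the split between raw files and deduplicated recipes mirrors the two storage areas described above.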
One of the benefits of deduplicated systems is that they store only unique data segments, so only new segments need to be replicated to remote locations. With the post-process method, you have to wait until the deduplication step is complete before data can be replicated. This can delay the update of the disaster-recovery site by six to 10 hours.
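Continuing the sketch above, replicating only new segments might look like the following; remote_fingerprints and send are hypothetical stand-ins for the disaster-recovery site's index and the wire protocol:

    def replicate_new_segments(recipe, local_blocks, remote_fingerprints, send):
        """Ship only the segments the disaster-recovery site lacks.

        recipe: the file's fingerprint list from dedupe_file() above
        local_blocks: local fingerprint -> block store (block_index above)
        remote_fingerprints: set of fingerprints the DR site already holds
        send: callable that transmits one (fingerprint, block) pair
        """
        for fingerprint in recipe:
            if fingerprint not in remote_fingerprints:
                send(fingerprint, local_blocks[fingerprint])
                remote_fingerprints.add(fingerprint)

The catch is that none of this can start until the post-process pass has finished, which is where the six-to-10-hour lag comes from.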
As a result, companies such as Data Domain, Permabit, and Diligent Technologies, which built data deduplication into the core of their technology from the start, have a distinct advantage. Other vendors will have to make post-process deduplication much more seamless, exit deduplication altogether, or rewrite their code bases to support in-line deduplication.