
Teradata Adds to a Growing Portfolio

Teradata introduced the Teradata 1550 "extreme data appliance" at its user conference this week. The appliance starts at 50 TB (based on compression) for a single node and can scale to 50 PB (theoretical data size). The 1550 is positioned for the very large data volume problem rather than typical data warehouse usage.

When you look at data usage, there are two types of large data problems. The classic DW model involves analyzing subsets of the total data and occasionally scanning all of it. The other model is the need to analyze very large data sets that would normally be impractical, such as a year of web traffic or call detail records.

The Teradata product family now covers the entire platform range, from smaller projects (subject area marts, smaller warehouses) to real-time large enterprise data warehouses. A plus with these offerings is that they run the same database across the entire product family. Teradata has also been very open about product capabilities and pricing, something some of the appliance vendors could learn from.


Teradata has a strong competitive position in the market. Most of the appliance vendors are still in venture-funded startup mode. In an uncertain financial market this means they have to work harder to preserve cash, since the likelihood of closing new funding rounds is low, as is the likelihood of an IPO or acquisition. DATAllegro was lucky to get out of the market when they did. If investment funds stay tight for the next few quarters, we could see some of the companies with lower cash positions run into trouble.

Teradata also announced the next major release of the database, improving performance and manageability (the types of things you'd expect in any major release). They added new features like automatic sensing of data temperature so data placement can be optimized, geospatial capabilities, and improved workload management features.

They talked about solid-state disks to augment performance as part of the "virtual storage" announcement, allowing SSD and spinning disks to be used together with software that moves data to and from SSD based on performance rules.
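The temperature-sensing and virtual storage ideas boil down to the same mechanism: watch how often data is accessed, and keep the hottest blocks on the fastest tier. A minimal sketch of that heuristic, assuming a simple access-count measure of "temperature" (hypothetical; Teradata's actual placement rules are not public):

```python
from collections import Counter

class TieredStore:
    """Toy model of temperature-based data placement across SSD and spinning disk."""

    def __init__(self, ssd_capacity):
        self.ssd_capacity = ssd_capacity   # max number of blocks kept on the SSD tier
        self.access_counts = Counter()     # "temperature" = recent access count per block
        self.ssd = set()                   # block ids currently placed on SSD

    def record_access(self, block_id):
        self.access_counts[block_id] += 1

    def rebalance(self):
        # Promote the hottest blocks to SSD; everything else stays on spinning disk.
        hottest = [b for b, _ in self.access_counts.most_common(self.ssd_capacity)]
        self.ssd = set(hottest)

    def tier_of(self, block_id):
        return "ssd" if block_id in self.ssd else "hdd"

store = TieredStore(ssd_capacity=2)
for block in ["a", "a", "a", "b", "b", "c"]:
    store.record_access(block)
store.rebalance()
print(store.tier_of("a"), store.tier_of("c"))  # hot block lands on SSD, cold on HDD
```

A real implementation would also decay old counts and weigh the cost of migrating data, but the core loop is the same: measure, rank, move.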

They showed a prototype storage array that uses 128 GB solid state drives, and Toshiba recently announced that they have been able to build 256 GB SSDs with a read rate of 120 MB/sec. Solid state disks offer incredible performance under random I/O workloads, but Teradata's experience is that they don't perform much better than spinning disks when it comes to database scans. This makes sense, since you can only access the data as fast as the drive controllers and channels can move it. There's significant work ahead to change how I/O subsystems perform before channel performance can match SSD read rates.
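The scan bottleneck is simple arithmetic: sequential throughput is capped by the slowest link in the path, not by the drives alone. A back-of-envelope sketch (the channel and drive-count figures are illustrative assumptions, not vendor specs; the 120 MB/sec SSD read rate is from the Toshiba announcement above):

```python
def effective_scan_rate(drive_rate_mb_s, drives, channel_rate_mb_s):
    """Aggregate drive bandwidth, capped by the I/O channel's bandwidth."""
    return min(drive_rate_mb_s * drives, channel_rate_mb_s)

# Eight SSDs at 120 MB/s could deliver 960 MB/s in aggregate, but an
# assumed 300 MB/s channel caps the scan at 300 MB/s...
print(effective_scan_rate(120, 8, 300))  # -> 300

# ...the same ceiling that a set of fast spinning disks can already
# saturate, which is why SSDs add little for full-table scans.
print(effective_scan_rate(75, 8, 300))   # -> 300
```

Random I/O is a different story: there the drives' seek behavior, not the channel, is the bottleneck, which is where SSDs shine.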

Based on the price of solid state disks and the engineering challenges of improving both I/O subsystem hardware and software performance, don't expect vendors to introduce racks of SSDs any time soon. In a few years it will be more common to see blends of SSD and spinning drives used to boost performance.