re: SAP Says Hana Tests Well On Performance, Scalability
I think the real concern for anyone considering HANA as a future RDBMS is reliability and resiliency. If you lose a server or hit a memory error (which is not uncommon), you lose all of that data, because DRAM holds no persistent state. That is not a major problem for OLAP and data warehouse workloads, where you can simply reload the read-only batch, but it is a real problem for OLTP workloads, which have a constant stream of incoming writes. The database would not be gone permanently, but you would have to reload it from a snapshot copy on hard disk, make sure the data is synchronized, and then restart it, all of which means downtime. Maybe this can be solved by clustering and mirroring the data across a bunch of servers in a Hadoop-style configuration, but if that is possible, I haven't heard anything about it.
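To make the recovery concern concrete, here is a toy sketch of the generic snapshot-plus-log approach that in-memory stores typically need; this is not HANA's actual mechanism, and all class and file names here are invented for illustration. The point is that any write made after the last snapshot exists only in DRAM unless it was also logged to disk, and recovery means reloading the snapshot and then replaying that log before the database can come back up.

```python
import json
import os


class InMemoryStore:
    """Toy in-memory key-value store showing why recovery needs both a
    disk snapshot and a log of writes made after that snapshot.
    (Hypothetical illustration, not SAP HANA's real persistence layer.)"""

    def __init__(self, snapshot_path, log_path):
        self.snapshot_path = snapshot_path
        self.log_path = log_path
        self.data = {}  # lives in DRAM only; lost on server or memory failure

    def put(self, key, value):
        # Append to the durable log first, then update the in-memory state,
        # so a crash never loses an acknowledged write.
        with open(self.log_path, "a") as log:
            log.write(json.dumps({"key": key, "value": value}) + "\n")
        self.data[key] = value

    def snapshot(self):
        # Persist the full state; writes after this point live only in the log.
        with open(self.snapshot_path, "w") as f:
            json.dump(self.data, f)
        open(self.log_path, "w").close()  # log restarts from the snapshot point

    def recover(self):
        # The downtime step: reload the snapshot, then replay logged writes
        # to resynchronize before the store can serve traffic again.
        self.data = {}
        if os.path.exists(self.snapshot_path):
            with open(self.snapshot_path) as f:
                self.data = json.load(f)
        if os.path.exists(self.log_path):
            with open(self.log_path) as f:
                for line in f:
                    rec = json.loads(line)
                    self.data[rec["key"]] = rec["value"]
```

For an OLAP reload of a read-only batch, the log replay step is unnecessary; for OLTP, it is exactly the part that makes recovery slow and is why mirroring across servers would matter.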
A second issue with HANA is that SAP writes about near-real-time responses as though they were always hugely beneficial. In a trading application or a massive analytics application, they will be an advantage. For the majority of OLTP workloads, though, it is not clear that anyone gains a unique business advantage from reaching an SAP general ledger record or a benefits enrollment screen half a second faster than before. If the price is the same or less, then you might as well take the faster response times. If that $600,000 number is correct, it would probably be lower than the cost of a tier-one 100 TB SAN alone, and IBM's X5 gear, with its memory expansion technology, is well suited to this workload, so hitting that price might be possible.