Oracle and IBM trade claims while independents lead the big-data revolution.
If the name "Exadata" really fits, are there any namable/quotable customers managing hundreds of terabytes, let alone petabytes?
On the business intelligence front, Oracle 11g seems like a pretty conservative release; as analysts and competitors have questioned, why no in-memory technology, text analytics capabilities or sign of a cloud strategy?
Also in the BI realm, Oracle has Siebel-style analytics and analytic applications (offering trending off of historical data), but Oracle doesn't seem to offer anything in the (SAS- and IBM SPSS-led) realm of advanced predictive and statistical analytics. Does Oracle have offerings that go unrecognized or does it have plans to add advanced analytics capabilities?
Oracle has a strong Master Data Management (MDM) portfolio, but the recent SilverCreek acquisition points to gaps in the data-quality area, widely identified as the number-one obstacle to successful BI deployments. What's coming from Oracle to address data quality and data governance?
To give you some idea of the depth and maturity I'm seeing from independent vendors, yesterday I interviewed Eric Williams, Executive VP and CIO at Catalina Marketing. It's a giant company that has many, many large databases and several Netezza deployments, the first of which went live nearly seven years ago.
One of Catalina's largest databases holds the history of the majority of customer loyalty program checkout transactions in the U.S. over the past three years. The company has 2.5 petabytes of information in total.
Three years ago (when Exadata was still on the drawing board), Catalina started working with SAS and Netezza to move the company's scoring of purchase-behavior models into the Netezza database for faster processing. As a result, models that used to take half a day to process can now be scored in 60 seconds.
With this performance boost, Catalina expects to be able to develop and test 600 models per year with the same staff that used to deliver 40 to 50 new models per year.
The coupons that are printed and handed to you with your receipt when you check out at a supermarket are generally delivered by Catalina. Before the project described above, those coupons were printed based on simplistic rules like, "he's buying dog food so give him a dog food offer" or "she's buying diet soda, so serve up a diet soda coupon."
Today, every coupon printed is unique to the individual customer, customized based on three years' worth of purchase history (assuming the customer uses a loyalty card). The models spot latent correlations that are incredibly powerful.
Without targeting, redemption rates on coupons are around 1%. With basic targeting (like the dog food/diet soda examples above) redemption rates rise to 6% to 10%. With the predictive models Catalina is now using, redemption rates are as high as 25%.
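To put those percentages in perspective, here's a quick back-of-the-envelope lift calculation. The rates are the ones quoted above; the comparison itself is my own illustration, not anything Catalina published:

```python
# Relative lift implied by the redemption rates quoted in the article.
# The rates come from the article; the lift math is an illustration only.

def lift(targeted_rate: float, baseline_rate: float) -> float:
    """Return how many times higher the targeted rate is than the baseline."""
    return targeted_rate / baseline_rate

untargeted = 0.01                    # ~1% redemption with no targeting
basic_low, basic_high = 0.06, 0.10   # 6-10% with simple rules-based targeting
predictive = 0.25                    # up to 25% with the predictive models

print(lift(predictive, untargeted))  # roughly 25x over untargeted coupons
print(lift(predictive, basic_high))  # roughly 2.5x over the best rules-based result
```

In other words, even against well-tuned rules-based targeting, the predictive models multiply redemption by a factor of two and a half or more.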
These are the kinds of bottom-line, dollars-and-cents statistics that are far more compelling than feeds and speeds. It's time for Oracle to talk about customer deployments rather than the sales pipeline.