Big data has nearly as many definitions as there are consultants claiming to be big data experts. The best way to define big data is to talk to people who are using vast databases, ones previously too big for timely analysis. They're creating new products and services and uncovering information once thought unreachable by all but the biggest computing resources.
In this exclusive video interview, I speak with Dr. Gordon Springer, associate professor in the computer science department at the University of Missouri and scientific director of the UM Bioinformatics Consortium. It's tough to imagine anything much bigger than genetic modeling data. In fact, Springer's department generates up to six terabytes of data a week in conducting its leading-edge research in bioinformatics, genetics, and environmental modeling. To keep up with all that data, Springer uses Appistry Inc.'s Ayrris/BIO software, which lets the university run parallel analyses for faster processing.
Springer talks about the university's new capability not only to collect and analyze these huge data samples, but to use them to create world-changing scenarios. For instance, Springer's current research projects include creating new disease-resistant plants that can help feed a burgeoning world population.
It is the ability not just to build big data systems but to develop the analysis systems around that data that will continue to drive interest in big data across the research, business, and scientific communities.
VP and Editorial Analyst, InformationWeek