One constant theme running through IT is the quest to get everything to connect with everything else, seamlessly. The world of big data took a step in this direction with the announcement of Apache Arrow by the Apache Software Foundation.
The Apache Software Foundation expects Apache Arrow to boost the performance of analytical workloads by as much as 100-fold and to cut communications overhead. Arrow will go forward as a "top-level project," skipping the usual incubation period.
Jacques Nadeau's company, Dremio, is a bit of a stealth big data startup, one that has specialized in the Apache Drill project up until now. Nadeau and Dremio co-founder and CEO Tomer Shiran came to the company from MapR, an Apache Hadoop distribution company.
Arrow grew out of a need for improved performance in big data processing that many users were experiencing, Nadeau said. He explained the details in a blog post today and in an interview with InformationWeek.
"The core of Arrow is making processing systems faster," Nadeau continued. Arrow does this by letting different big data components talk to each other more easily: it defines a common in-memory representation of the data itself, so that data does not have to be copied and converted as it moves from Spark to Cassandra, or from Apache Drill to Kudu, for example.
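The idea can be sketched in plain Python (a toy illustration only; Arrow's real format is far richer, and the column and engine names here are hypothetical): two consumers read the same in-memory column buffers through zero-copy views, with no serialization or format conversion between them.

```python
from array import array

# A "record batch" in a shared columnar layout: one contiguous
# buffer per column, rather than one object per row.
batch = {
    "user_id": array("q", [101, 102, 103]),      # 64-bit ints
    "score":   array("d", [0.5, 0.25, 0.75]),    # 64-bit floats
}

def engine_a_sum_scores(columns):
    # Hypothetical "engine A": reads the score buffer directly
    # through a zero-copy view.
    return sum(memoryview(columns["score"]))

def engine_b_max_user(columns):
    # Hypothetical "engine B": reads the very same buffers, with
    # no copy or conversion step in between.
    return max(memoryview(columns["user_id"]))

print(engine_a_sum_scores(batch))  # 1.5
print(engine_b_max_user(batch))    # 103
```

In today's pipelines, each such handoff would typically serialize the data out of one system's internal format and rebuild it in the other's; a shared representation makes that step unnecessary.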
Arrow also features columnar in-memory complex analytics. This is essentially a fusion of columnar data storage (like that provided by Apache Parquet) with systems that hold data in memory (like SAP HANA and Apache Spark), plus support for complex hierarchical and nested data structures (like JSON). Nadeau said that systems typically support one of these three capabilities, and a few support two, but Arrow is the first to support all three. And it is all open source.
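How nested, JSON-like data can still live in flat columns is worth a sketch. The offsets-plus-values encoding below mirrors the general technique columnar systems such as Arrow use for list-valued fields, though the real format adds validity bitmaps and other machinery:

```python
# Three JSON-like records with a nested list field:
#   {"name": "a", "tags": ["x", "y"]}
#   {"name": "b", "tags": []}
#   {"name": "c", "tags": ["z"]}
#
# Columnar encoding: flat value buffers, plus an offsets array
# marking where each row's list begins and ends.
names = ["a", "b", "c"]
tag_values = ["x", "y", "z"]    # every tag, flattened into one buffer
tag_offsets = [0, 2, 2, 3]      # row i owns tag_values[offsets[i]:offsets[i+1]]

def tags_for_row(i):
    # Reconstruct row i's nested list from the flat buffers.
    return tag_values[tag_offsets[i]:tag_offsets[i + 1]]

print(tags_for_row(0))  # ['x', 'y']
print(tags_for_row(1))  # []
print(tags_for_row(2))  # ['z']
```

The nested structure is preserved, yet every buffer stays flat and contiguous, which is what keeps columnar scans fast.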
Further gains in processing speed are achieved by using CPUs more efficiently, Nadeau explained. "A lot of CPU cycles are wasted moving data between systems."
Arrow improves CPU performance by laying out data to match CPU instructions and exploit cache locality, streamlining the flow of data into the CPU. The processor can stay busy computing rather than searching for data and pulling it from the cache.
This data alignment also permits the use of superword and SIMD (single instruction, multiple data) instructions, which further boost performance. SIMD executes multiple operations in a single clock cycle, which the ASF says can increase performance by two orders of magnitude. By optimizing cache locality, data pipelining, and SIMD, performance gains of 10x to 100x can be achieved, Nadeau said.
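SIMD itself is a hardware feature, but the layout that enables it can be illustrated. In the toy comparison below (field names hypothetical), scanning one contiguous column touches only the bytes the query needs, while a row layout drags every field of every record through the cache just to read one of them:

```python
from array import array

# Row layout: each record is a tuple, with fields interleaved in memory.
rows = [(i, float(i), "label-%d" % i) for i in range(1000)]

# Columnar layout: the numeric field alone, in one contiguous buffer.
# This contiguity is what lets a compiled engine issue SIMD loads.
scores = array("d", (float(i) for i in range(1000)))

def sum_row_layout(rows):
    # Must walk every row object just to read a single field.
    return sum(r[1] for r in rows)

def sum_columnar(col):
    # One linear pass over contiguous doubles; hardware prefetching
    # and (in compiled engines) SIMD apply directly to this pattern.
    return sum(col)

print(sum_row_layout(rows) == sum_columnar(scores))  # True
```

Pure Python cannot show the vectorization itself, but the access-pattern difference is the same one a SIMD-capable engine exploits.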
Apache Arrow should come into its own as users tap different tools for different missions in the realm of big data, Nadeau pointed out. "We are making each workload more efficient…[Arrow] will change a lot of things," he said.
William Terdoslavich is an experienced writer with a working understanding of business, information technology, airlines, politics, government, and history, having worked at Mobile Computing & Communications, Computer Reseller News, Tour and Travel News, and Computer Systems ...