Even though you could find plenty of cloud providers that offered data storage and analytics services to enterprise customers, early big data projects frequently took place in-house. This fact led to many failed big data projects that, in many ways, tarnished the image of what big data analytics can do.
But as more and more big data success stories continue to trickle out, a picture starts to form. That picture shows that big data analytics will indeed be the key to business success in a digitally transformed world -- and that you're more likely to succeed if your data and analytics happen in the cloud.
There is plenty of evidence that data analytics will play a huge role in the competitiveness of businesses in the coming years. What's less understood is why big data is more successful in the cloud than on premises. In this article, we'll explore those reasons and point out why big data in the cloud has finally reached a tipping point in terms of enterprise adoption.
Big data: A big project
Most enterprise IT shops that attempted a big data project in-house knew that it wasn't going to be easy. But even with a great deal of planning and grit, projects failed because they simply consumed too many hours and required skill sets that, at the time, were scarce. Even in 2017, data admins who are proficient in big data platforms such as Hortonworks or MapR can basically write their own ticket. Many projects failed to complete because they experienced significant "brain drain" as big data platform talent moved on to greener pastures.
Understanding the role of the data scientist
While there were significant shortages on the infrastructure side, the real downfall of in-house big data projects was on the analytics side. After all, this was really enterprise IT's first foray into true, in-depth data science at such a scale. IT leaders attempted to shield those who analyzed data from those who managed the operations side of the house. In reality, data scientists are better off working in conjunction with the operations team -- whether that team is internal staff or a cloud provider. Doing so allows data scientists to better tune their models, streamline processes and, ultimately, reduce the time required to squeeze insightful information from mountains of data.
Why we've reached a tipping point
Despite the clear benefits of outsourcing the care and feeding of big data platforms and databases, the real reason enterprises are moving their big data efforts to the public cloud is that that's where most of their data is, or will soon reside. No matter how you slice it, moving massive amounts of data is a daunting task. But over time, data at many enterprise companies organically migrated as IT departments began leveraging the low cost of cloud storage.
In a way, big data five or 10 years ago was simply ahead of its time. Not only did we misjudge the complexities of building and implementing a big data platform, we misunderstood critical big data roles. Leveraging the cloud to support the underlying big data infrastructure lets in-house IT departments focus on what's important: highly tuned analytics.
Big data will receive a big redemption
The goal of big data analytics is to create a state of operation known as situational awareness. This is where data is analyzed to the point where actionable decisions can be made rapidly -- in some cases, in real time. But past failures have led many enterprises to sour on their big data ambitions. If that's your opinion today, you may want to reconsider, especially if your data storage has shifted to the cloud. By offloading back-end duties to a third-party provider, you put the business in an ideal position to focus on using data to make accurate decisions about the company's direction. For many, this is the tipping point we've all been waiting for. The question then becomes: Who will take that leap of faith?