Successful cloud implementations are driven by speed, flexibility and fine-grained resource management. Focus on these drivers, and the savings will accumulate.
“Let’s move to the cloud so we can save money.” It’s a remark we often hear from the C-suite. They’ve read that companies of all sizes have cut IT infrastructure costs by moving compute power and data storage to public cloud platforms like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform. What they often don’t know is how these companies actually used cloud platforms to save money.
Too often, IT teams dutifully estimate the costs of replicating everything they have in their data centers on cloud platforms, an approach called “lift-and-shift” that is associated with infrastructure-as-a-service (IaaS). Usually, they conclude that the cloud is more expensive. What gives?
Through our personal and client experiences, we’ve found that companies that successfully move to cloud-based platforms do so not by focusing on overall infrastructure cost, but by focusing on three more important drivers: speed, flexibility, and cost transparency, in that order. Companies that focus on these drivers end up making their IT investments much more efficient — and cost effective — either through reduced spend in compute resources and personnel costs, or by freeing up resources and personnel to work on a larger number of business initiatives.
Speed
By far the biggest benefit of moving analytic and computing infrastructure to cloud-based services is increased speed. Not speed in terms of application performance, but speed in terms of the time between deciding to provision resources and executing code on them.
Most companies have a lengthy procurement process for obtaining hardware and software. Even with existing vendor relationships, obtaining physical hardware can take days or weeks, and then it takes at least several more days to be configured and integrated into a datacenter.
With cloud services, once a cloud platform has been securely connected to your networks, provisioning literally involves a few clicks. While recently testing some architectural proposals, I provisioned a 60-node massively parallel processing SQL database cluster within minutes. The same is true for Azure SQL Data Warehouse, Hadoop clusters, Amazon Redshift database clusters, SMP databases, data stores, individual virtual machines — almost anything included in a modern data center. Your developers no longer need to waste time between concept and execution. The idea you conceive of today can be implemented tomorrow, or even this afternoon.
Flexibility
What if you integrate a cloud-based 10-node MPP SQL cluster into your application architecture today and tomorrow you discover that you’d be better off with 20 nodes? Within a few minutes you can change the cluster size. If you had purchased that cluster for your data center, you’d need to restart the provisioning process and wait for another 10 nodes to arrive. Worse, what if you started with 20 nodes and decide later you really only needed 10 nodes? With physical hardware, you’ve just over-purchased. Using cloud-based services, however, you can simply get rid of the unneeded nodes, right-sizing your computing assets for exactly what you need at any point in time.
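The economics of right-sizing can be illustrated with a back-of-the-envelope sketch. The per-node-hour price and the workload schedule below are hypothetical, not figures from any specific cloud provider:

```python
# Back-of-the-envelope comparison: elastic right-sizing vs. a fixed purchase.
# The $1.00 per node-hour rate and the workload schedule are hypothetical.

NODE_HOUR_COST = 1.00  # assumed price per node per hour

def elastic_cost(schedule):
    """Cost when the cluster is resized to match demand.
    schedule: list of (hours, nodes_needed) segments."""
    return sum(hours * nodes * NODE_HOUR_COST for hours, nodes in schedule)

def fixed_cost(schedule, purchased_nodes):
    """Cost when a fixed number of nodes runs the whole time,
    as with hardware sized for the peak."""
    total_hours = sum(hours for hours, _ in schedule)
    return total_hours * purchased_nodes * NODE_HOUR_COST

# One month: 20 nodes needed for a 100-hour burst, 10 nodes the rest of the time.
month = [(100, 20), (620, 10)]

print(elastic_cost(month))    # 8200.0  (pay only for nodes in use)
print(fixed_cost(month, 20))  # 14400.0 (peak capacity running all month)
```

Under these assumed numbers, sizing for the peak costs roughly 75% more than resizing to match demand — the gap the paragraph above describes as over-purchasing.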
In addition, cloud services often provide more flexibility in networking services together and piping data between them. Need to connect the data in a Hadoop cluster to an analytical database and output results to a BI tool? Point and click.
Fine-Grained Resource Management
The flexibility of cloud-based platforms provides another benefit: fine-grained resource management. Compute power can be turned up and down depending upon the needs of the day. Have a data warehouse process that requires intensive processing power for eight hours per week? Turn up the cluster’s power for those eight hours, and turn it back down for normal operations. Have analytical uses that concentrate during business hours? Provision more resources for those hours and scale back at night, controlling budget dollars.
Cost Transparency
Another benefit we’ve seen from using cloud-based services is cost transparency. Because cloud compute and storage services can be allocated on an application and/or function basis, visibility into the dollars being spent on specific applications is possible in ways that make the costs of initiatives more transparent than ever before.
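Allocating spend by application typically comes down to tagging resources and rolling up the billing records by tag. Here is a minimal sketch of that roll-up; the record format and the dollar figures are invented for illustration:

```python
# Sketch of tag-based cost roll-up: given per-resource spend records tagged
# with an application name, total the spend per application.
from collections import defaultdict

def cost_by_application(records):
    """records: iterable of dicts with 'app' and 'monthly_cost' keys."""
    totals = defaultdict(float)
    for r in records:
        totals[r["app"]] += r["monthly_cost"]
    return dict(totals)

# Hypothetical monthly billing records for two tagged applications.
bill = [
    {"app": "customer-dw", "monthly_cost": 18000.0},  # MPP cluster
    {"app": "customer-dw", "monthly_cost": 4000.0},   # object storage
    {"app": "reporting",   "monthly_cost": 2500.0},   # BI back end
]

print(cost_by_application(bill))
# {'customer-dw': 22000.0, 'reporting': 2500.0}
```

Major providers expose this kind of breakdown directly through cost-allocation tags and billing reports, so each initiative’s run rate is visible line by line.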
As an example, we had a client implement a new analytical customer data warehouse entirely in the cloud (having on-premises systems regularly deliver data to the cloud systems). The total cost to run the new system was less than $300k per year, and the entire application was developed and brought into production in under six months. Similar on-premises systems typically cost millions of dollars in capital expense and take well over a year to provision, install, architect, and implement.
Speed, flexibility, and fine-grained resource management are what’s really driving today’s moves to the cloud. Focus on these factors when implementing new systems or refactoring existing systems, and you’ll not only be able to take more advantage of your customer data in less time, you’ll also deliver the savings in dollars and resources your executive team is looking for.
Mark Gonzales is Senior Director of Customer Technology for Elicit.
The InformationWeek community brings together IT practitioners and industry experts with IT advice, education, and opinions.