Dealing with Multi-Cloud Data Complexity
IT leaders need to understand that a problem exists and then think through approaches. Once they do that, the technology to employ is easier to figure out. Here's some guidance.
The current patterns of cloud migration include simple “lift and shift,” which moves applications and data with as little change as possible, and refactoring, which redoes the applications and data so they work more efficiently on a cloud-based platform. More and more migrations involve multi-cloud, which brings new data complexity issues with it. When leveraging multi-cloud architectures, IT leaders and cloud professionals need to rethink how they deal with data complexity.
The reasons for the rising data complexity issues are fairly well known and include the following:
The rising use of unstructured data that has no native schema; a schema is typically defined at access time.
The rising use of streaming data that many businesses employ to gather information as it happens and then process it in flight.
The rise of IoT devices that spin off massive amounts of data.
The changing nature of transactional databases, moving to NoSQL and other non-relational models.
The continued practice of binding single-purpose databases to applications.
Finally, and most importantly, the rise of as-a-service, cloud-based, and cloud-only databases, such as those now offered by all major cloud providers, which are emerging as the preferred databases for applications built both inside and outside of the public clouds. Moreover, heterogeneous, distributed databases are increasingly the preferred choice within multi-cloud architectures.
Challenge of multi-cloud
For the most part, those who build today’s data systems just try to keep up rather than get ahead of data complexity issues. The migration of data to net-new systems in multi-clouds is more about tossing money and database technology at the problem than solving it. Missing is core thinking about how data complexity should be managed, along with data governance and data security. Also missing are the new approaches and enabling technologies within multi-cloud deployments that would remove the core drawbacks of data complexity.
The core fix is to move toward application architectures that decouple the database from the applications, or even toward collections of services, so you can deal with the data at another layer of abstraction. The use of abstraction is not new, but the required capabilities have become available only in the last few years. These capabilities include master data management (MDM), data service enablement, and the ability to deal with the physical databases through a configuration mechanism that confines volatility and complexity to a single domain.
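To make that decoupling more concrete, here is a minimal sketch in Python. The CustomerRepository interface and the two backends are illustrative assumptions, not any vendor's API; the point is that the application codes against an abstraction while the volatile choice of physical database is isolated in one place.

from abc import ABC, abstractmethod

class CustomerRepository(ABC):
    """Hypothetical data-access interface the application codes against."""

    @abstractmethod
    def get(self, customer_id: str) -> dict:
        ...

    @abstractmethod
    def save(self, customer: dict) -> None:
        ...

class PostgresCustomerRepository(CustomerRepository):
    """One possible backend: a relational database on one cloud provider."""

    def __init__(self, dsn: str):
        self.dsn = dsn  # connection details come from configuration, not application code

    def get(self, customer_id: str) -> dict:
        # A real implementation would SELECT from the physical database here.
        return {"id": customer_id, "source": "postgres"}

    def save(self, customer: dict) -> None:
        # A real implementation would INSERT/UPDATE the physical database here.
        pass

class DocumentCustomerRepository(CustomerRepository):
    """Another possible backend: a document store on a different cloud provider."""

    def __init__(self, endpoint: str):
        self.endpoint = endpoint

    def get(self, customer_id: str) -> dict:
        # A real implementation would fetch the document by key here.
        return {"id": customer_id, "source": "document-store"}

    def save(self, customer: dict) -> None:
        # A real implementation would write the document here.
        pass

def build_repository(config: dict) -> CustomerRepository:
    """The volatile choice of physical database lives in this one factory."""
    if config.get("backend") == "postgres":
        return PostgresCustomerRepository(config["dsn"])
    return DocumentCustomerRepository(config["endpoint"])

The application asks build_repository for a repository and never hard-codes which database, engine, or cloud it is talking to; swapping the backend becomes a configuration change rather than an application rewrite.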
Virtual databases are a feature of database middleware services that technology suppliers provide. They place a configurable structure and management layer over existing physical databases, where such a layer is required. This means you can alter the way the databases are accessed: you can create common access mechanisms that are changeable within the middleware and do not require risky and expensive changes to the underlying physical databases.
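As a simple sketch of the idea (the catalog structure and the resolve_connection helper below are purely illustrative, not a specific middleware product's interface), a virtual database layer can hold a logical-to-physical mapping so that access is repointed in configuration rather than by changing the underlying databases:

VIRTUAL_CATALOG = {
    # Hypothetical logical names mapped to physical targets across clouds.
    "orders": {
        "provider": "cloud-a",
        "engine": "postgres",
        "dsn": "postgresql://orders.internal:5432/orders",
    },
    "telemetry": {
        "provider": "cloud-b",
        "engine": "wide-column",
        "endpoint": "telemetry.example.internal:9000",
    },
}

def resolve_connection(logical_name: str) -> dict:
    """Applications request a logical name; the physical target can be
    repointed here without touching the underlying databases."""
    return VIRTUAL_CATALOG[logical_name]

# Usage: the application never hard-codes which cloud or engine it talks to.
orders_target = resolve_connection("orders")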
Moving up the stack, we have data orchestration and data management. These layers give enterprise data management the ability to deliver services such as MDM, recovery, access management, and performance management as core services that sit on top of the physical or virtual databases, whether in the cloud or on premises.
Moving up to the next layer, we have the externalization and management of core data services or microservices. These are managed, governed, and secured under common governance and security layers that can track, provision, control, and provide access to any number of requesting applications or users.
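Here is a tiny sketch of what such a governed data service might look like (the policy table, roles, and masking rules below are assumptions for illustration, not a specific governance product): access is checked and sensitive fields are masked centrally, in the service layer, before any requesting application or user sees the data.

POLICY = {
    # Hypothetical governance policy attached to a data service.
    "customer_profile": {
        "allowed_roles": {"support", "billing"},
        "masked_fields": {"ssn", "date_of_birth"},
    },
}

def get_customer_profile(requesting_role: str, record: dict) -> dict:
    """Serve a customer record under the common governance and security layer."""
    rules = POLICY["customer_profile"]
    if requesting_role not in rules["allowed_roles"]:
        raise PermissionError("Access denied by governance policy")
    # Auditing, provisioning, and tracking would also hook in here.
    return {
        key: ("***" if key in rules["masked_fields"] else value)
        for key, value in record.items()
    }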
Act now
Most enterprises are ignoring the rapid growth of data and of data complexity. Many hope that something magical, such as standards, will come along and solve the problem for them. The rapid rise in the use of multi-cloud means that your data complexity issues will be multiplied by the number of public cloud providers that end up being part of your multi-cloud. So we’ll see complexity evolve from a core concern into a major hindrance to making multi-cloud deployment work effectively for the business.
What’s needed now is to understand that a problem exists, and then think through potential solutions and approaches. Once you do that, the technology to employ is rather easy to figure out.
Don’t make the mistake of tossing tools at the problem. Tools alone won’t be able to deal with the core issues of complexity. Considering the discussion above, you can approach the problem in two steps. First, define a logical data access layer that can leverage any type of back-end database or storage system. Second, define metadata management that systemically incorporates both security and governance.
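As a back-of-the-napkin sketch of that second step (the fields and catalog below are illustrative assumptions, meant only to show the concept), metadata management means every dataset is registered with its ownership, classification, physical location, and governance attributes, so security and governance can be applied systemically rather than database by database:

from dataclasses import dataclass, field

@dataclass
class DatasetMetadata:
    """Illustrative metadata record for a dataset registered in the catalog."""
    name: str
    owner: str
    classification: str      # e.g., "public", "internal", "restricted"
    physical_location: str   # which cloud and database actually hold the data
    retention_days: int
    tags: set = field(default_factory=set)

catalog = {
    "customer_profile": DatasetMetadata(
        name="customer_profile",
        owner="crm-team",
        classification="restricted",
        physical_location="cloud-b/document-store/customers",
        retention_days=2555,
        tags={"pii", "gdpr"},
    ),
}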
The solution occurs at the conceptual level, not with the introduction of another complex array of technology on top of already complex arrays of technology. It’s time to realize that we’re already in a hole. Stop digging.
About the Author
David Linthicum is Chief Cloud Strategy Officer at Deloitte Consulting LLP. He is responsible for building innovative technologies that help clients operate more efficiently while delivering strategies that enable them to disrupt their markets. Linthicum is widely respected as a visionary in cloud computing; he was recently named the No. 1 cloud influencer in a report by Apollo Research and is the author of more than 13 books and 5,000 articles. For more than 20 years, he has inspired corporations and start-ups to innovate and use resources more productively.