Understanding Fact from Fiction When Moving Legacy to Cloud

Here are three truths and one lie leaders should know before moving to the cloud.

Guest Commentary

January 6, 2020

5 Min Read
Image: Gorodenkoff - stock.adobe.com

While they might power mission-critical aspects of a business, traditional data centers and legacy monolithic applications are often brittle, old, complex, and tightly coupled. Few can argue with the business and technical benefits of shifting these workloads from traditional data centers to cloud-based infrastructure to remain agile and competitive in today’s landscape.

But even though the benefits of the end state destination might be obvious, the act of moving to a cloud environment can be a very tricky process. For CIOs, CTOs, and IT leaders, one of the most important steps in the process is having a firm understanding of what is true and what is fiction when it comes to this kind of transition to ensure that it is done correctly.

Here are three truths and one lie leaders should know before moving to the cloud.

Truth: Not all workloads should become cloud-native microservices

Cloud-native might be a logical design goal for newly developed cloud workloads, but in some cases there is no need to distill complete legacy functionality down to an independent set of loosely coupled microservices. Sometimes the complexities involved in architecting, managing, scaling, and monitoring highly transactional, atomic workloads make them better suited to a monolithic application. Since such workloads are also updated less frequently, they don’t need a continuous delivery model supported by a dedicated team.

The right strategy for each monolith will depend on key business needs, and may involve methods like re-hosting it “as-is” with little-to-no change, or refactoring it to Java or C# to take advantage of cloud capabilities such as increased elasticity and availability. The key here is not to rush down one path. Rather, companies should take a tailored approach, coupled with roadmaps that outline specific goals for different applications based on their individual requirements. Deciding which capabilities to decouple, and moving toward a cloud-native microservices architecture from there, will prove much more productive and effective.
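To make the idea of decoupling capabilities concrete, here is a minimal sketch of the strangler-fig style facade often used for this kind of incremental carve-out (a pattern the article implies but does not name): requests for capabilities that have already been extracted are routed to new services, while everything else still falls through to the monolith. All service and capability names here are invented for illustration.

```python
def legacy_monolith(capability, payload):
    """Stand-in for a call into the existing monolithic application."""
    return f"legacy handled {capability}: {payload}"

def billing_microservice(payload):
    """Stand-in for a newly extracted, independently deployed service."""
    return f"billing service handled: {payload}"

# The facade's routing table grows one entry at a time as capabilities are
# carved out of the monolith on the roadmap's schedule.
MIGRATED = {
    "billing": billing_microservice,
}

def facade(capability, payload):
    """Route to a new service if the capability has been migrated,
    otherwise fall through to the legacy monolith."""
    handler = MIGRATED.get(capability)
    if handler is not None:
        return handler(payload)
    return legacy_monolith(capability, payload)
```

The point of the pattern is that callers only ever see the facade, so each capability can move independently and on its own timeline.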

Truth: You can minimize risk through a combined top-down and bottom-up assessment

When embarking on a transformational journey, all lines of business and stakeholders must stay aligned throughout the entire process, and key parties should not be siloed from each other. This requires ongoing conversations about the shift from the onset, and proactively addressing the cultural and operational changes that will impact different teams. A top-down and a bottom-up analysis should then be executed in tandem, a combination that, done correctly, has been shown to reduce project scope by 40-70%.

The top-down analysis can be conducted through workshops like event storming and domain-driven design (DDD), which allow the future state to be shaped by describing the business and how events flow through the system. Legacy functionality usage has most likely evolved over the years, so incorporating UX work to build specific service use cases is also critical.
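As a rough illustration of what an event-storming output can look like once captured in code, the sketch below records past-tense business events and groups them by aggregate; clusters of closely related events often hint at candidate service boundaries. The event names, aggregates, and fields are hypothetical examples, not anything prescribed by the article.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DomainEvent:
    name: str       # a past-tense business fact, e.g. "OrderPlaced"
    aggregate: str  # the domain entity the event belongs to

# Events "discovered" in a workshop. In event storming these would start as
# sticky notes on a wall; here they are simple records.
events = [
    DomainEvent("OrderPlaced", "Order"),
    DomainEvent("PaymentAuthorized", "Payment"),
    DomainEvent("OrderShipped", "Order"),
]

# Grouping events by aggregate: aggregates with dense internal event flows
# are candidates to become a single service.
by_aggregate = {}
for e in events:
    by_aggregate.setdefault(e.aggregate, []).append(e.name)
```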

A bottom-up analysis offers a comprehensive picture of the contents of, and interrelationships between, application components. This kind of insight can significantly reduce the cost and complexity of any future effort by isolating unused components, highlighting potential roadblocks, and flagging the areas that need attention. It also exposes the legacy application’s design and the anatomy of its source code, which is key to eliminating design weaknesses so that the future-state architecture doesn’t inherit them.
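One way to picture the bottom-up analysis is as a reachability check over a component dependency graph: components that no entry point can reach are candidates for isolation and removal. A minimal sketch of that idea, with made-up component names and edges:

```python
from collections import deque

# Toy dependency graph: each component maps to the components it calls.
deps = {
    "web_ui":     ["order_svc", "auth"],
    "order_svc":  ["db_layer"],
    "auth":       ["db_layer"],
    "report_gen": ["db_layer"],   # nothing reaches this: candidate for removal
    "db_layer":   [],
}

def reachable(entry_points, graph):
    """Breadth-first traversal from the entry points; returns every
    component that is actually reachable (i.e. in use)."""
    seen = set()
    queue = deque(entry_points)
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        queue.extend(graph.get(node, []))
    return seen

used = reachable(["web_ui"], deps)
unused = set(deps) - used   # components the migration need not carry forward
```

Real legacy-analysis tooling works at a much finer grain (programs, copybooks, jobs, tables), but the principle of pruning unreachable scope is the same.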

Truth: The move to cloud is best accomplished as an incremental journey

Through the top-down and bottom-up assessments, organizations can also create strategic roadmaps that outline ways to drive ROI at each incremental step. It’s important here to consider the different levels of maturity (i.e., cloud-ready, cloud-optimized, and cloud-native) to determine the best approach. Although cloud-native is often the end goal for monolithic applications, a logical first step is to convert code into languages well supported in the cloud, such as Java or C#. By doing so, an organization can eliminate its dependency on the mainframe and target a cloud-ready containerized environment.

In a cloud-optimized environment, workloads are optimized further to provide scalability at the container level, while a few high-value capabilities are replaced with new microservices functionality. From there, an organization can continue at its own pace, moving incrementally toward a cloud-native environment, though some organizations might never actually transform the entire monolith.

Lie: There’s no reason to worry about operations and infrastructure

Around 40% of a typical legacy-to-cloud move is focused on application source code and data conversion, with 40-50% spent on testing and 10-20% on the design, implementation, and management of target operations and hardware infrastructure. Since the legacy environment has well-established operational and infrastructure standards and processes already in place, the target platform should never be an afterthought. As part of this, companies must ensure that they are operationally ready and have the skilled resources needed to support new continuous delivery processes. By building out the target environment as part of the incremental journey, teams will have more time to adjust before any microservices-driven projects start.
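As a back-of-the-envelope illustration of that effort split, the sketch below applies the percentages above (using the midpoints of the stated ranges) to a hypothetical 1,000 person-day migration; the total is invented purely for the example.

```python
total_days = 1000  # hypothetical overall migration effort, in person-days

# Shares taken from the text: ~40% conversion, 40-50% testing,
# 10-20% target operations and infrastructure (midpoints used for ranges).
effort = {
    "code_and_data_conversion": 0.40,
    "testing":                  (0.40 + 0.50) / 2,
    "target_ops_and_infra":     (0.10 + 0.20) / 2,
}

breakdown = {phase: round(total_days * share) for phase, share in effort.items()}
```

Even on this rough arithmetic, operations and infrastructure represent well over a hundred person-days of work, which is the point of the "lie": that slice is too large to treat as an afterthought.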

An astounding 80% of global corporate data today lives in or comes from mainframes running technology that is often 50 years old. While the move to the cloud has given organizations a way to shed the growing burden of managing these systems, and a new approach to remaining innovative, agile, flexible, and competitive, many have only just scratched the surface of their journeys. Since one of the biggest hurdles is developing and implementing the right approach, it’s up to IT and technology leaders to take the time to tailor their strategies and ensure that they will actually drive the right business results, now and for years to come.


As Executive Vice President of Modern Systems -- an Advanced company -- Cameron Jenkins oversees sales, marketing, and technology products and solutions on a global level. Prior to joining Modern Systems, he served as executive director and global practice lead for the Application Modernization division of Dell Services.

About the Author(s)

Guest Commentary


The InformationWeek community brings together IT practitioners and industry experts with IT advice, education, and opinions. We strive to highlight technology executives and subject matter experts and use their knowledge and experiences to help our audience of IT professionals in a meaningful way. We publish Guest Commentaries from IT practitioners, industry analysts, technology evangelists, and researchers in the field. We are focusing on four main topics: cloud computing; DevOps; data and analytics; and IT leadership and career development. We aim to offer objective, practical advice to our audience on those topics from people who have deep experience in these topics and know the ropes. Guest Commentaries must be vendor neutral. We don't publish articles that promote the writer's company or product.
