CIOs and their teams routinely maintain costly, long-running legacy systems that have gathered a large amount of data about purchases, services, and customer behavior over the years. Sure, these systems are getting a bit creaky in their old age, but their most important problem has existed since they came online: each one was built in a vacuum and effectively siloed away from the others.
IT teams are feeling the strain of maintaining legacy systems at a time when customers are demanding a more tailored, robust, and streamlined experience from their organizations. Today, most customers want insights and capabilities they can access quickly and easily, whenever and wherever they need them, with real-time updates and information delivered through multiple channels (for example, web portals, mobile applications, email, SMS). Because many older systems were one-off development projects that didn't consider future dependencies or integrations, these legacy infrastructures simply aren't prepared to meet the demands of the 24/7 digital economy without a significant investment of time and money.
As a result, most organizations with legacy systems are missing out on revenue that their current offerings can't capture or, even worse, are watching it flow to their competitors. New experiences that deliver valuable insights and capabilities are the best way for an organization to honor customer expectations and remain relevant in most market spaces. But here's the problem: The data and logic these new applications need are scattered across legacy platforms, making it very difficult to access the full spectrum of information or the full range of capabilities required to invent something new and valuable.
Fortunately, IT teams can salvage the value of legacy systems while pivoting to a new foundation. A scalable core back-end architecture supports quick and cost-effective new development without creating unnecessary technical debt. Enabling those sought-after next-generation experiences takes a lean, thoughtful, and flexible service-oriented architecture that brokers access to both new and legacy information and services. Building this requires extra planning, talent, and effort that many IT teams aren't ready to commit to, but a few key steps can make the process easier and ultimately more successful.
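To make the brokering idea concrete, here is a minimal sketch in Python of a facade that routes requests to either a legacy-system adapter or a new service, so callers never depend on legacy interfaces directly. All class names, field names, and the cutover rule are hypothetical illustrations, not a prescribed design.

```python
# A hypothetical facade that brokers access to legacy and new services.

class LegacyOrderAdapter:
    """Wraps a legacy system's data format behind a modern interface."""
    def get_order(self, order_id):
        # In practice this might call SOAP or query a legacy database;
        # here we fake a record in the legacy layout.
        raw = {"ORD_ID": order_id, "CUST_NM": "ACME CORP", "STAT_CD": "S"}
        return {  # translate to the new canonical shape
            "id": raw["ORD_ID"],
            "customer": raw["CUST_NM"].title(),
            "status": {"S": "shipped", "P": "pending"}[raw["STAT_CD"]],
        }

class ModernOrderService:
    """A new service that already speaks the canonical shape."""
    def get_order(self, order_id):
        return {"id": order_id, "customer": "Acme Corp", "status": "pending"}

class OrderFacade:
    """Brokers access: old orders live in legacy, new ones in the new service."""
    def __init__(self, cutover_id=1000):
        self.cutover_id = cutover_id
        self.legacy = LegacyOrderAdapter()
        self.modern = ModernOrderService()

    def get_order(self, order_id):
        source = self.legacy if order_id < self.cutover_id else self.modern
        return source.get_order(order_id)

facade = OrderFacade()
old_order = facade.get_order(42)     # served by the legacy adapter
new_order = facade.get_order(2000)   # served by the new service
```

The point of the facade is that new applications see one consistent contract, while the legacy system behind it can be updated or replaced later without breaking callers.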
Devise a strategy that allows legacy systems to participate in the new world. While it’s easy to imagine a new greenfield approach that will free your teams from legacy woes, it’s not realistic. Any solution needs to include an integration strategy for legacy systems that keeps them in the mix in the short term while eventually updating or replacing them when the time is right. Try to boil the ocean and you’ll get burned.
Ask the hard questions up front. IT teams must examine their current infrastructure and data assets to identify what needs to change to meet their objectives, then define the right technical architecture to meet the needs of the business. While it is tempting to focus on the immediate projects at hand, IT teams must consider the long-term initiatives that will follow in the months and years ahead. Systems, hardware, data, and personnel resources all need to be factored into the plan, and each deserves its own set of tough questions.
Once you have a better grasp of where you are, examine where you need to be from an overall enterprise architecture standpoint. Consider a services-based approach that will drive business outcomes over several years. Here are some of the most important elements to consider:
Microservices. A microservices architecture provides focused and independently deployable application components that fulfill three key objectives: development agility; deployment flexibility; and precise scalability. The highly granular, purpose-built nature of a microservice also facilitates a progressive migration strategy by adding to or replacing legacy components in smaller, more manageable pieces.
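The decoupling that makes microservices independently deployable can be sketched in a few lines: two focused components that communicate through a queue rather than calling each other directly, so each can be deployed, scaled, or replaced on its own. The service names and message fields below are hypothetical; in production the queue would be a network message broker, not an in-process object.

```python
# Two hypothetical, independently deployable components decoupled by a queue.
import queue

orders = queue.Queue()

def order_service(q):
    """Focused component: accepts an order and publishes it."""
    q.put({"id": 1, "sku": "WIDGET", "qty": 3})

def billing_service(q, unit_price=9.99):
    """Focused component: consumes an order and produces an invoice."""
    order = q.get()
    return {"order_id": order["id"], "amount": order["qty"] * unit_price}

order_service(orders)
invoice = billing_service(orders)
```

Because neither component imports the other, either side can be rewritten (or a legacy component swapped out behind the same message contract) without touching its counterpart.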
Big data strategy. “Big data” is so routine now that it’s better to just call this your “Data strategy.” New technology initiatives will usher in new data dependencies that you may not be accustomed to handling in your organization. As that data is processed and stored, you’ll need a consistent and rapid way to interact with it that will scale as your organization grows and the demands on your data architecture continue to evolve.
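One way to get that consistent, scalable interaction is a storage-agnostic access layer: application code talks to a stable contract, so the backing store can change (in-memory today, a managed database or warehouse tomorrow) without rewriting callers. This is a minimal sketch with hypothetical class and field names, not a prescribed data architecture.

```python
# A hypothetical data-access contract with one swappable implementation.

class CustomerStore:
    """The stable contract the rest of the system depends on."""
    def save(self, customer):
        raise NotImplementedError
    def find_by_region(self, region):
        raise NotImplementedError

class InMemoryCustomerStore(CustomerStore):
    """Trivial backend; a database-backed class would implement the same contract."""
    def __init__(self):
        self._rows = []
    def save(self, customer):
        self._rows.append(customer)
    def find_by_region(self, region):
        return [c for c in self._rows if c["region"] == region]

store = InMemoryCustomerStore()
store.save({"name": "Acme", "region": "EMEA"})
store.save({"name": "Globex", "region": "APAC"})
emea_names = [c["name"] for c in store.find_by_region("EMEA")]
```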
Security. Factor security into your plans from the start. You'll want a straightforward security model that spans your infrastructure, covering the data, service, and front-end tiers. Ensure that you can fully customize roles and permissions to account for the various types of users who will work with your applications now and in the future.
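A role/permission model that spans tiers can be as simple as mapping roles to namespaced permissions for each tier. The role names and permission strings below are hypothetical, and a real system would load policies from a store rather than hard-coding them.

```python
# A hypothetical role-based permission model covering data, service,
# and front-end tiers via namespaced permission strings.

ROLES = {
    "analyst":  {"data:read"},
    "engineer": {"data:read", "service:invoke", "service:deploy"},
    "admin":    {"data:read", "data:write", "service:invoke",
                 "service:deploy", "frontend:configure"},
}

def can(role, permission):
    """Check whether a role grants a permission (e.g. 'data:write')."""
    return permission in ROLES.get(role, set())

def define_role(name, permissions):
    """Custom roles: add or replace a role for future user types."""
    ROLES[name] = set(permissions)

# New user types can be accommodated without code changes elsewhere.
define_role("auditor", ["data:read", "frontend:configure"])
```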
Cloud support. This is also a good time to consider infrastructure flexibility. If you don't have a cloud strategy, take a hard look at why not. Cloud providers such as Microsoft Azure or Amazon Web Services can offer instant geographical distribution, high-availability configurations, elastic scaling, and several other useful services beyond the basics, often at a lower cost than running equivalent infrastructure in-house.
Production ready. Design your infrastructure with a focus on centralized monitoring, auditing, and continuous release. Your fancy new application ecosystem is worthless if you can't manage it. You need to diagnose issues easily as they arise, and ideally have enough monitoring in place to recognize problems before they become outages. Your system should tie notifications to workflows to minimize troubleshooting downtime and ensure that the right people receive the right information at the right time.
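Tying notifications to workflows can be sketched as a routing table: each alert severity maps to an ordered list of actions, so the right people are engaged automatically. The severity levels and action names here are hypothetical placeholders for whatever your monitoring and incident tooling provides.

```python
# A hypothetical alert router: severity determines the notification workflow.

SEVERITY_WORKFLOWS = {
    "critical": ["page-on-call", "open-incident", "notify-team-channel"],
    "warning":  ["notify-team-channel", "create-ticket"],
    "info":     ["log-only"],
}

def route_alert(alert):
    """Return the ordered list of actions to run for this alert."""
    return SEVERITY_WORKFLOWS.get(alert.get("severity"), ["log-only"])

actions = route_alert({"service": "billing", "severity": "critical"})
```

The value of making the routing explicit is that on-call escalation becomes a reviewable configuration rather than tribal knowledge scattered across teams.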
DevOps. You need to deliver new functionality and fixes early and often in a reliable way, and for that you will need DevOps. Quality analysis, automated tests, packaging, and deployment should run as a well-oiled machine with a high degree of automation.
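The well-oiled machine amounts to an ordered pipeline in which any failing stage halts the release before it reaches production. This sketch uses hypothetical stage names; a real pipeline would invoke your CI/CD tooling at each step rather than in-process functions.

```python
# A hypothetical release pipeline: run stages in order, stop at first failure.

def run_pipeline(stages):
    """Return (completed stage names, name of the stage that halted, or None)."""
    completed = []
    for name, step in stages:
        if not step():
            return completed, name  # halt the release here
        completed.append(name)
    return completed, None  # full release succeeded

stages = [
    ("static-analysis", lambda: True),
    ("unit-tests",      lambda: True),
    ("package",         lambda: True),
    ("deploy-staging",  lambda: False),  # simulate a failed deployment
    ("deploy-prod",     lambda: True),
]

completed, halted_at = run_pipeline(stages)
```

A failed staging deployment stops the run before production, which is exactly the reliability property that lets teams release early and often with confidence.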
Lastly, be certain that your strategy achieves short- and long-term business objectives from the start, rather than tackling them piecemeal. You'll save both time and money by avoiding costly adjustments along the way, and you'll preserve the sanity of your IT teams and development partners in the process. When you can show immediate ROI on your first initiative, you'll develop a tailwind of support from the business that will let you realize the long-term roadmap.
Justin Bingham oversees the Technology and Creative divisions at Janeiro Digital, where he pioneered the RADD methodology. He is also responsible for vision and strategy around Janeiro’s own technology initiatives, including the XFORM platform.