The 3 Things You Need to Modernize Ahead of AI Investment
Before jumping into an AI strategy, organizations need to ensure that their architecture can support AI applications, and that means modernizing that underlying architecture.
Repatriation. It’s a word that, just a few years ago, gave rise to amusement and dismissive smiles. Today, it’s a word that makes headlines and is the subject of a growing number of articles explaining why companies are, in fact, repatriating workloads.
I’m not going to speculate as to why organizations decide to repatriate. But I will point out that only those who have invested in modernizing their applications and on-premises operations are really doing so.
I’ve pointed out the link between SRE practices and cloud repatriation before. Our tenth annual State of Application Strategy research will again demonstrate that the link remains strong. Organizations that have already repatriated workloads, or plan to, have two things in common: a mostly modern application portfolio and established SRE practices for operations. In other words, they’ve modernized apps and ops and therefore can repatriate.
What’s really interesting about repatriation is that organizations are not dumping the cloud. Far from it. They are maintaining a healthy presence in the cloud; they are simply being more deliberate about which applications live in their increasingly multi-cloud estate. Either way, the data is clear: Organizations are settling on hybrid as the norm.
Do you know what else is linked to SRE operations? Yes, operating hybrid applications.
The link is startlingly obvious. Of those running hybrid applications -- that is, applications whose components are distributed across multiple cloud properties -- 41% have already adopted SRE practices. A mere 7% of those operating hybrid applications have not. The rest are planning to, and soon.
I told you all that so I can note that AI applications -- applications that incorporate AI, such as chatbots and the very popular AI assistants -- are almost certainly going to be hybrid as well. That means part of an AI app is on premises and another part is in the cloud. Or two parts are in the cloud and one part on premises. The full impact of AI on applications is just emerging, but one architectural pattern is clear: there will be multiple components combined with LLMs and models to achieve our goals.
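To make that pattern concrete, here’s a minimal sketch of one plausible hybrid AI app: a retrieval component running on premises supplies private context to an LLM served from a public cloud. It’s an illustration only; every endpoint, header, and field name below is a hypothetical placeholder, not a reference to any particular product.

```python
# Minimal sketch of a hybrid AI application: an on-premises retrieval service
# feeds private context to an LLM hosted in a public cloud.
# All URLs, fields, and credentials are hypothetical placeholders.
import os

import requests

ON_PREM_RETRIEVAL = "https://search.internal.example.com/query"        # on-premises component
CLOUD_LLM_ENDPOINT = "https://llm.cloud-provider.example.com/generate"  # cloud-hosted model


def answer(question: str) -> str:
    # 1. Call the on-premises service that holds data too sensitive to move.
    docs = requests.post(
        ON_PREM_RETRIEVAL, json={"q": question, "top_k": 3}, timeout=5
    ).json()["documents"]

    # 2. Call the model served from the public cloud with that context.
    prompt = f"Answer using only this context:\n{docs}\n\nQuestion: {question}"
    resp = requests.post(
        CLOUD_LLM_ENDPOINT,
        headers={"Authorization": f"Bearer {os.environ['LLM_API_KEY']}"},
        json={"prompt": prompt, "max_tokens": 256},
        timeout=30,
    )
    return resp.json()["text"]


if __name__ == "__main__":
    print(answer("Which customers renewed last quarter?"))
```

Even in this toy example, the app only works if both environments are reachable, observable, and secured, which is exactly why hybrid AI apps drag operations and security along with them.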
The connection between AI and hybrid IT is just as clear in our research. In fact, there's a greater preference for public cloud deployment of AI engines than there is for applications that will use them. The overall picture is clear: AI apps mean more hybrid IT.
Our research shows that 35% of AI engines are deployed both in the public cloud and on premises. For AI apps, that number jumps to 44%.
This is important because, as we watch folks rush to adopt AI right now, we should recall that the cloud, too, promised greater efficiencies and higher productivity. But if organizations hop on the AI bandwagon with the same lack of attention to modernization as they did with the public cloud, they may find themselves locked into services or solutions that, in hindsight, prove to be a mistake.
Modernization is a critical part of every organization’s transformation journey to become a digital business that can harness the power of AI and actually realize its benefits. To fail to modernize aging, often obsolete architectures is to ensure the road ahead is a bumpy one.
We’ve seen multiple instances over the past year of companies faltering under the challenge of modernizing their organization’s infrastructure to handle digital business. Without the proper systems in place, many organizations are likely to find their AI investment is just another high-cost, low-success venture.
That means paying attention to three of the key technical capabilities needed to successfully navigate the digital transformation journey:
Distributedness. Companies need to adopt multi-cloud strategies before they can properly move into the era of AI. Multi-cloud solutions give organizations the foundation for the hybrid applications that AI is certain to produce.
SRE operations. Site reliability engineering is a modern approach to operations that leverages visibility and automation to move from outdated SLAs to service level objectives (SLOs) that align with business outcomes rather than IT measures that can leave operations overwhelmed (see the sketch after this list).
Security. This seems obvious, but AI is going to produce an explosion of APIs, which means API-focused security is going to be a top priority for organizations that want to keep their apps and AI safe from predatory attacks. That’s true for both north-south and east-west traffic, the latter of which increasingly flows not just within a cluster but across environments, thanks to hybrid architectures.
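To illustrate the SRE point above, here’s a toy sketch (with made-up numbers) of the shift from a pass/fail SLA to an SLO with an error budget, the construct SRE teams use to tie reliability to business outcomes.

```python
# Illustrative only: an SLO tracked as an error budget rather than a
# pass/fail SLA. The objective, window, and failure counts are made up.
from dataclasses import dataclass


@dataclass
class SLO:
    objective: float       # e.g. 0.999 -> 99.9% of requests should succeed
    window_requests: int   # total requests in the evaluation window
    failed_requests: int   # failed requests observed in that window

    @property
    def error_budget(self) -> int:
        """Number of failures the objective tolerates over the window."""
        return int(self.window_requests * (1 - self.objective))

    @property
    def budget_remaining(self) -> float:
        """Fraction of the error budget still unspent (negative = SLO breached)."""
        return 1 - self.failed_requests / max(self.error_budget, 1)


checkout_slo = SLO(objective=0.999, window_requests=10_000_000, failed_requests=4_200)
print(f"Error budget: {checkout_slo.error_budget} failed requests")   # 10000
print(f"Budget remaining: {checkout_slo.budget_remaining:.0%}")       # 58%
```

The point isn’t the arithmetic; it’s that a shrinking budget gives operations an objective, business-aligned signal for when to slow feature work and spend effort on reliability instead.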
The AI era is upon us, and it isn’t going away. Those who take the time to modernize before moving ahead aren’t falling behind; they’re setting themselves up for success. Investing in modernization is investing in AI, and that investment will pay off big in the future.