In an era of constant corporate change and innovation, IT leaders face the challenge of anticipating how even the slightest change could have an outsized negative impact.
Should IT teams take the initiative on innovation? Or is an organization better served by a conservative strategy? The two approaches are in some ways polar opposites. Throughout a long career of working with IT people at some of the world’s largest organizations, I’ve come across executives and organizations at both extremes -- eager adopters of new technology and those determined to “stay put” -- as well as at every point in between along the technology adoption curve.
So, which of these profiles makes for a better IT organization? While the innovators want to plug into the latest and greatest, I have seen some very large organizations get themselves into big trouble as they adopt new technologies. The easiest thing to do is to blame the technology, but in my experience the issue in most cases is how ready the organization was for the new technology. It's not a skills gap; it's a knowledge gap, and new tech can easily throw IT managers and organizations for a loop. I've seen it happen a hundred times.
You can’t really blame the innovators for trying. The siren song of “new and improved” is a very powerful one. Sometimes the call to upgrade or install new systems comes from necessity, such as complying with government regulations, adopting a new system after a corporate merger, or adapting to new trading partners. While there may be good reasons for implementing new systems, the process of integrating them with existing infrastructure isn’t always smooth. To pull it off successfully, you need knowledge and experience. Do IT managers have enough of both, or either?
Adopting new tech is a bit like investing: no pain, no gain. IT managers who make incremental changes in their environments can probably realize small increases in productivity or other benefits. But are those small gains worth the risk? Large IT systems are very vulnerable to the “butterfly effect.” Any small change can have far-reaching effects, many of which won’t even be apparent until they happen. In a large IT environment where changes are introduced daily, the risks accumulate quickly. When at some point those changes negatively affect operations, IT teams scramble to figure out what went wrong. Do they have the knowledge to weather the storm? If they're not sure, they may need to think twice.
In one recent case, for example, I found myself working with a Fortune 100 financial services company that was looking to complete its virtualization initiative by finally moving its most demanding applications to its VMware private cloud. But there was a problem: while the company wanted to take advantage of the high availability in its private cloud, it realized it would lose cross-site resiliency, a feature available in its legacy systems. To solve this, the team decided to standardize on an active-active clustering scheme (vMSC with active-active storage from vendors such as EMC [VPLEX], HDS [GAD], IBM, and NetApp).
On paper, it looked like this would provide what they wanted, but things did not go as smoothly as they had hoped. From the start, the system was plagued with performance issues, including instances of severe performance degradation caused by incorrect settings in the path-selection algorithms and multipathing tools. The system was also unstable; it was later found that the instability was caused by an incorrect storage Witness configuration and incorrect PDL settings in the vMSC. Still more problems cropped up when the team tried to marry metro-HA with active-active storage, and in attempts to implement long-distance replication.
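To make the failure modes more concrete, here is a rough sketch of the kind of host-level settings involved in a case like this. This is illustrative only: the device identifier is hypothetical, and the right values depend on the storage vendor's vMSC guidance and the vSphere version in use -- it is not a record of what this particular team changed.

```shell
# List storage devices and the path-selection policy (PSP) each one uses.
esxcli storage nmp device list

# For many active-active arrays, Round Robin is the recommended policy;
# leaving a device on a poorly chosen fixed path can concentrate I/O on
# one path and produce exactly the kind of degradation described above.
# (naa.xxxx is a placeholder device ID.)
esxcli storage nmp device set --device naa.xxxx --psp VMW_PSP_RR

# PDL (Permanent Device Loss) handling in a stretched cluster: hosts
# should clean up datastores on devices that are permanently lost so
# HA can restart the affected VMs at the surviving site.
esxcli system settings advanced set -o /Disk/AutoremoveOnPDL -i 1
```

The point is not the specific flags but how many such knobs interact: path selection, PDL response, Witness placement, and HA policy all have to agree with the storage vendor's design, and a single mismatched setting can surface as an apparently unrelated stability or performance problem.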
This was the situation when my team got involved. The biggest issue we saw was that the IT team was “lost in the weeds” and did not have a system-wide view that would allow them to adequately consider interdependencies and how they would impact overall stability of their environment.
Should the team have known what it was getting itself into when it began implementing the changes? Could it have? The reality is that you can't blame IT teams for being unable to figure out the intricacies of interconnections in modern IT systems. It's impossible to know everything, which is why companies call in groups like ours who have experience in dealing with such problems.
So back to the starting question.
My experience shows that trying to stop the wheels of progress is a futile exercise. Sooner or later, the organization will have to step up to new demands and incorporate new technologies. Even if they choose to hold onto their existing technologies as long as possible, there is a constant stream of patches and updates that introduce new risks.
IT teams need to realize that change is inevitable and they need to be better prepared to manage it. One way to do that is to develop and implement a process that can give an early warning on what impact the introduction of new applications, systems, and technologies will have on a computing environment. Implementing that process -- which could entail a training course to learn the intricacies of a new technology, or bringing in an outside team to review an IT environment -- could save everyone involved a lot of heartache, frustration, time, and of course money.