Re: IT priorities
The answer to your question is not simple, but here goes:
1) Being open and upgradable is only true briefly. Open source projects get abandoned as they become stale or just 'no longer cool'. We see open source as upgradeable because we are looking over a planning horizon of years, not decades.
2) Open source is not always more maintainable; do we go for a clean, well-constructed commercial stack or a loosely associated constellation of applications which happen to work together? We need to be very careful over which open source we go for. In a highly regulated environment, 'any old Linux' is not going to cut it, for example. A bank, or even a large retail chain, will not have the resources in-house to manage an entire Linux stack and so has little choice but to buy in Red Hat or one of the similar third-party offerings. Otherwise, the user of the stack cannot demonstrate due diligence over that stack to the regulator.
3) Now for the kicker, the mainframe. The problem here is that the open source world, the closed source world, or any other has not yet provided a distributed computing alternative to large mainframe installations. The transactional consistency model available with System Z cannot be replicated using commodity hardware and modern networking technology in any meaningful way. The issue is throughput vs latency. You can code up an x86_64 server system to manage huge throughput with an eventually consistent model (think Google) or with an instantly consistent, low-latency model (think high-frequency trading), but both at once is simply impossible due to all sorts of issues such as the von Neumann bottleneck (i.e. main memory speeds), processor design, network latency, power constraints and hardware reliability.
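To make that throughput-vs-latency tension concrete, here is a minimal sketch (all names and numbers are illustrative assumptions, not taken from any real system): a strongly consistent write cannot acknowledge until every replica has confirmed, so its latency is governed by the slowest replica; an eventually consistent write acknowledges after the local commit and replicates in the background.

```python
import random

REPLICAS = 5

def replica_delays():
    """Simulated network round trip to each replica, in milliseconds."""
    return [random.uniform(0.5, 10.0) for _ in range(REPLICAS)]

def strong_write_latency(delays):
    # Strong consistency: the write is not acknowledged until every
    # replica confirms, so latency is the *slowest* round trip.
    return max(delays)

def eventual_write_latency(delays, local_commit_ms=0.1):
    # Eventual consistency: acknowledge after the local commit alone;
    # replication to the other nodes drains asynchronously afterwards.
    return local_commit_ms

if __name__ == "__main__":
    delays = replica_delays()
    print(f"strongly consistent write:   {strong_write_latency(delays):6.2f} ms")
    print(f"eventually consistent write: {eventual_write_latency(delays):6.2f} ms")
```

The point of the sketch is only that you choose one side of the trade-off per write path; sustaining mainframe-class throughput while every write pays the synchronous worst-case round trip is exactly what commodity clusters struggle to do.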
So, if you have a 'classic' System Z based infrastructure, the transactional consistency it provides is likely to be baked into the way the business works. It will be baked into the business's relationship with the regulators. It cannot be 'modernised' because there is nothing to modernise it to. It could be replaced, but that is not just replacing the system; it is replacing an entire way of functioning for the business, which can amount to a complete pivot of the company's business model. Will the new business be competitive? Will it be hit by huge regulatory fines? Will the disruption of the pivot cause irreparable damage to the business?
I do see a world, 15 or even 25 years hence, where silicon-to-silicon fibre and FPGAs baked into network cards can start to allow x86_64 (or similar) based hardware to lift and shift away from System Z; but that is firmly in the future and still in doubt. Until such time, maintenance of classic systems will continue to be necessary, and achieving small, incremental shifts away from those systems is all that even the most ambitious CIO should be expecting. I look back 15 years and see that distributed computing has scaled out enormously but has done little to tackle the vertical scaling problem of transactional consistency; hence my prediction that we need at least another 15 years for that to happen.
Maybe, just maybe, it will never happen. We might still be using classic technology in 50 years' time, just as we are still driving around with Otto-cycle engines, which were invented over a century ago and have never been replaced by more modern systems.