Making the Transition to Artificial Intelligence
AI will challenge IT infrastructure, governance, security, and the IT organization itself. As companies transition to AI, what should CIOs be doing to prepare their organizations?
The Health Management Academy has defined five levels of AI maturity in healthcare:
Level 1 - AI awareness
Level 2 - AI experimentation
Level 3 - AI in production
Level 4 - AI is pervasively and systemically used
Level 5 - AI is transformative and has become part of the organization's DNA
When academy members asked medical and technology practitioners where they felt their organizations were on the AI maturity curve, most selected Level 2 - AI experimentation.
I’ve found similar results in other industry sectors.
It’s also clear that it will be up to CIOs to shoulder major responsibilities for AI deployment, whether the AI comes into the company through a dedicated data science department, or through a user function. Why? Because only IT has a comprehensive, enterprise-wide knowledge of data, applications and infrastructure, and how AI could affect them.
Since most organizations are still in the AI tire-kicking stage and don't really know what AI's ultimate outcomes will be, it falls on CIOs to think through the challenges and opportunities of deployment as AI becomes an organizational reality.
The Best Way to Transition a Company to AI
Here are six things to include in your plans:
1. Evaluate your existing data and IT architectures.
AI functions best when the data it operates on resides in a single repository, is of high quality, and can interoperate with other data types. Achieving and maintaining high-quality, interoperable data takes time. The process begins with ETL (extract-transform-load) tools that can pull data from a wide range of on- and off-premises sources, clean it, and standardize it so it can interoperate with the other data in the single repository that AI will use.
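To make that extract-clean-load flow concrete, here is a minimal sketch in Python with pandas. The file names, columns, and SQLite target are hypothetical stand-ins for whatever sources and repository an organization actually uses, not a prescribed toolset.

```python
# Minimal ETL sketch (hypothetical file names and column mappings) illustrating
# the extract-clean-standardize-load sequence described above.
import pandas as pd
import sqlite3

# Extract: pull data from two hypothetical on-premises exports.
claims = pd.read_csv("claims_export.csv")
crm = pd.read_csv("crm_export.csv")

# Transform: clean and standardize so the two sources can interoperate.
claims["patient_id"] = claims["patient_id"].astype(str).str.strip()
crm["patient_id"] = crm["customer_ref"].astype(str).str.strip()
claims["visit_date"] = pd.to_datetime(claims["visit_date"], errors="coerce")
claims = claims.dropna(subset=["patient_id", "visit_date"]).drop_duplicates()

# Load: write the standardized tables into a single repository
# (SQLite here as a stand-in for the enterprise data store).
with sqlite3.connect("ai_repository.db") as conn:
    claims.to_sql("claims", conn, if_exists="replace", index=False)
    crm[["patient_id", "segment"]].to_sql("customers", conn, if_exists="replace", index=False)
```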
This process impacts IT infrastructure because it involves a number of systems that may not be well integrated with each other. Systems outside of the enterprise that are owned and operated by third parties must be vetted for data compatibility, and also for conformance with security and governance standards.
On the network side, bandwidth may need to be increased, and network traffic and deployment patterns may need to be reworked. In storage, more capacity will certainly be needed, and data backup and recovery schemes will need to be established for AI.
In processing, adjustments must be made for parallel stream processing, which is a departure from the linear processing used for everyday IT transactions.
This is a lot of information to unpack. It will affect how IT operates, and it will demand new data management, processing and strategic skills that IT may not have.
2. Evaluate your skill levels.
Most IT staff members will require up-skilling for AI.
Staffers may need to learn new programming languages, and technical IT support in the data center will have to master a parallel processing environment. The networking staff must provision additional bandwidth and faster throughput for AI, and most likely will develop a dedicated network.
Applications and business analyst groups will need to learn the mechanics of AI application building. This begins with defining algorithms and building machine learning models, and progresses to iterative QA testing until the AI's outcomes agree with what subject matter experts would conclude at least 95% of the time, as sketched below. On the user side, subject matter experts need to be recruited to assist in developing algorithms.
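As a rough illustration of that QA loop, the sketch below trains a model on hypothetical expert-labeled cases and checks whether its outputs agree with the experts at least 95% of the time. The data set, features, and model choice are assumptions for illustration only.

```python
# Sketch of the QA loop: compare model outputs with subject-matter-expert labels
# and keep iterating until agreement reaches the 95% target.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

data = pd.read_csv("labeled_cases.csv")        # hypothetical SME-labeled cases
X = data.drop(columns=["expert_label"])
y = data["expert_label"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

agreement = accuracy_score(y_test, model.predict(X_test))
print(f"Agreement with subject matter experts: {agreement:.1%}")
if agreement < 0.95:
    print("Below the 95% target -- revisit features, labels, or model choice and retest.")
```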
3. Set compliance and governance guidelines.
Companies, regulators and governments are just beginning to get their arms around AI compliance and governance. So, it is falling to companies to define their own guidelines.
As AI matures, incidents and use cases will dictate regulation, and those regulations will be written. In the meantime, the goal for IT is to avoid becoming one of those incidents or use cases.
4. Evaluate user acceptance.
Employee resistance is a major cause of project failure, and employees will resist if they believe AI will take their jobs away. The solution is to develop a human roadmap so that individuals know upfront how their roles and responsibilities are likely to evolve. If jobs may be eliminated, it's best to let employees know this upfront and to assist them in finding other work.
5. Evaluate risk.
In state governments, the No. 1 priority for CIOs is cybersecurity and risk management, and they are not alone.
AI is a major security risk because IT security solutions are designed for standard, transactional IT, not big data.
One growing AI security threat is "data poisoning," in which data trawled for deep-learning training is seeded with intentionally malicious information. The compromised data leads the AI to results that are purposefully false and misleading.
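The article doesn't prescribe a defense, but one basic guard, sketched below with made-up source names and file paths, is to screen training records by provenance and exclude anything from an unapproved source before training begins.

```python
# Hypothetical illustration of screening training records by data source
# before training, as one simple guard against poisoned data.
import pandas as pd

APPROVED_SOURCES = {"internal_ehr", "vetted_partner_feed"}   # hypothetical allow-list

records = pd.read_csv("training_records.csv")   # hypothetical; includes a "source" column
trusted = records[records["source"].isin(APPROVED_SOURCES)]
rejected = len(records) - len(trusted)

if rejected:
    print(f"Excluded {rejected} records from unapproved sources before training.")
trusted.to_csv("training_records_vetted.csv", index=False)
```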
A second risk is the degradation of AI results over time. This happens when business and other conditions change but the algorithms querying the data, or the data itself, don't keep pace. IT and end users must develop maintenance strategies that continuously monitor AI systems for accuracy and flag declines so the necessary adjustments can be made to restore it.
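One way to operationalize that monitoring, sketched below with hypothetical thresholds and data, is to score the model periodically on a window of recently labeled cases and raise a flag when agreement drops below the target.

```python
# Minimal drift-monitoring sketch: score the model on a rolling window of
# recently labeled cases and flag declining accuracy. Thresholds are hypothetical.
from sklearn.metrics import accuracy_score

ACCURACY_FLOOR = 0.95   # minimum acceptable agreement with expert labels
WINDOW = 500            # number of recent labeled cases to evaluate

def check_for_degradation(model, recent_features, recent_labels):
    """Return True if accuracy on the latest window has fallen below the floor."""
    preds = model.predict(recent_features[-WINDOW:])
    accuracy = accuracy_score(recent_labels[-WINDOW:], preds)
    if accuracy < ACCURACY_FLOOR:
        print(f"ALERT: accuracy {accuracy:.1%} is below {ACCURACY_FLOOR:.0%} -- "
              "review the data and retrain or adjust the model.")
        return True
    return False
```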
6. Stop being experimental.
Most AI systems are still in pilot stages, but now is the time to pursue the right methodologies, technology deployments and staff skill upgrades before AI moves to production.
AI in production (and ultimately in company DNA) is a certainty. It’s not too soon to start rethinking and redeploying IT operations, methods and skillsets to be ready for it.