We’ve come a long way with autonomous computing. Here’s a look at what’s working in production, and areas that are still considered a work in progress.

Mary E. Shacklett, President of Transworld Data

March 8, 2021


IT operations was an early focus area for autonomous computing, artificial intelligence, and automation. Now that we have some history, which automation and AI techniques are working in IT, and which have yet to make an impact?

Here are four areas of automation that work well in production now:

1. Network automation. Automation tools now enable IT to automate network provisioning, along with network health monitoring for performance, maintenance, and diagnostics.

This has eased the load on network administrators and technicians: they can attend to fine-tuning and proactive network planning while an automated tool monitors network performance and health and issues alerts when performance glitches or failing network components are detected. The automation facilitates replacement or remediation of network components long before an actual network outage occurs.
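To make that monitoring concrete, here is a minimal sketch of the kind of threshold check such a tool runs; the metric names, thresholds, and alert path are illustrative assumptions, not any particular vendor's API.

    # Sketch of automated network health monitoring: poll device metrics,
    # compare them to thresholds, and alert before an outage occurs.
    # Metrics, thresholds, and alert delivery are illustrative assumptions.
    THRESHOLDS = {
        "packet_loss_pct": 2.0,   # alert on sustained loss above 2%
        "latency_ms": 150.0,      # alert on round-trip latency above 150 ms
        "cpu_util_pct": 90.0,     # alert when a switch or router CPU runs hot
    }

    def check_device(device_name, metrics):
        """Return alert messages for any metric over its threshold."""
        alerts = []
        for metric, limit in THRESHOLDS.items():
            value = metrics.get(metric)
            if value is not None and value > limit:
                alerts.append(f"{device_name}: {metric}={value} exceeds {limit}")
        return alerts

    # Example: one polled snapshot from a hypothetical edge switch
    snapshot = {"packet_loss_pct": 3.4, "latency_ms": 88.0, "cpu_util_pct": 61.0}
    for alert in check_device("edge-switch-01", snapshot):
        print("ALERT:", alert)    # in production this would page the network team

In a real deployment, the same check runs continuously against live telemetry rather than a single snapshot.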

If you can avoid downtime, the operational savings for the company are significant, since the average downtime incident now costs companies an estimated $5,600 per minute, according to Gartner.  

2. Security monitoring automation. Almost every company uses some kind of security automation to monitor networks, applications, and data for malware penetration or unauthorized access.

With IT infrastructure as complex as it is today, there are numerous points through which a security breach can occur. It’s the job of software to detect these intrusions and to record the who, what, where and when of each attempt. Ideally, an attempted intrusion can be stopped automatically at the point of entry. If it can’t, there is at least a trail that IT can follow to fight the breach.
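As a rough illustration of that “who, what, where and when” trail, the sketch below logs each access attempt and flags a source address after repeated failures; the field names and the five-attempt rule are assumptions made for the example.

    # Sketch of intrusion-attempt logging: record who/what/where/when for each
    # event and flag a source once it crosses a simple failure threshold.
    # Field names and the threshold are illustrative assumptions.
    from collections import Counter
    from datetime import datetime, timezone

    failed_attempts = Counter()
    FAIL_LIMIT = 5    # flag a source after five failed attempts

    def record_event(user, action, source_ip, success):
        event = {
            "who": user,
            "what": action,
            "where": source_ip,
            "when": datetime.now(timezone.utc).isoformat(),
            "success": success,
        }
        print("AUDIT:", event)    # in practice, write to a SIEM or log store
        if not success:
            failed_attempts[source_ip] += 1
            if failed_attempts[source_ip] >= FAIL_LIMIT:
                print(f"ALERT: possible intrusion from {source_ip} "
                      f"({failed_attempts[source_ip]} failed attempts)")

    record_event("jdoe", "login", "203.0.113.7", success=False)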

Tools for security monitoring are available for networks, applications, data, the internet, and endpoints. Organizations have embraced these tools, knowing that a security breach can cost them an average of more than $3.6 million, according to a 2020 Ponemon estimate.

3. Automation for virtual OS deployment. Companies such as SUSE, IBM, and VMware now provide automation that enables IT to deploy virtual operating system instances automatically for testing applications before they are moved into production.

By automating OS deployment for application testing, hundreds of hours of expensive DBA and software support staff time are saved. Errors are also reduced, since test OS deployment was formerly done “by hand,” with IT staff coding the deployment scripts.

OS deployment automation software can also be set to remove virtual OS instances after it detects that an instance hasn’t been used for an IT-defined number of days (e.g., 60 days). This preserves server space and eliminates waste.
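A minimal sketch of that idle-instance rule might look like the following; the inventory records, the 60-day window, and the decommission step are assumptions for illustration, not a specific vendor’s tooling.

    # Sketch of the idle-instance rule: remove test OS instances that have not
    # been used for an IT-defined number of days. The inventory and the
    # decommission call are illustrative assumptions.
    from datetime import datetime, timedelta

    IDLE_LIMIT = timedelta(days=60)    # IT-defined retention window

    test_instances = [
        {"name": "test-os-linux-01", "last_used": datetime(2021, 1, 2)},
        {"name": "test-os-linux-02", "last_used": datetime(2021, 3, 1)},
    ]

    def reap_idle_instances(instances, now):
        """Return the names of instances idle longer than IDLE_LIMIT."""
        stale = [i["name"] for i in instances if now - i["last_used"] > IDLE_LIMIT]
        for name in stale:
            print(f"Decommissioning unused test instance: {name}")
            # real tooling would call the hypervisor API to delete the VM here
        return stale

    reap_idle_instances(test_instances, now=datetime(2021, 3, 8))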

While it’s hard to define the exact amount of staff time saved by using automated OS deployment, most IT shops have been using it for years, and have recognized the benefit.

4. ETL automation. Integration is one of IT’s most painful jobs. Now with so many diverse forms of data coming in from myriad sources, the job has gotten tougher. This is where the automation potential of extract-transform-load (ETL) software pays off.

Many of these ETL packages come with over 200 pre-defined APIs to the most commonly used commercial software tools. This makes the job of importing, transforming and then loading data into different systems simpler. ETL tools also allow IT to define its own business rules for the ETL process.
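To illustrate the kind of business rule IT might define, here is a bare-bones extract-transform-load pass written in plain Python; the sample records, the currency rule, and the “load” step are assumptions for the example, not a particular ETL product.

    # Minimal extract-transform-load sketch with IT-defined business rules:
    # require a customer ID and normalize amounts to USD. The input data and
    # the load target are illustrative assumptions.
    raw_records = [    # "extract": rows pulled from a source system
        {"customer_id": "C100", "amount": "19.99", "currency": "USD"},
        {"customer_id": "",     "amount": "5.00",  "currency": "EUR"},
        {"customer_id": "C101", "amount": "42.50", "currency": "EUR"},
    ]

    EUR_TO_USD = 1.10    # business rule: report everything in US dollars

    def transform(record):
        """Apply business rules; return None to reject a record."""
        if not record["customer_id"]:
            return None    # rule: customer ID is required
        amount = float(record["amount"])
        if record["currency"] == "EUR":
            amount *= EUR_TO_USD
        return {"customer_id": record["customer_id"], "amount_usd": round(amount, 2)}

    loaded = [row for row in map(transform, raw_records) if row is not None]
    print(loaded)    # "load": write the cleaned rows to the target system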

The net result is that manual development of APIs is dramatically reduced, as is error potential.

While it is hard to put a finger on how much integration work costs IT, software engineer Andrew Park estimates that a single API design could cost upwards of $10,000. Multiply that by 200 APIs, and this becomes a significant expense.

Here’s a look at automation areas that have shown promise but have yet to reach their full value potential:

Automatic failover for disaster recovery. For more than 10 years, major computer providers have offered automatically triggered disaster recovery failover for the mainframes and servers that companies operate in their data centers.

The value proposition has always been clear: Once you put in your business rules for triggering failover, the system can do DR by itself in a “lights-out” data center. This shortens response time and lets the DR process start at any hour to ensure uninterrupted service.

But as the chief information officer of a large payment processor in the EU told me, “It was a tantalizing concept -- but at the end of the day, I would be the one in front of the board and the stakeholders. It was alright for the system to alert us that we were in danger of failing -- but I had to be the one to push the button.”

This feeling is almost universal among CIOs.
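What that looks like in practice, automated detection with a person on the trigger, can be sketched roughly as below; the health check, the confirmation prompt, and the failover call are hypothetical placeholders, not a real DR product’s interface.

    # Sketch of DR failover with a human in the loop: the system evaluates the
    # failure condition against business rules, but a person must confirm
    # before failover runs. All names and checks are hypothetical placeholders.
    def primary_site_healthy():
        # placeholder: in reality this would aggregate monitoring signals
        return False

    def start_failover_to_secondary():
        # placeholder: real tooling would promote the secondary site here
        print("Failing over to secondary data center...")

    if not primary_site_healthy():
        print("ALERT: primary site is failing DR health checks.")
        answer = input("Confirm failover to secondary site? (yes/no): ")
        if answer.strip().lower() == "yes":
            start_failover_to_secondary()
        else:
            print("Failover deferred; continuing to monitor.")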

DevOps automation. The value proposition of DevOps automation is that it can automate every point of the DevOps life cycle (e.g., setup and configuration, code generation, testing, deployment, monitoring). This saves IT staff time and opens the methodology to non-IT or para-IT developers with limited IT backgrounds.

In this sense, the DevOps automation value proposition piggybacks on the value proposition offered by its predecessors (e.g., third- and fourth-generation languages and report generators). The beauty of the automation is that you don't have to cut code or know much about underlying IT infrastructure.

Here’s the catch: If you need to fine-tune applications for performance or insert specialized code for your particular IT operation, you still need to tweak.

DevOps automation can and does generate batch and online applications, and it can do so well. But if you have mission-critical processing needs, such as exceptionally rapid transaction throughput for a hotel or airline reservation system, DevOps automation, with its excess generated code, will not deliver that performance.

Training automation. The Covid-19 pandemic has seen surges in online IT training, which staff can take on their own. The training can be great -- but not if IT thinks it can automate away all its training responsibilities.

Consider this: A junior database analyst takes an online class in a major database product the company uses. The goal is to make the junior person proficient in database design, development, and deployment so that person can take on some of the DBA’s more tedious duties.

The junior person successfully completes the classes and is ready to use new skills, but the busy DBA doesn’t have the time to assign a project. Over time, the junior person begins to lose the skills learned because there are no opportunities to apply them at work. The company also loses its training investment.

Batch “tweaking” automation for the “lights-out” data center. Today, there are many data centers that run in a lights-out fashion at night. These data centers use software that automatically processes a sequence of “batch run” jobs that maintain systems and produce reports and system refreshes before employees arrive in the morning.

The catch with batch sequencing automation comes when IT has to reorganize or fine-tune these batch runs, especially in mainframe shops that use batch job control languages such as JCL.

It’s a tedious job to tweak and revise batch run sequences, so automation of these tasks is valuable because it can lighten IT’s load. Undoubtedly, these tools will improve over time. But for now, IT still finds itself in a role where it must “hand tweak” batch run sequences, since there are many contingencies and relationships between jobs and data that must be considered, and that are unique to every IT shop.
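Part of what makes hand-tweaking necessary is that batch jobs depend on one another and on shared data sets. The sketch below shows, with assumed job names and dependencies, how a tool can derive a valid run order (a topological sort); the shop-specific contingencies described above still sit on top of this.

    # Sketch of ordering a nightly batch run from job dependencies using a
    # topological sort. Job names and dependencies are illustrative
    # assumptions; real shops layer their own contingencies on top.
    from graphlib import TopologicalSorter    # standard library, Python 3.9+

    # each job mapped to the jobs it depends on
    dependencies = {
        "extract_orders": set(),
        "extract_inventory": set(),
        "load_warehouse": {"extract_orders", "extract_inventory"},
        "refresh_reports": {"load_warehouse"},
        "purge_temp_files": {"refresh_reports"},
    }

    run_order = list(TopologicalSorter(dependencies).static_order())
    print("Tonight's batch run order:", run_order)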

 

Follow up with these articles on automation:

Automation Revs Pandemic IT Toolbox

How to Get Automation Right

We Will Need All the Automation We Can Get

 

About the Author(s)

Mary E. Shacklett

President of Transworld Data

Mary E. Shacklett is an internationally recognized technology commentator and President of Transworld Data, a marketing and technology services firm. Prior to founding her own company, she was Vice President of Product Research and Software Development for Summit Information Systems, a computer software company; and Vice President of Strategic Planning and Technology at FSI International, a multinational manufacturer in the semiconductor industry.

Mary has business experience in Europe, Japan, and the Pacific Rim. She has a BS degree from the University of Wisconsin and an MA from the University of Southern California, where she taught for several years. She is listed in Who's Who Worldwide and in Who's Who in the Computer Industry.

