Future Software: Only as Good as the People Who Build It - InformationWeek


Charles Babcock

Future Software: Only as Good as the People Who Build It

In a world increasingly dependent on software, we may need to develop the art of falling back to the most recent system that worked.

It was 1986, and as a relatively new resident of New Jersey, I looked over our auto registration renewal that had just come in the mail. Then I did a double take. The form had our name and address listed correctly, but the vehicle to be re-registered was someone else's.

I was a barely wet-behind-the-ears technology journalist, but even I could see the implications. The database system sending out registration notices was malfunctioning; at least some vehicle owners were being asked to register someone else's car. With all the information on the form correct except for the vehicle, it seemed to me it might be a modern, relational database system at work. Only a relational database could mix and match information in supposedly foolproof ways. And here was an example of what happened when something in such an application went wrong.

An older hierarchical or network database system, such as Cullinet's IDMS or IBM's IMS, couldn't retrieve information out of order. If the correct car/owner data went into the system together, they'd come back out together. That might be the only thing the system could do, but its inflexibility meant it would get that much right.

Indeed, N.J.'s Department of Motor Vehicles was using a brand-new system based on a Datacom/DB relational database, and it was misfiring. A consultant had used Applied Data Research's database and its Ideal fourth-generation language to build the application, and had botched the job. It was a regrettable black eye for ADR, which had good technology but wasn't able to prevent it from being misapplied.

ADR was located in Princeton, N.J., and as the New York correspondent for Computerworld, I listened as CEO Martin Goetz faced the storm of questions and answered honestly, regardless of the consequences. Later that year, the firm he had painstakingly built up over 27 years went on the auction block and was acquired by Ameritech; two years after that, it was sold to Computer Associates. Its market value had been affected by events, I'm sure, and it had to be a disappointing outcome for Goetz, a true industry pioneer.

At a time when the relational systems with their ad hoc queries weren't fully trusted, becoming the poster child for a malfunctioning database was going to have debilitating consequences. Being caught exposed on the leading edge was a major hazard of the independent software business.

Despite occasional setbacks, the software industry as a whole prospered and grew. I was surprised at the severe limits of what computers could do when I discovered them in an introductory Fortran course. Gradually my own limited point of view opened up to where I could see how software was constantly capturing more and more of the reality around us and putting that information to work. Whatever rules and relationships could be captured in code could also be manipulated with variables and processed with the computer's Boolean logic.

Source: Aligned Data Centers

It took longer to see how the most skilled practitioners kept pushing back the limits of what could be represented, capturing greater and greater complexities. Whenever one set of goals had been achieved, they moved on to the next level of abstraction.

We now create virtual realities through which a person's digital stand-in, or avatar, can act on behalf of its owner. In 1984, when I started in this business, such a thing was science fiction. Digital assistants learn from our patterns what we like to eat, what we tend to buy, and how we get to work. Machine learning creates giant compendiums of data on the constant operation of machines, data that when analyzed can keep engines running and plants open. Cognitive computing can take many different types of sensory data and merge them into something that at least vaguely resembles how humans perceive their world and what to respond to in it.

IBM's Watson has not only an ability to win at Jeopardy but also to use the human genome in diagnosing disease. Mendel Rosenblum did what others said couldn't be done and achieved the precise emulation of the x86 instruction set in software, creating the world of virtual machines. Given the world's appetite for search, Google had to learn how to launch a hundred million containers at a time and manage them through Borg and Omega, eventually yielding an offshoot for the rest of us: Kubernetes.

Software is moving the boundaries of human capabilities forward quickly, too quickly sometimes for the average human to keep up with it. Increasingly complex software systems will attempt to manage hundreds of thousands of driverless cars in a single metropolitan area. They will try to govern all the variables involved in transporting humans on a six-month journey to Mars and bring them back.

Somewhere in the midst of all of this there is sure to be another Department-of-Motor-Vehicles screw-up, a setback where the software almost did what it was intended to do but somehow fell a nanometer short. Its designers hadn't foreseen every eventuality. The cloud had always been highly reliable -- right up until the moment when its own design started to work against it. (See the post-mortems on Amazon's 2011 Easter weekend or Azure's 2012 Leap Year Day meltdown.)

As the future unfolds, we will be relying on software more than ever, but let's never forget it's only as good as the humans who build it. In key situations, such as a missile launch or air traffic control, knowing the process of falling back to some less-complicated position after a system failure may become an art form: our most advanced system has hit a breakpoint; let's revert to the thing that we know works, until we figure out what went wrong.
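The fallback discipline described above can be sketched in code. This is a hypothetical illustration, not anything from the systems mentioned in this column: the ambitious new path runs first, and any failure triggers a revert to a simpler, known-good alternative. All function names here are invented for the example.

```python
# A minimal sketch of "fall back to the most recent system that worked":
# try the advanced system; on any failure, revert to the known-good one.

def with_fallback(primary, fallback):
    """Wrap two implementations; use fallback when primary fails."""
    def run(*args, **kwargs):
        try:
            return primary(*args, **kwargs)
        except Exception:
            # Breakpoint hit: revert to the thing we know works.
            return fallback(*args, **kwargs)
    return run

def smart_route(addr):
    # The ambitious new system, with an eventuality its designers missed.
    raise RuntimeError("unforeseen eventuality")

def plain_route(addr):
    # The older, inflexible system that gets this much right.
    return f"route to {addr} via fixed table"

route = with_fallback(smart_route, plain_route)
print(route("Princeton, NJ"))  # prints "route to Princeton, NJ via fixed table"
```

The design choice worth noting is that the fallback path must stay simple enough to trust without analysis; the moment it grows its own complexity, there is nothing left to revert to.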

Whether such a thing is possible may determine how well we will survive and thrive in our digital future.

[Editor's note: After more than 30 years of outstanding work covering the IT community and the software sector, Charlie Babcock will be retiring on Friday. He's taking time this week to reflect, and to put our technology progress into perspective.]

Charles Babcock is an editor-at-large for InformationWeek and author of Management Strategies for the Cloud Revolution, a McGraw-Hill book. He is the former editor-in-chief of Digital News, former software editor of Computerworld and former technology editor of Interactive ... View Full Bio
Charlie Babcock, User Rank: Author
8/2/2017 | 1:34:32 PM
Low coding is in your future, but not only low code
AndrewfOP, Low Code is certainly going to play a role. But the really ambitious, leading edge systems will be produced by disciplined and talented teams, and their challenge will be to think of everything that might go wrong before it has a chance to. It remains a big challenge. Does the OP in your handle mean you're in operations?
User Rank: Apprentice
8/2/2017 | 10:45:01 AM
Good People = Good Software?
Although one cannot deny that good software needs to be made by talented programmers, good programmers don't necessarily produce good software, especially software that needs to be maintained by someone else. Far too often, original software code lacks proper documentation or any apparent logic, which leads to an all-consuming effort for a supposedly simple change and, at worst, a complete rework of the original program.

I believe the future of software programming will be in "low code": programming platforms that require relatively simple logic, which allow laymen to make sense of what a software package is supposed to do and attempt repairs when necessary. Developing reliable and efficient low-code platforms would thus be the domain where the talented software developer could truly shine and provide continued progress, rather than reinventing the wheel with an ever-growing set of programming languages and software platforms.
