Electronic vaulting promises an end to shipping backup tapes, The Advisory Council says. Also, vendor software generally improves over time, but teams maintaining homegrown software need to beware of those "one-at-a-time" changes.
Editor's Note: Welcome to SmartAdvice, a weekly column by The Advisory Council (TAC), an advisory service firm. The feature answers two questions of core interest to you, ranging from leadership advice to enterprise strategies to how to deal with vendors. Submit questions directly to firstname.lastname@example.org
Question A: How can we eliminate shipping (and possibly losing) backup tapes?
Our advice: Disaster recovery and business-continuity planning are essential to a prudently managed business. Disasters can be major, such as fires, floods, tornadoes, and earthquakes, which can destroy a whole data center, or minor, such as the loss of a single important file. A range of backup techniques can be employed depending upon business requirements.
Classification Of Data And Backup
Backup and recovery investment decisions should be made on two main criteria:
Recovery Time Objective: Elapsed time from the occurrence of the disaster until business operations are restored
Recovery Point Objective: Point in time before the disaster to which data will be restored
Data is classified based on these objectives, which are derived from business needs. A high-availability application (e.g., Web-order processing, or an automatic teller machine network) may demand immediate failure recovery, which requires redundant systems. A document-management system, in contrast, may require a different kind of disaster-recovery scenario.
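To make the classification concrete, here is a minimal sketch of checking a backup schedule against recovery objectives. The data classes, time values, and function names are illustrative assumptions, not figures from the column; the key idea is simply that a backup taken every N hours can lose up to N hours of data, so the interval must not exceed the recovery point objective.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class DataClass:
    name: str
    rto: timedelta  # recovery time objective: max time to restore operations
    rpo: timedelta  # recovery point objective: max tolerable data-loss window

def backup_interval_ok(cls: DataClass, backup_interval: timedelta) -> bool:
    """A backup taken every `backup_interval` can lose at most that much
    data, so the interval must not exceed the recovery point objective."""
    return backup_interval <= cls.rpo

# Hypothetical classifications for the two examples in the text:
web_orders = DataClass("web-order processing",
                       rto=timedelta(minutes=5), rpo=timedelta(seconds=0))
documents = DataClass("document management",
                      rto=timedelta(hours=24), rpo=timedelta(hours=24))

print(backup_interval_ok(documents, timedelta(hours=12)))   # nightly backups suffice
print(backup_interval_ok(web_orders, timedelta(hours=12)))  # needs redundant systems, not backups
```

A zero RPO, as in the teller-machine example, can never be met by periodic backups alone, which is why high-availability applications require redundant systems rather than a faster backup cycle.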
In addition, there may be legal retention requirements for long-term data storage in electronic, optical, or paper form. Storage companies offer electronic record retention in secure locations for this purpose, with guarantees to deliver media when required. Their delivery trucks typically make regular trips to a particular area to pick up and drop off tapes.
The transportation of tapes by delivery truck, and the loss of such tapes, have recently made embarrassing news for some companies. These companies also may face liability issues, especially if personal data from these tapes ends up in the wrong hands.
Several storage-application service providers offer an alternative to traditional backups -- secure backup over the Internet or dedicated circuits. This type of service is referred to as electronic vaulting. The services may use intermediary storage devices at the customer site, from which the service electronically transports the data to a secure site using hierarchical storage management, which keeps the least-accessed data in the lowest-cost storage.
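The hierarchical storage management mentioned above can be sketched as a simple age-based placement policy that demotes the least-accessed data to the cheapest tier. The tier names and thresholds below are illustrative assumptions, not any vendor's actual policy:

```python
from datetime import datetime, timedelta

# Illustrative tiers, fastest/most expensive first; thresholds are assumptions.
TIERS = [
    ("disk", timedelta(days=30)),      # data accessed within 30 days stays on disk
    ("optical", timedelta(days=365)),  # accessed within a year moves to optical
    ("tape", None),                    # everything older migrates to tape
]

def choose_tier(last_access: datetime, now: datetime) -> str:
    """Place data on the first (costliest) tier whose age limit it satisfies;
    data older than every limit falls through to the final, cheapest tier."""
    age = now - last_access
    for tier, limit in TIERS:
        if limit is None or age <= limit:
            return tier
    return TIERS[-1][0]

now = datetime(2005, 7, 1)
print(choose_tier(datetime(2005, 6, 20), now))  # recently used: "disk"
print(choose_tier(datetime(2004, 1, 1), now))   # long untouched: "tape"
```

The same policy, run continuously at the vaulting service's site, is what keeps the bulk of rarely touched backup data on the lowest-cost media while recent data stays quickly restorable.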
These services can achieve a faster recovery time, since there's no time lag involved in physically retrieving and loading backup media; the data is transported to the recovery facility over the network. Depending upon the software used and backup frequency, these services can also achieve a more recent recovery point, although similar improvements can be obtained using traditional backup systems.
An electronic-vaulting vendor also can help in assessing the current backup and disaster-recovery procedures. Vendor selection criteria should include:
Functionality and support for backup and restore
Data security during transport and storage
As more vendors offer electronic vaulting, it has become cost-competitive with traditional backup, even for large backup volumes. Businesses should review their current backup procedures, including previously unexplored exposures to liability and loss during transportation. A comprehensive cost-of-ownership comparison that includes those exposures will help you make a sound backup-system decision.
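A cost-of-ownership comparison of that kind reduces to simple arithmetic once expected liability is expressed as probability times exposure. All figures below are hypothetical, chosen only to show the shape of the calculation:

```python
def annual_tco(media_and_service: float, transport: float,
               expected_loss_liability: float) -> float:
    """Total annual cost of a backup approach, where expected liability
    is the probability of losing media times the exposure if it is lost."""
    return media_and_service + transport + expected_loss_liability

# Hypothetical figures, for illustration only:
tape = annual_tco(media_and_service=40_000, transport=12_000,
                  expected_loss_liability=0.01 * 2_000_000)   # 1% chance of a $2M loss
vault = annual_tco(media_and_service=60_000, transport=0,
                   expected_loss_liability=0.001 * 2_000_000)  # no trucks, lower loss risk

print(tape, vault)  # vaulting's higher service fee is offset by liability savings
```

The point of the sketch is that a vaulting service with a higher sticker price can still win once transport and expected-loss terms are included, which is exactly the comparison the column recommends making.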
-- Humayun Beg
Question B: Vendor software products seem to improve in quality over time. Our applications seem to degrade. What are they doing differently?
Our advice: Software companies that develop products for sale take a different approach to "maintenance" than we tend to see in end-user development organizations.
A Software Company's Product Life Cycle
A software product is commonly considered to have five phases in its life -- pre-introduction, introduction, growth, maturity, and decline.
Product quality, meanwhile, typically improves steadily across those phases. Ironically, the product usually reaches superb quality just about the time new product sales start to decline.
Why does quality keep improving? Almost without exception, software companies enhance their products through a series of releases. In each release, bug fixes to the last release, performance tuning, compliance with new standards, and feature additions are grouped -- and the development of the group treated with the same methodological rigor that went into release 1.0 of the product. The rigor explicitly includes systemwide testing, noting the impact not just of each code change, but of all the code changes together. (Pressure from the user community for fast fixes for bugs is handled by distributing temporary patches.) Using this rigorous approach, it isn't unusual to see complex software products live for two or three decades, all the while staying current with changing business requirements and with technological advances. (Try to remember the last time you reported a bug in z/OS or OpenVMS.)
In contrast, it isn't unusual to see in-house applications that have to be abandoned in less than a decade because "no one dares open them up."
An In-House Application's Life Cycle
The figure shown above for a software product's life cycle could be used for an in-house application by simply re-labeling the vertical axis as quality instead of sales. Many in-house applications are constructed with admirable methodological rigor in the pre-introduction phase. Introduction is carefully managed, and early bugs are caught and fixed, improving application quality. There may even be a continuation of systemwide regression testing in this early stage. Shortly after introduction, however, there's a tendency in many shops to start treating each change order as isolated. The change is developed, tested just in its host program, and moved into production. For a time this works, and system quality seems to improve (the growth phase). Then decay sets in. Because of a multiplicity of "one-at-a-time" changes, the system becomes a minefield of lurking unforeseen consequences, and each new change takes longer to make. The phenomenon is amplified by helpful developers, whose friends in the user community keep asking for "just this little change while you're in there." The little changes do not, of course, get documented. In time, the application falls behind the rate of change in business and technology.
Try to think more in terms of release cycles for groups of changes, with rigorous systemwide testing. Take the pressure off with temporary patches, but be ruthless about getting rid of them at next release.
-- Wes Melling
Humayun Beg, TAC Expert, has more than 18 years of experience in business IT management, technology deployment, and risk management. He has significant experience in all aspects of systems management, software development, and project management, and has held key positions in directing major IT initiatives and projects.
Wes Melling, TAC Expert, has more than 40 years of IT experience with a focus on enterprise IT strategies. He is founder and principal of Value Chain Advisors, a consulting boutique specializing in manufacturing supply-chain optimization. He has been a corporate CIO, a Gartner analyst, and a product strategist at increasingly senior levels.