Editor's Note: Welcome to SmartAdvice, a weekly column by The Advisory Council (TAC), an advisory service firm. The feature answers three questions of core interest to you, ranging from career advice to enterprise strategies to how to deal with vendors. Submit questions directly to [email protected]
Question A: What criteria should be included in the due-diligence assessment of IT at an acquisition candidate?
Our advice: The objectives of any due-diligence assessment of IT are fivefold.
In performing a due-diligence assessment of the IT department, a working template is a useful guide through the discovery process. This template should begin with an overview of the IT department. What's the structure of the IT organization? How is it staffed, and is that staffing adequate? Is there an up-to-date strategic technology plan? What's the current fiscal-year budget, and what were the previous year's actual expenses? Are the results of any recent systems audits available? What's the book value, depreciation, or lease and maintenance schedule for all IT assets? Does the firm have an up-to-date technology asset inventory?
Next, examine the data center and any networks in place. Where are the data center(s) located? What host computer(s) are installed, and what are their make, model, and configuration? What does the host communications network consist of, including all communications processor(s) and communications software? Are uptime and reliability statistics, and incident and resolution reports, available for the past 12 months? Concerning the communications network, what types of circuits are installed? What's the topology of the local-area network at each location?
In the area of application software development, if such development is done at the company, obtain a list describing the programming development environment (by each system in place). See that the list includes all programming languages and development tools used and outlines the type of change-management or version controls in place. Are any current system enhancements in progress or are there any planned system enhancements? What's the state of documentation at the company? Is it ongoing or only by exception? Is there documentation on systems policies and procedures? Are operations policies and procedures in place for all major functional areas? Is there application-development documentation in place? Does it cover current standards and policies, project-management methodologies, documentation development, and ongoing support? Is there documentation covering training procedures?
Concerning any services provided by outside vendors, there should be a description of each contracted service, its annual cost, contract terms, and remaining time on the contract. Obtain a copy of each contract and check its associated service-level agreements. Which mission-critical information systems are supported by outside vendors? What are the service providers' information-security policies? Are there reports or measurement tools for monitoring vendor performance? How adequate are the contingency plans of all vendors providing outside services? Has management ensured that the institution's contingency and business-resumption plans are compatible with and complement those of the service providers? Are any of these service providers located in a foreign country?
Finally, taking a look at the desktop and laptop environment, can the firm provide an inventory of all desktop and laptop hardware deployed? Is there an inventory of all installed software packages on each piece of hardware? Does custom software reside on the desktop or laptops? If so, is there an inventory of the custom developed software?
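The discovery areas above can be organized into a machine-readable checklist so that unanswered items are easy to spot. The sketch below is one possible structure; the class and field names are illustrative, not part of any standard template.

```python
# Illustrative due-diligence checklist structure. Section and field
# names are hypothetical, chosen to mirror the discovery areas above.
from dataclasses import dataclass, field


@dataclass
class ChecklistItem:
    question: str
    answered: bool = False
    notes: str = ""


@dataclass
class DueDiligenceTemplate:
    sections: dict[str, list[ChecklistItem]] = field(default_factory=dict)

    def add(self, section: str, question: str) -> None:
        self.sections.setdefault(section, []).append(ChecklistItem(question))

    def open_items(self) -> list[tuple[str, str]]:
        """Return (section, question) pairs still awaiting answers."""
        return [(s, i.question)
                for s, items in self.sections.items()
                for i in items if not i.answered]


template = DueDiligenceTemplate()
template.add("IT organization", "Is there an up-to-date strategic technology plan?")
template.add("Data center", "Are uptime statistics available for the past 12 months?")
template.add("Vendors", "Does each contract have a service-level agreement?")
print(len(template.open_items()))  # prints 3: all items still open
```

Grouping questions by section this way also makes it straightforward to assign each area to a different reviewer and roll up the remaining gaps.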
-- Stephen Rood
Question B: What backup technologies would you recommend for a data center with 100 terabytes of data?
Our advice: There's no question that backup technologies haven't kept up with companies' seemingly insatiable demand for data. Combined with the increased pressure to maintain 24-by-7 application availability, the window available for backup and maintenance is rapidly being eliminated.
The issues to solve are bandwidth, disk capacity, and recovery speed. Even with the latest technology, data transfer takes time. The most obvious method of protecting the data center is to build fully redundant systems clusters and deploy SAN (storage area network) devices. These are good options, but they're expensive and they don't solve the offsite and archive problem. Getting the data offsite can be a major headache; even using gigabit-per-second switches, you can only stuff so much down a network pipe. There's been some work on improving the algorithms for syncing data to minimize the amount that needs to be transferred, which helps with syncing the off-sites and hot fail-over systems.
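The idea behind those improved syncing algorithms, sending only the data that has changed rather than the full data set, can be sketched with fixed-size block hashing. This is a simplification of what tools such as rsync do (real delta-transfer algorithms use rolling checksums to cope with insertions); block size and hash choice here are illustrative.

```python
# Block-level delta sync sketch: only blocks whose hashes differ from
# the replica's are transferred. Fixed-size blocks are a simplification;
# production tools use rolling checksums to handle shifted data.
import hashlib

BLOCK = 4096  # illustrative block size


def block_hashes(data: bytes) -> list[str]:
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]


def changed_blocks(old: bytes, new: bytes) -> list[int]:
    """Indices of blocks in `new` that must be sent to a replica holding `old`."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or old_h[i] != h]


old = b"a" * 10000
new = b"a" * 5000 + b"b" * 5000   # second half modified
print(changed_blocks(old, new))   # prints [1, 2]: first block unchanged
```

When only a small fraction of blocks change between sync intervals, the bytes on the wire shrink proportionally, which is what makes offsite replication over a limited pipe tractable.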
Looking at the technology currently available for protecting such massive amounts of data, there are huge tape-archive systems holding hundreds or thousands of tapes, but even the latest petabyte-scale tape systems are already close to their limits in terms of recovery speed and data management. The limits of disk capacity also cause problems. The industry is impatiently waiting for long-promised higher-density, higher-speed disks--15,000-RPM disks are coming onto the market, but it's unclear how much faster disks can go.
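A back-of-envelope calculation shows why recovery speed is the binding constraint: even a fully saturated gigabit link needs on the order of nine days to move 100 terabytes.

```python
# Time to move 100 TB over a fully utilized 1 Gb/s link, ignoring
# protocol overhead (which only makes the picture worse).
data_bits = 100e12 * 8          # 100 terabytes expressed in bits
link_bps = 1e9                  # 1 gigabit per second
seconds = data_bits / link_bps  # 800,000 seconds
print(f"{seconds / 86400:.1f} days")  # prints "9.3 days"
```

A restore window measured in days, not hours, is what pushes large shops toward hot fail-over sites and SAN replication rather than relying on bulk restores.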
The technology to fully protect petabytes of data at a reasonable cost isn't quite here yet. In the meantime, be prepared to spend large amounts on clusters and SAN devices. The option of a redundant hot fail-over data center might seem costly, but put in terms of business continuity, it could be the best option.
-- Beth Cohen
Question C: With Capability Maturity Model on the way out, what do we need to know about CMMI?
Our advice: CMMI, short for "Capability Maturity Model Integration," is the latest in a line of frameworks published by the Software Engineering Institute at Carnegie Mellon University that are aimed at evaluating and improving the processes used to develop products and services. Its predecessor, CMM for software, debuted in 1990 and has become the de facto standard for measuring and improving the software-development process. CMMI is slated to supplant CMM; the Software Engineering Institute ended CMM training in 2003 and is letting all CMM-assessor licenses lapse as of the end of 2005.
CMMI is an integrated suite of products consisting of a framework, several implementation models, an assessment method, and training materials. It consolidates, extends, and reconciles differences between three pre-existing Software Engineering Institute models--the CMM for software, the Systems Engineering Capability Model, and the Integrated Product Development Capability Maturity Model.
CMM is characterized by its five successively higher levels of process maturity (Initial, Repeatable, Defined, Managed, and Optimizing). To attain a level, all of the requisite key process areas must be in place, even if some of those areas are of lesser importance to a particular organization.
To offer more flexibility, CMMI provides two ways to approach process improvement. Organizations can choose to follow either the "staged" or the "continuous" representation. The staged representation is basically the same as CMM's "stairstep" approach, with some changes and additions to the associated key process areas. The continuous representation instead focuses on process areas rather than the attainment of organizational maturity levels. It defines four categories of process areas--Process Management, Project Management, Engineering, and Support--and six capability levels applicable to those areas. Organizations choose where to focus their improvement efforts and how far to take them, then aim for the corresponding capability level. In both the staged and continuous approaches, the assessment criteria for each key process area are the same.
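The staged rule, that a maturity level is attained only when every requisite key process area is satisfied, can be sketched as follows. The level-to-KPA mapping below is abbreviated and illustrative, not the official list.

```python
# Staged-representation sketch: an organization's maturity level is the
# highest level for which all requisite key process areas (KPAs) are
# satisfied. The KPA names per level are abbreviated and illustrative.
LEVEL_KPAS = {
    2: {"requirements management", "project planning", "configuration management"},
    3: {"organizational process definition", "training program"},
    4: {"quantitative process management"},
    5: {"process change management"},
}


def maturity_level(satisfied: set[str]) -> int:
    level = 1  # Initial: no KPAs required
    for lvl in sorted(LEVEL_KPAS):
        if LEVEL_KPAS[lvl] <= satisfied:  # all KPAs for this level in place
            level = lvl
        else:
            break  # levels are cumulative; a gap blocks all higher levels
    return level


done = {"requirements management", "project planning",
        "configuration management", "training program"}
print(maturity_level(done))  # prints 2: level 3 incomplete, so 4+ unreachable
```

The `break` captures why a single neglected key process area, even one of lesser importance to the organization, caps the assessed level under the staged approach.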
Key Process Areas
CMMI has a greater number of key process areas than CMM (22 versus 18), and differentiates between basic and advanced ones. Several of the CMM key process areas were restructured or combined, and new areas (risk management, integration, verification, and validation) were added, making it somewhat difficult to map old and new key process areas one-for-one. In addition, CMMI's key process areas are distributed differently among the four higher maturity levels.
Making The Transition
Organizations in the midst of a CMM assessment must decide when and how they will transition to CMMI. CMM will continue to be a viable standard for the near future, so organizations close to completing a CMM assessment should continue on that path rather than incur the costs and delay of switching midstream to CMMI. Once an organization has completed its CMM assessment, it can plan to add CMMI's new key process areas and adjust existing ones, then seek a CMMI assessment if the costs and benefits can be justified.
For organizations that have completed a CMM assessment, the decision on when and whether to switch to CMMI is a practical one. CMMI is broader and more complex than CMM, and its associated assessment process is more time-consuming, rigorous, and costly, which may deter many internal IT organizations. Considering that the goal of CMMI is to improve the software-development process, rather than to attain a specific certification level, internal IT organizations that are satisfied with the state of their development processes have little incentive to move to CMMI in the short term.
-- Ian Hayes
Stephen Rood, TAC Expert, has more than 24 years' experience in the IT field, specializing in developing and implementing strategic technology plans for organizations, as well as senior project management and help-desk operations review. His consulting experience includes designing and implementing a state-of-the-art emergency 911 call center for the city of Newark, N.J., and managing technology refreshes for a major nonprofit entertainment organization and a large regional food broker; he has also worked at Coopers & Lybrand, General Foods, and Survey Research. He is the author of "Computer Hardware Maintenance: An IS/IT Manager's Guide," which presents a model for hardware-maintenance cost containment.
Beth Cohen, TAC Thought Leader, has more than 20 years of experience building strong IT-delivery organizations from user and vendor perspectives. Having worked as a technologist for BBN, the company that literally invented the Internet, she not only knows where technology is today but where it's heading in the future.
Ian Hayes, TAC Thought Leader, has more than 26 years' experience in improving the business returns generated by IT investments. He helps companies focus on value-creating projects and services by better targeting IT investments, improving the effectiveness of IT execution, optimizing the sourcing of IT activities, and establishing measurement programs that tie IT performance to business value delivered. He is the author of three IT books, most recently "Just Enough Wireless Computing," and hundreds of articles; he is a popular conference speaker, and his clients include many of the world's top corporations.