Metrics Make The Difference With Performance Monitors
Performance products take the pulse of your systems, network, applications, and clients. Network Computing helps you understand the pros and cons of different measurements to determine which tool is right for you.
Performance-management products collect data passively, actively or in a combination of the two. Each approach yields different results.
Passive data collection relies on actual traffic, deriving performance statistics from real users and real network and system conditions. An agent on an end user's PC, for instance, watches the network stack and reports throughput back to a central server; Lucent Technologies' Vital Suite uses passive agents of this kind. Passive probes monitor a data stream through a span port on a switch or a tap on a network segment. Probes from Compuware and WildPackets promiscuously decode the traffic to report on anything from packets to transactions.
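At its simplest, a passive collector just watches traffic that is already flowing. The sketch below is a toy illustration of that idea, not any vendor's agent: it assumes a Linux host and root privileges, opens a raw socket, counts the bytes seen during an arbitrary sampling interval, and derives a throughput figure that a central server could collect.

```python
import socket
import time

SAMPLE_SECONDS = 10  # assumption: arbitrary sampling interval

def passive_throughput_sample(interval=SAMPLE_SECONDS):
    """Passively count bytes on the wire and derive throughput.

    Linux-only and requires root; 0x0003 = ETH_P_ALL,
    i.e. capture frames of every protocol.
    """
    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                         socket.ntohs(0x0003))
    sock.settimeout(1.0)
    total_bytes = 0
    deadline = time.monotonic() + interval
    while time.monotonic() < deadline:
        try:
            frame = sock.recv(65535)   # one link-layer frame
            total_bytes += len(frame)
        except socket.timeout:
            continue                   # idle wire; keep waiting
    sock.close()
    return total_bytes * 8 / interval  # bits per second

if __name__ == "__main__":
    print(f"observed throughput: {passive_throughput_sample():.0f} bit/s")
```

The key property is visible even in a toy: the collector generates no traffic of its own, so it can only report on what real users happen to send.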
Active measurements insert a known load by replaying a transaction, such as fetching a Web page, making it possible to compare actual results with expected results. Active methods send real packets--sometimes full transactions, sometimes just dummy payloads--through the network. The Web site-monitoring services from Gomez and Keynote Systems are examples: their agents around the world periodically download Web pages, measuring and recording the results.
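The active technique is easy to approximate with nothing but the standard library. The sketch below is a simplified illustration of the approach, not Gomez's or Keynote's actual service; the target URL and the response-time threshold are placeholders. It replays a known transaction--an HTTP GET--times it, and flags the result when it exceeds the expected bound.

```python
import time
import urllib.request

TARGET_URL = "https://www.example.com/"   # placeholder target page
EXPECTED_SECONDS = 2.0                    # assumed acceptable response time

def active_page_check(url=TARGET_URL, expected=EXPECTED_SECONDS):
    """Replay a known transaction (an HTTP GET) and time it."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as response:
        body = response.read()            # force the full download
    elapsed = time.perf_counter() - start
    status = "OK" if elapsed <= expected else "DEGRADED"
    return status, elapsed, len(body)

if __name__ == "__main__":
    status, elapsed, size = active_page_check()
    print(f"{status}: fetched {size} bytes in {elapsed:.2f}s")
```

Because the agent supplies its own load, a run like this produces a data point on a fixed schedule whether or not any real user is on the site.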
You'd think the outcome would be similar in active and passive cases, but when no users are requesting pages, the passive approach can't see performance degradation. With active agents, in contrast, you can run a set of transactions across a newly upgraded application and be assured that everything is ready for real users.
Agents, More or Less
Agents are little busybody programs that run on servers, clients, switches and routers, collecting performance data. An agent might collect data about CPU utilization, application processes, database queries, network errors or even the execution of transactions.
In response to concern about having to maintain agents on devices, some performance products use a remote data-collection method referred to as "agentless." The agent is still there, but typically it's an existing one that doesn't require any care beyond turning it on. The most common agents of this type are SNMP MIB II agents found on routers, switches and hosts.
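The appeal of the agentless approach is that a poll of an agent already built into the device is all the instrumentation you need. As a rough sketch of what such a poller does--using the third-party pysnmp library, whose high-level API varies by version; the device address, community string and interface index are placeholders--the following reads the standard MIB II ifInOctets counter from a router or switch:

```python
# pip install pysnmp  (third-party library; 4.x-style hlapi shown)
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

DEVICE = ("192.0.2.1", 161)    # placeholder router/switch address
COMMUNITY = "public"           # the clear-text "password" of SNMP v1/v2c
IF_IN_OCTETS_1 = "1.3.6.1.2.1.2.2.1.10.1"  # MIB II: bytes in, interface 1

def poll_if_in_octets():
    """Issue one SNMP GET against an existing MIB II agent."""
    err_indication, err_status, err_index, var_binds = next(
        getCmd(SnmpEngine(),
               CommunityData(COMMUNITY, mpModel=1),  # mpModel=1 -> v2c
               UdpTransportTarget(DEVICE),
               ContextData(),
               ObjectType(ObjectIdentity(IF_IN_OCTETS_1)))
    )
    if err_indication or err_status:
        raise RuntimeError(err_indication or err_status.prettyPrint())
    for oid, value in var_binds:
        print(f"{oid} = {value}")

if __name__ == "__main__":
    poll_if_in_octets()
```

Nothing had to be installed on the router itself--which is the whole point, and also the limitation: you can only ask for what the built-in agent already counts.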
SNMP agents suffer from bad press because their community strings amount to passwords sent in the clear (at least in versions 1 and 2) and, like other agentless sources, the data they gather isn't very specific. Yet some agentless approaches can provide many details about a system's environment. IBM's WebSphere performance interface, for example, monitors applications and processes running in the application server.
Passive probes don't give you any of the hassles of agent access, installation or maintenance--they just sit on the wire collecting data. And they usually come as appliances, so all that's necessary for implementation is network access and an IP address.
But probes have two inherent weak points. First, in a switched network with many segments, you need either many probes or one large probe on an aggregated backbone segment. Probes like NetScout Systems' nGenius can gather RMON Layer 2 metrics, so an aggregated segment yields local Layer 2 metrics plus Layer 3 and above metrics for packets originating off the local segment. That's fine for most performance-management requirements, which focus on IT services at Layer 4 and above, with the most valuable performance data coming from application transactions. Second, aggregated segments carry such high traffic volume that collecting all that data is like drinking from a fire hose--you have to manage what data gets filtered out. This care and feeding is an often-overlooked part of probe-based data collection, and it may require professional services and training to tune and maintain.
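To make "managing what data gets filtered out" concrete, here is a toy illustration of probe-side filtering--not nGenius logic; the port list is a placeholder, and the same Linux raw-socket assumptions as above apply. It decodes just enough of each frame to discard everything except TCP traffic on a handful of application ports before any counting or analysis happens.

```python
import socket
import struct

KEEP_PORTS = {80, 443, 1521}   # placeholder: HTTP, HTTPS, Oracle

def probe_filter_loop():
    """Decode just enough of each frame to filter before counting.

    On an aggregated segment, a filter like this is what keeps the
    probe from drowning in traffic it was never asked to analyze.
    """
    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW,
                         socket.ntohs(0x0003))
    kept = dropped = 0
    while True:
        frame = sock.recv(65535)
        # Ethernet header: two 6-byte MACs, then a 2-byte EtherType.
        ethertype = struct.unpack("!H", frame[12:14])[0]
        if ethertype != 0x0800:            # not IPv4
            dropped += 1
            continue
        ip_header_len = (frame[14] & 0x0F) * 4
        if frame[23] != 6:                 # IP protocol field: not TCP
            dropped += 1
            continue
        tcp_offset = 14 + ip_header_len
        src_port, dst_port = struct.unpack(
            "!HH", frame[tcp_offset:tcp_offset + 4])
        if dst_port in KEEP_PORTS or src_port in KEEP_PORTS:
            kept += 1                      # hand off to real analysis here
        else:
            dropped += 1
```

Deciding what belongs in that keep list--and revisiting the decision as applications change--is exactly the ongoing care and feeding described above.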