Boundary has updated its operations intelligence system to collect data from a variety of data center monitoring systems. The data is streamed through Boundary's analytics engine to give operations managers a near real-time view of what's going on with their applications.
It may take a Cold War-era word to sum up what the latest version of Boundary does: collectivization. As of Tuesday, data streams into it from the Chef and Puppet configuration and provisioning systems and from Splunk's machine data analysis system. It comes in as well from the market-leading application performance monitoring (APM) systems New Relic and AppDynamics. The open source systems management tool Nagios contributes more, as does the Plexxi software-defined networking (SDN) switch. Additional sources will follow as both Boundary and its customers add adapters to the currently available library.
Boundary doesn't offer the deep dive into an application's code to find the part that's not working, the way an AppDynamics implementation does. Instead, it assembles a higher-level overview of how all applications in a data center are functioning. The analysis is performed through the Boundary analytics engine at the heart of its online service; there is no on-premises installation of the engine. A small "meter," or network traffic collector, is installed on each virtual machine being monitored. It streams packet header information on incoming and outgoing VM traffic to the engine.
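Boundary hasn't published its meter internals, but the key point is that only packet headers, not payloads, leave the host. A minimal sketch in Python (with a hypothetical function name) shows what extracting flow metadata from a raw IPv4/TCP header looks like:

```python
import struct

def summarize_ipv4_tcp(header: bytes) -> dict:
    """Parse a raw IPv4 + TCP header into flow metadata.

    Only header fields are inspected; the packet payload is never
    read, mirroring the lightweight collection the article describes.
    This is an illustrative sketch, not Boundary's actual meter code.
    """
    ver_ihl, _tos, total_len = struct.unpack_from("!BBH", header, 0)
    ihl = (ver_ihl & 0x0F) * 4                  # IPv4 header length, bytes
    proto = header[9]                           # 6 = TCP
    src = ".".join(str(b) for b in header[12:16])
    dst = ".".join(str(b) for b in header[16:20])
    sport, dport = struct.unpack_from("!HH", header, ihl)
    return {"src": src, "dst": dst, "sport": sport,
            "dport": dport, "proto": proto, "bytes": total_len}
```

A real meter would capture headers from the VM's network interface and stream these summaries to the analytics engine; the parsing step above is the part that keeps the data volume small relative to full packet capture.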
The goal is to analyze all elements of what Boundary CEO Gary Read called "application chatter," the queries an application makes to the database server or the messages it sends to the Web server.
If that sounds like a lot of data plugged into the analytics engine, in many cases, it is. Boundary is available free to users generating 1 GB of data a day or less. Data centers that generate up to 5 GB a day pay a $495-a-month subscription fee. That's roughly equivalent to five to eight servers running 25 to 40 virtual machines, said Read. Larger users, generating up to 25 GB a day, pay $1,495 a month.
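The tiers described above map cleanly to a lookup by daily volume. This small sketch (a hypothetical helper, not a Boundary API) encodes them:

```python
def monthly_fee(gb_per_day: float):
    """Return the monthly fee in USD for a given daily data volume,
    per the tiers in the article. Volumes above 25 GB/day aren't
    priced in the article, so None is returned (assumption)."""
    if gb_per_day <= 1:
        return 0        # free tier
    if gb_per_day <= 5:
        return 495      # small data center, ~5-8 servers / 25-40 VMs
    if gb_per_day <= 25:
        return 1495     # larger deployment
    return None         # beyond published tiers: presumably by quote
```

So a shop producing 3 GB a day would pay $495 a month, while one producing 20 GB would pay $1,495.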
In an interview, Read said deploying the C-code-based meters is simple through commonly used deployment systems, such as Puppet Labs' Puppet. It's also non-intrusive to the operation of the application. The goal in generating all the data streams to the analytics engine is to spot when something out of the ordinary is happening with the application and alert a systems administrator.
When different parts of an application are distributed on different servers in the data center, or even located as a service outside the data center, Boundary can still detect and map it as an essential component of the app. A customer's online Boundary dashboard is updated each second from the data streams and graphically shows overall operational status and individual trouble spots.
Read said Boundary has over 1,000 users of the free version of its service and 95 paying customers, including the Joyent cloud service, StumbleUpon, GitHub and Gilt Groupe. One user, Michael Hood, lead engineer at online family tracking and communication service Life360, said being able to see how his firm's applications are performing on Amazon Web Services' cloud is crucial to the business.
Among the service's 35 million users, many are families with children "who depend on the information we provide to manage their day-to-day lives." Life360's apps, for example, repeatedly query the iPhones and Android phones carried by family members, then post their locations on a Google Maps overlay. If the application goes down, Life360 is likely to hear from parents who are depending on those postings.
"When Amazon went down Christmas Eve, we immediately saw comments on social media by the people having trouble using the application," Hood recalled. Short of an outage, problems sometimes develop inside one Amazon availability zone, he noted. "We can route the traffic differently (to another zone) to avoid any performance issues," said Hood. Life360 uses information from Boundary every two months or so to take such defensive actions and guarantee prompt responses to end users, he added.
As one example of the data flows involved, Life360 collects a billion pieces of information a day to feed its applications. Boundary collects information on each transaction in Life360's activity, and in all of its other customers' applications, amounting to 7.5 TB of application performance data a day. The Boundary analytics engine "acts like a prism and splits it up" into information about each separate application, Read said.
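Read's "prism" metaphor describes demultiplexing one mixed firehose of transaction records into per-application streams. A toy version of that step, with hypothetical record fields, looks like this:

```python
from collections import defaultdict

def split_by_app(records):
    """Demultiplex a mixed stream of transaction records into
    per-application buckets, keyed on a hypothetical 'app_id' field.
    An illustrative sketch of the 'prism' behavior Read describes,
    not Boundary's actual pipeline."""
    buckets = defaultdict(list)
    for rec in records:
        buckets[rec["app_id"]].append(rec)
    return dict(buckets)
```

At Boundary's stated scale, 7.5 TB a day works out to roughly 87 MB of incoming data per second on average, so the production system would do this split on streams rather than in-memory lists; the grouping logic is the same.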
The service is not a direct competitor to established application performance monitoring systems such as New Relic or AppDynamics; spokesmen for each participated in Boundary's announcement of enhanced services Tuesday. Boundary can use its data to provide color-coded alerts to managers, and it can remove alerts and problem highlights once they've been resolved, automatically or otherwise.
In one case, managers were alerted to an application generating an abnormal number of network retransmits and found defective hardware in the network. The host of an online gaming system saw below-normal performance of a game server cluster and found one server had been configured with the wrong bandwidth setting on its network interface card.
Boundary would be able to detect an abnormal amount of inbound traffic from a country that the application had previously had little contact with, and alert managers to a possible denial of service attack, said Read.
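Boundary hasn't described its detection logic, but the country-traffic example above amounts to comparing current inbound counts against a historical baseline. A crude stand-in, with hypothetical thresholds, might look like:

```python
def flag_country_anomalies(baseline, current, factor=10.0, floor=1000):
    """Flag countries whose inbound request counts exceed `factor`
    times their historical baseline (or an absolute `floor` when
    there is no baseline at all). The thresholds are illustrative
    assumptions, not Boundary's actual detection rules.

    baseline, current: dicts mapping country code -> request count.
    """
    alerts = []
    for country, count in current.items():
        expected = baseline.get(country, 0)
        if count >= max(expected * factor, floor):
            alerts.append(country)
    return alerts
```

A country the application "had previously had little contact with" has a near-zero baseline, so even a modest surge trips the floor threshold and surfaces as a possible denial-of-service alert.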