Busier Networks Create Smoother Traffic Flow, Says Bell Labs

Bell Labs' research contradicts assumptions about flow of data packets through busy networks.
New research contradicts previous assumptions about how packets of data flow through busy Internet exchanges--and may lead to less expensive equipment, says Bell Laboratories, the research arm of Lucent Technologies Inc.

Packets traveling on low-traffic networks tend to arrive in irregular clumps--like 10 cars reaching a stop sign simultaneously instead of at regularly spaced intervals. Conventional wisdom held that the busiest networks would experience even more of this "bursty" behavior, requiring larger packet buffers to help Internet routers manage traffic volatility. But Bell Labs' research shows the opposite is true: High-capacity networks have more regular traffic. The intermingling of more packets tends to create a smoother data flow, says Bill Cleveland, who worked on the project for Bell Labs.

Cleveland's team has been collecting and analyzing data for the past two years, using busy network exchanges at Bell Labs and several universities. Still, it's not the volume of data but the way it was analyzed that makes Bell's study different. Most previous traffic analysis has focused on byte counts and packet counts in fixed intervals of time--such as the arrival of packets in 100-millisecond intervals, says Cleveland. Bell Labs' S-Net software allowed the team to look at packet arrival times and sizes individually--the variables that matter most for traffic engineering, he says. "If all I can see is how many packets arrive in one time interval, I won't understand how that router is really performing as it handles each and every packet." The implications: rethinking the need for larger packet buffers and possibly lowering the cost of engineering the Internet, he says. "It should lead to more efficient equipment design and more effective traffic engineering."

Not everyone agrees. "As more networks come into play, the network topology is going to change," says Steve Coya, executive director of the Internet Engineering Task Force, an international group that sets standards for the Internet. It's helpful to have research that confirms that Internet service providers have done a good job provisioning around the potential bottlenecks, he says. But he questions the research's long-term validity, because the nature of Internet traffic will probably be vastly different in two or three years. "The industry is so wacko that planning for products based on research that's just conducted is basically putting your eggs in a basket and hoping no one drops it."

But Cleveland says the research is more than just a snapshot of the Internet today. "We can explain why this is occurring--it's because packets are being intermingled," he says. High-traffic networks will continue to operate without bursty behavior, Cleveland says, as long as packets mingle equally with each other on a data link and as long as no single connection can assume control of the link.
