"It's what made the difference," says Alan Paller, director of research at the SANS Institute.
At least six of the 13 root servers were attacked, according to the report, but only two of them were noticeably affected: g-root, which is run by the U.S. Department of Defense and is located in Ohio, and l-root. Neither one was using Anycast.
The report notes that the engineers who run the root servers had made a deliberate decision not to deploy Anycast on every server, leaving it off of g-root and l-root on purpose. "Common practice among Internet engineers across the globe is to make sure that the systems they use vary so that there is no single point of failure," the report states. "For example, many of the normal DNS servers that companies and even individuals run are built on top of Windows, but others are on Linux, some are on Mac OS X, some are on NetWare, Unix, OS/2 and so on... If everyone ran the same software on the same operating system, there is the risk that a specific security hole could take down the whole system. Running a wide variety hugely reduces that risk."
However, the report goes on to say that now that Anycast has proved itself so well, it will be deployed on all of the root servers.
Sergey Bratus, a senior research associate with the Institute for Technology Studies at Dartmouth College, noted that the root engineers had needed to see how Anycast would hold up under a major attack. Once the technology proved itself, they became confident enough to use it everywhere.
The report goes on to note that Anycast wasn't the only thing keeping the attack at bay and Internet traffic flowing without pause. For one thing, engineers in charge of the roots around the globe maintained fairly constant communication, sharing information about the attack and the ways they were battling it. "It's the only way they can act," says Paller. "If they don't have data about what's happening elsewhere, how will they know how to act? They've got to communicate."
The engineers also employed two different defenses.
First, they tried to absorb the extra queries by adding bandwidth as the attack was coming in, making room for legitimate queries to get through the deluge of fraudulent ones. At the same time, they looked for patterns in the incoming malicious queries in an effort to filter them out, cutting the attack off at the knees.
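To give a rough sense of what pattern-based filtering means in practice, here is a minimal, purely illustrative sketch in Python. It assumes made-up thresholds (a per-source query limit and an unusually long query-name length) as stand-ins for whatever signatures the operators actually identified; real root-server filtering happens on routers and firewalls against live traffic, not in a script like this.

```python
from collections import Counter

# Hypothetical thresholds for illustration only -- not the operators' actual rules.
PER_SOURCE_QUERY_LIMIT = 1000   # assumed: max queries tolerated from one source
SUSPICIOUS_QNAME_LENGTH = 40    # assumed: abnormally long, random-looking names

def filter_queries(queries):
    """Keep queries that don't match crude attack patterns.

    `queries` is a list of (source_ip, query_name) pairs.
    """
    per_source = Counter(src for src, _ in queries)
    kept = []
    for src, qname in queries:
        # Pattern 1: a single source sending far more queries than normal.
        if per_source[src] > PER_SOURCE_QUERY_LIMIT:
            continue
        # Pattern 2: abnormally long query names typical of generated junk.
        if len(qname) > SUSPICIOUS_QNAME_LENGTH:
            continue
        kept.append((src, qname))
    return kept

if __name__ == "__main__":
    sample = [
        ("192.0.2.1", "example.com."),              # looks legitimate
        ("203.0.113.9", "x" * 60 + ".invalid."),    # matches the long-name pattern
    ]
    print(filter_queries(sample))
```

The point of the sketch is simply that once a recognizable pattern emerges in the bogus traffic, dropping anything that matches it lets the legitimate queries through while the attack traffic never reaches the servers.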