One of the biggest transitions in the history of the Internet -- perhaps of networking itself -- has occurred over the last couple of years, and yet nary a peep has been heard about it. Perhaps that's because the transition took place in phases, making it harder to detect. Let me explain.
The Internet has transitioned from one of vertically accessed stored content to horizontally cached data. In the data world, the terms "vertical" or "north/south" are often used to describe traffic traveling from a consumer/user/subscriber to a content provider/service provider/data center operator -- for example, someone on a PC using a home Internet connection via an ISP to access content stored on a server owned by a content provider. For a long time, a lot of focus was put on optimizing this vertical connection to improve the end-user experience.
But a funny thing happened as the Internet grew larger and larger: It went horizontal.
IT still tends to think of the Internet as vertical. When we consider, say, cloud storage accounts from the likes of iCloud or Dropbox, we imagine that an end-user's 5 GB of storage exists as a dedicated partition on a hard drive, on a server, somewhere in the cloud provider's data center. Today, nothing could be further from the truth.
The reason for the shift is pretty simple. Applications like Google Maps grew faster than indexed, centralized databases could keep up. A typical Google search that takes less than one second could take seven minutes on a centralized database. So the systems that run the Internet quickly migrated to non-centrally indexed, horizontally distributed platforms like Hadoop. These distributed systems fueled the growth of cloud and social media services to astronomical heights. Their strength is that they can grow as fast as you can add servers to your data center.
Their weakness is that to grow, you need to keep adding servers to your data center.
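The horizontal pattern described above is easiest to see in the MapReduce model that Hadoop popularized: each server processes only its own shard of the data, so capacity grows simply by adding shards and servers. Here's a minimal, illustrative sketch of that pattern (the shard data is made up for the example):

```python
from collections import Counter
from functools import reduce

# Illustrative MapReduce-style word count. Each "server" maps its own
# shard independently -- no central index -- so adding servers adds
# capacity linearly.

def map_shard(lines):
    """Each server counts words in its local shard only."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

def reduce_counts(a, b):
    """Partial results merge pairwise; no single node touches all the data."""
    return a + b

shards = [
    ["the cloud grew", "the cloud"],   # shard held by server 1
    ["grew and grew"],                 # shard held by server 2
]
total = reduce(reduce_counts, (map_shard(s) for s in shards))
print(total["grew"])  # 3
```

Each map step is embarrassingly parallel, which is exactly why these systems scale horizontally -- and why their appetite for servers (and the bandwidth between them) never stops growing.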
Eventually, the cloud outgrew the cloud. Individual data centers could only get so big. They needed to be interconnected in such a way that they operated like a single data center. You do that by linking all the clusters within a data center and then interconnecting those data centers with big, fat data pipes. You make it work with load balancing, virtual machine migration, data replication, and localized caching -- in other words, you fix the vertical problem with lots of horizontal bandwidth.
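One common building block behind techniques like localized caching is consistent hashing: it lets any data center map a piece of data to the same cache server, and adding a server moves only a small slice of the keys instead of reshuffling everything. A minimal sketch, with hypothetical server names (real DCO schemes vary):

```python
import hashlib
from bisect import bisect

# Illustrative consistent-hash ring -- one common way to spread cached
# data across many servers. Names and parameters here are hypothetical.

def h(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, vnodes=100):
        # Each server gets many virtual points on the ring to even out load.
        self.ring = sorted((h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self.keys = [k for k, _ in self.ring]

    def node_for(self, key):
        # Walk clockwise to the first virtual point at or after the key's hash.
        i = bisect(self.keys, h(key)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["cache-a", "cache-b", "cache-c"])
# Adding cache-d remaps only the keys between its ring neighbors:
bigger = HashRing(["cache-a", "cache-b", "cache-c", "cache-d"])
moved = sum(ring.node_for(f"k{i}") != bigger.node_for(f"k{i}") for i in range(1000))
print(moved)  # roughly a quarter of the keys, not a full reshuffle
```

The payoff is that growth stays horizontal: servers can join or leave the ring without a global reorganization -- the same property the distributed databases rely on.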
Today, data center operators (DCOs) are very open about their traffic metrics. One DCO might say the ratio of outside vertical traffic to horizontal private traffic on its WAN is 1:4 -- that is, it carries four times as much internal traffic as in/out traffic from customers. Another DCO pegs horizontal traffic on its internal LAN at more than 10,000 times its vertical traffic. Tracing the internal server traffic caused by a single Facebook "like" reveals that within milliseconds, hundreds of servers across the globe are hit.
I find it somewhat amusing that today, the virtual "cloud" that was so hyped years ago now actually does resemble a cloud. All the top DCOs have built compute, storage, and networking resources that are globally interconnected to behave like a single entity.
IT teams need to take this into account as their organizations depend more and more on software and infrastructure services. Ask how your providers plan to optimize the performance of the global cloud. For example, one DCO said in June that its cloud performance is being severely constrained by the amount of global horizontal bandwidth that is available. In other words, it can't beg, borrow, or steal enough global transport to achieve the full potential of its compute clusters, which it said could be three to five times more powerful given more bandwidth.
In response, WAN transport companies are focusing on building bigger, fatter pipes, even when the tradeoff is dumber transport with fewer features. Ethernet switch vendors are focusing on flatter, more horizontal architectures that can interconnect 10,000, 100,000, or even 1 million servers with the fewest layers possible -- two being the end goal.
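That two-layer end state is the leaf-spine design: every leaf (top-of-rack) switch connects to every spine switch, so capacity is bounded by switch port counts. A back-of-the-envelope sizing sketch, with hypothetical port counts (real switch radixes vary):

```python
# Back-of-the-envelope sizing for a two-tier leaf-spine fabric, the
# "two layers" end goal described above. Port counts are hypothetical.

def leaf_spine_capacity(leaf_ports, spine_ports, uplinks_per_leaf):
    """Max servers in a two-tier fabric: server-facing leaf ports
    times the number of leaves the spines can attach."""
    servers_per_leaf = leaf_ports - uplinks_per_leaf  # the rest face servers
    max_leaves = spine_ports  # each spine port connects to one leaf
    return servers_per_leaf * max_leaves

# e.g. 48-port leaves with 6 uplinks each, 64-port spines:
print(leaf_spine_capacity(48, 64, 6))  # 42 * 64 = 2688 servers
```

Getting from a few thousand servers toward the 100,000-plus range in only two layers is exactly why vendors are racing toward ever higher-radix switches -- otherwise a third layer creeps back in.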
Some DCOs are even questioning the need for Ethernet as the go-between. If you think about it, all they need is to interconnect clusters of compute/storage, which speak PCIe. Is translating everything to Ethernet and back again really the best way to do this? It currently takes a whole lot of NICs, switches, and routers to perform the interconnection with Ethernet.
While the transition from vertical to horizontal architectures and east-west traffic dominating north-south might have been subtle and stealthy, the impact on the industry will not be as quiet. Start asking questions about how your providers plan to get horizontal.