Google has given everyone a rare look inside its server rooms and detailed how it keeps up with the massive growth of its search business. In a blog post, Google Fellow Amin Vahdat said that the company’s current network, Jupiter, can deliver a petabit per second of total throughput. That means each of its 100,000 servers can communicate with any other server at 10Gb/s, a hundred times the capacity of the first-generation network it created in 2005. To get there, Google did something surprising — it built its own hardware from off-the-shelf parts.
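As a quick sanity check, the two figures quoted above are consistent with each other: 100,000 servers each pushing 10Gb/s works out to exactly one petabit per second of aggregate bandwidth. A minimal sketch of that arithmetic (illustrative only; the numbers are the article's):

```python
# Back-of-the-envelope check of the Jupiter throughput figures.
servers = 100_000        # total servers quoted in the article
per_server_gbps = 10     # per-server bandwidth, in gigabits per second

total_gbps = servers * per_server_gbps
total_pbps = total_gbps / 1_000_000  # 1 petabit = 1,000,000 gigabits

print(total_pbps)  # → 1.0, i.e. one petabit per second of total throughput
```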
It was back in 2004 that Google decided to move away from products by established companies like Cisco and build its own hardware using off-the-shelf chips from companies like Qualcomm. The aim was to put less onus on the hardware and more on software, something that’s impossible with commercial switches. Vahdat said hardware switching is "manual and error prone… and could not scale to meet our needs." Software-defined switching was not only cheaper but easier to manage remotely — critical for a company whose bandwidth requirements have doubled (or more) every year.
Google considers its servers a key advantage over rivals like Microsoft and Amazon, so why is it talking now? For one, it’s recently started selling its cloud services to other businesses, so it’s keen to brag about them. It’s also being pragmatic — its data requirements are now so huge that it needs academic help to solve configuration and management challenges. That’s why it’s presenting a paper on Jupiter at the SIGCOMM networking conference in London, and if you’re in the mood for a (much) deeper dive, you can read it here.
Tags: DataCenters, google, GoogleCloud, Growth, Servers, Switches