
Facebook's Architecture: Built to Scale



It turns out the original source is not the above blog post, but Quora:

PHP + HipHop, Thrift, Java, MySQL, Memcached, Cassandra, Hadoop's HBase, Hive, Scribe, Scribe-HDFS, BigPipe, Varnish, Haystack, Cell, Erlang.
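
Since memcached sitting in front of MySQL is the backbone of that stack, here's a minimal sketch of the cache-aside pattern it implies, in Python with the pymemcache client. The key scheme, TTL, and load_user_from_mysql helper are illustrative assumptions, not Facebook's actual code.

    import json
    from pymemcache.client.base import Client

    cache = Client(("localhost", 11211))
    TTL_SECONDS = 300  # assumed TTL; real values are workload-dependent

    def load_user_from_mysql(user_id):
        # Stand-in for the real MySQL query.
        return {"id": user_id, "name": "example"}

    def get_user(user_id):
        key = "user:%d" % user_id
        cached = cache.get(key)  # 1. try the cache first
        if cached is not None:
            return json.loads(cached)
        user = load_user_from_mysql(user_id)  # 2. on a miss, hit the database
        cache.set(key, json.dumps(user), expire=TTL_SECONDS)  # 3. backfill the cache
        return user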

Facebook runs over 60,000 servers.

Their Oregon datacenter is built entirely on self-designed hardware, aka the Open Compute Project.

Numbers: 300 TB of data is stored in Memcached processes. Their Hadoop and Hive cluster has 3,000 servers, each with 8 cores, 32 GB of RAM, and 12 TB of disk, for a total of 24,000 cores, 96 TB of RAM, and 36 PB of disk.
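
Those cluster totals follow straight from the per-node specs; a quick back-of-the-envelope check:

    NODES = 3000
    cores = NODES * 8            # 24,000 cores
    ram_tb = NODES * 32 / 1000   # 96 TB of RAM
    disk_pb = NODES * 12 / 1000  # 36 PB of disk
    print(cores, ram_tb, disk_pb)  # 24000 96.0 36.0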

Scaling: 100 billion hits per day, 50 billion photos, 3 trillion objects cached, and 130 TB of logs per day, as of July 2010.
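
Putting those daily figures on a per-second footing (assuming an even spread over the day, which real traffic is not):

    SECONDS_PER_DAY = 86_400
    hits_per_sec = 100e9 / SECONDS_PER_DAY           # ~1.16 million hits/sec
    log_gb_per_sec = 130e12 / SECONDS_PER_DAY / 1e9  # ~1.5 GB/s of logs
    print(round(hits_per_sec), round(log_gb_per_sec, 1))  # 1157407 1.5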

Quite the beast! It amazes me that this has been built, managed, and maintained by ~3,000 employees. It would be nice to know how the system evolved from PHP/MySQL to what it is today, and when certain technologies were brought in to handle the issues and challenges they faced.

Actually, you'd probably enjoy Scaling to 500 million users and beyond by Robert Johnson on the Facebook blog.

He's brilliant and was there for a lot of the scaling decisions.

You can read the whole evolution by scanning the Facebook Engineering Blog, too.
