In this talk, Bill Jia, PhD, Manager of Performance and Capacity Engineering at Facebook, explores the infrastructure behind the platform's user base, which reached 1.15 billion monthly active users in mid-2013. With 84% of active users outside the U.S. and billions of daily interactions, optimizing data centers and server technologies is critical. The presentation covers hardware designs, cost efficiencies, and future directions for improving Facebook's operations while managing heat and energy efficiency for sustainability.
Facebook: Infrastructure at Scale Using Emerging Hardware
Bill Jia, PhD
Manager, Performance and Capacity Engineering
Oct 8th, 2013
Facebook Scale
• 84% of monthly active users are outside of the U.S. (as of Q2'13)
• Data centers in 5 regions
Facebook Stats
• 1.15 billion users (6/2013)
• ~700 million people use Facebook daily
• 350+ million photos added per day (1/2013)
• 240+ billion photos
• 4.5 billion likes, posts and comments per day (5/2013)
Cost and Efficiency
• Infrastructure spend in 2012 (from our 10-K): "...$1.24 billion for capital expenditures related to the purchase of servers, networking equipment, storage infrastructure, and the construction of data centers."
• Efficiency work has been a top priority for several years
Architecture
[Diagram: three cluster types]
• Front-End Cluster: Web, Ads (few racks), Multifeed (few racks), Cache, other small services
• Service Cluster: Search, Photos, Msg, Others
• Back-End Cluster: UDB, ADS-DB, Tao Leader
Life of a "hit"
[Timeline diagram: a request starts at the front-end web tier, which queries memcache (MC); the back end fans out to feed aggregators, ads, and the database, each fronted by its own memcache tier, before the request completes.]
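A minimal sketch of that flow in Python, with an in-process dict standing in for the memcache tier. The service names (fetch_feed, fetch_ads) are illustrative, not Facebook's actual APIs:

```python
# Sketch of the "life of a hit" timeline: a front-end request fans out
# to cache-backed services in parallel, falling through to the back end
# only on a cache miss. All names here are hypothetical.
from concurrent.futures import ThreadPoolExecutor

cache = {}  # stand-in for a memcache (MC) tier

def cached_fetch(key, loader):
    """Check the cache first; on a miss, load from the back end and fill."""
    if key in cache:
        return cache[key]
    value = loader(key)  # e.g. a database or service RPC
    cache[key] = value
    return value

def fetch_feed(user_id):
    return cached_fetch(("feed", user_id), lambda k: f"feed rows for user {k[1]}")

def fetch_ads(user_id):
    return cached_fetch(("ads", user_id), lambda k: f"ads for user {k[1]}")

def handle_request(user_id):
    # The front end issues the back-end fan-out in parallel, then assembles
    # the page once every leg has returned ("request completes").
    with ThreadPoolExecutor() as pool:
        feed = pool.submit(fetch_feed, user_id)
        ads = pool.submit(fetch_ads, user_id)
        return {"feed": feed.result(), "ads": ads.result()}

print(handle_request(42))
```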
Five Server Types
• Advantages:
  • Volume pricing
  • Re-purposing
  • Easier operations: simpler repairs, drivers, DC headcount
  • New servers allocated in hours rather than months
• Drawbacks:
  • 40 major services and 200 minor ones; not all fit perfectly
  • The needs of a service change over time
Efficiency at FB
• Data Centers: heat management, electrical efficiency & operations
• Servers: "vanity free" design & supply chain optimization
• Software: horizontal wins like HPHP/HHVM, cache, db, web & service optimizations
Next Opportunities?
• Disaggregated Rack
  • Better component/service fit
  • Extending component useful life
• Developing New Components
  • Alternatives in CPU, RAM, Disk & Flash
A rack of news feed servers
[Diagram: Type-6 servers behind a leaf (L) / aggregator (A) network switch fabric]
• Compute: 80 processors, 640 cores
• RAM: 5.8 TB
• Storage: 80 TB
• Flash: 30 TB
The application lives on a rack of equipment, not a single server.
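A back-of-the-envelope check of those rack totals, assuming 40 Type-6 servers per rack with two 8-core processors each (the per-server figures are assumptions; only the rack totals come from the slide):

```python
# Rack aggregates implied by assumed per-server specs.
servers = 40
processors = servers * 2        # 80 processors
cores = processors * 8          # 640 cores
ram_tb = servers * 0.144        # ~5.8 TB of RAM (~144 GB per server)
disk_tb = servers * 2           # 80 TB of storage
flash_tb = servers * 0.75       # 30 TB of flash
print(processors, cores, round(ram_tb, 1), disk_tb, flash_tb)
```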
Compute: Standard Server
• N processors
• 8 or 16 DIMM slots
• No hard drive; small flash boot partition
• Big NIC: 10 Gbps or more
RAM Sled
• Hardware: 128GB to 512GB; compute via FPGA, ASIC, mobile processor or desktop processor
• Performance: 450k to 1 million key/value gets/sec
• Cost: excluding RAM, $500 to $700, or a few dollars per GB
Storage Sled (Knox)
• Hardware: 15 drives; replaces the SAS expander with a small server
• Performance: 3k IOPS
• Cost: excluding drives, $500 to $700, or less than $0.01 per GB
Flash Sled
• Hardware: 175GB to 18TB of flash
• Performance: 600k IOPS
• Cost: excluding flash, $500 to $700
(Cost and performance arithmetic for all three sleds is sketched below.)
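A rough sanity check of the sled figures. The $600 midpoint of the quoted $500-$700 enclosure cost and the per-drive assumptions (4 TB drives at ~200 IOPS each) are ours; the targets (3k IOPS, less than $0.01/GB, "a few dollars per GB") come from the slides:

```python
# Enclosure cost spread over assumed media capacities.
enclosure_usd = 600.0  # midpoint of the quoted $500-$700, excluding media

# RAM sled at the top of its range: 512 GB. The enclosure alone is
# ~$1.17/GB, consistent with "a few dollars per GB" once DRAM is added.
ram_gb = 512
print(f"RAM sled enclosure: ${enclosure_usd / ram_gb:.2f}/GB")

# Storage sled (Knox): 15 drives, assumed 4 TB and ~200 IOPS apiece.
drives, drive_gb, drive_iops = 15, 4000, 200
print(f"Knox: {drives * drive_iops} IOPS, "
      f"${enclosure_usd / (drives * drive_gb):.4f}/GB")  # 3000 IOPS, ~$0.01/GB
```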
Three Disaggregated Rack Wins
• Server/service fit across services
• Server/service fit over time
• Longer useful life through smarter hardware refreshes
Longer Useful Life
• Today, servers are typically kept in production for about 3 years
• With disaggregated rack:
  • Compute: 3 to 6 years
  • RAM sled: 5 years or more
  • Disk sled: 4 to 5 years, depending on usage
  • Flash sled: 6 years, depending on write volume
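To see why longer refresh cycles matter, here is the annualized-capex arithmetic under purely hypothetical component prices (only the lifetimes come from the slide):

```python
# Same hardware dollars amortized over longer, per-component lifetimes.
def annualized(capex_usd, years):
    return capex_usd / years

# Hypothetical $3,000 monolithic server replaced every 3 years...
monolithic = annualized(3000, 3)              # $1,000/year
# ...vs. hypothetical disaggregated parts on their own refresh schedules.
disaggregated = (annualized(1500, 6)   # compute, 6 years
                 + annualized(800, 5)  # RAM sled, 5 years
                 + annualized(400, 5)  # disk sled, 5 years
                 + annualized(300, 6)) # flash sled, 6 years
print(monolithic, disaggregated)              # 1000.0 vs. 540.0 per year
```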
Disaggregated Rack
• Strengths:
  • Volume pricing, serviceability, etc.
  • Custom configurations: hardware evolves with the service
  • Smarter technology refreshes
  • Speed of innovation
• Potential issues:
  • Physical changes required
  • Interface overhead
Approximate Win Estimates
Conservative assumptions show 12% to 20% opex savings; more aggressive assumptions suggest 14% to 30%.
* These are reasonable estimates of the savings that may be possible across several use cases.
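What those percentages mean on a concrete, entirely hypothetical budget:

```python
# Dollar savings implied by each range on $100M of annual opex
# (the budget figure is illustrative, not from the talk).
opex = 100_000_000
for label, low, high in [("conservative", 0.12, 0.20),
                         ("aggressive", 0.14, 0.30)]:
    print(f"{label}: ${opex * low:,.0f} to ${opex * high:,.0f} per year")
```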
20 Years of the Same...
• The servers we deploy in the data center are pretty much identical to a 386 computer from 20 years ago
• New technologies along the way:
  • Math coprocessor
  • SMP: N processors per server
  • Multi-core processors
  • GPUs: vector math
  • Flash memory
• The exponential improvement of CPU, RAM, disk and NIC density has been incredible
CPU
Today: Server CPUs are more compute-dense versions of desktop processors. Mobile phones are driving development of lower-power processors; system-on-a-chip (SoC) processors for that market integrate several modules into a single package to minimize energy consumption and board real estate.
Next: Smaller server CPUs derived from mobile processors may become compelling alternatives.
RAM
Today: Standard DDR RAM modules were developed for desktop and server machines. DDR is difficult to beat on a cost/performance basis with alternative technologies.
Next: Lower-latency memory technology is interesting as "near RAM." Slower memory (between RAM and flash) could be accommodated as "far RAM."
Storage (ZB of data; 1 ZB = 1 billion TB)
Today: Photo and data storage uses 3.5" disk drives almost exclusively. Applications and file systems expect low-latency reads and writes and unlimited write cycles; by changing those assumptions, optical and solid-state media may become viable.
Next: Cold, high-capacity storage with lower power density and slower performance.
Flash (solid-state storage)
Today: Flash SSDs are commonly used in databases and applications that need low-latency, high-throughput storage.
• SLC: premium
• MLC (high endurance): expensive, performant, ~10K write cycles
• MLC: less expensive, performant, lower endurance (~3K write cycles)
• TLC: cheap, performant (100-300 write cycles)
• WORM TLC: extremely cheap (1 write cycle; write once, read many)
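Why the endurance classes matter: a rough wear-out model relating rated program/erase cycles to drive lifetime. The capacity, write rate, and write-amplification figures below are assumptions; only the cycle counts come from the slide:

```python
# Days until a drive exhausts its rated P/E cycles at a given write rate.
def lifetime_days(capacity_gb, pe_cycles, writes_gb_per_day, write_amp=2.0):
    # Total host writes the media can absorb, discounted by write amplification.
    total_host_writes_gb = capacity_gb * pe_cycles / write_amp
    return total_host_writes_gb / writes_gb_per_day

# Assumed 1 TB drive absorbing 500 GB of host writes per day.
# (WORM TLC is write-once, so this model does not apply to it.)
for name, cycles in [("high-endurance MLC", 10_000),
                     ("MLC", 3_000),
                     ("TLC", 300)]:
    print(name, round(lifetime_days(1000, cycles, 500)), "days")
```

At these assumed rates, plain MLC lasts roughly eight years while TLC wears out in under a year, which is why TLC is interesting mainly for cold or write-once data.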
Network Interfaces
Today: 10 Gbps NICs are becoming standard in servers; 40 Gbps NICs are emerging and already in production.
Next: By focusing on rack distances (3 meters) and allowing proprietary technologies, 100 Gbps could be affordable in a few years.
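One way to see why faster rack-distance links matter for disaggregation: the serialization delay for a 1 MB payload (an illustrative size) at each NIC speed mentioned:

```python
# Time to put 1 MB on the wire at each link speed, ignoring
# propagation and protocol overhead.
payload_bits = 1_000_000 * 8
for gbps in (10, 40, 100):
    micros = payload_bits / (gbps * 1e9) * 1e6
    print(f"{gbps} Gbps: {micros:.0f} microseconds")  # 800, 200, 80
```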