
Data Center Scale Computing


Presentation Transcript


  1. Data Center Scale Computing "If computers of the kind I have advocated become the computers of the future, then computing may someday be organized as a public utility just as the telephone system is a public utility. . . . The computer utility could become the basis of a new and important industry." John McCarthy, MIT centennial celebration (1961). Presentation by: Ken Bakke, Samantha Orogvany, John Greene

  2. Outline • Introduction • Data center system components • Design and storage considerations • Data center power supply • Data center cooling • Data center failures and fault tolerance • Data center repairs • Current challenges, research, and trends • Conclusion

  3. Data Center vs. Warehouse-Scale Computer • Warehouse-scale computer: designed to run massive internet applications • Individual applications run on thousands of computers • Homogeneous hardware and system software • Central management of a common resource pool • Facility and computer hardware are designed as an integrated whole • Traditional data center: provides colocated equipment for a wide variety of customers • Consolidates heterogeneous computers • Binaries typically run on a small number of machines • Resources are partitioned and separately managed • Facility and computing resources are designed separately • Customers share security, environmental, and maintenance resources

  4. Need for Warehouse-Scale Computers • Renewed focus on client-side consumption of web resources • Constantly increasing numbers of web users • Constantly expanding amounts of information • Desire for rapid response for the end user • Focus on cost reduction when delivering massive applications • Increased interest in Infrastructure as a Service (IaaS)

  5. Performance and Availability Techniques • Replication • Reed-Solomon codes • Sharding (see the sketch below) • Load balancing • Health checking • Application-specific compression • Eventual consistency • Centralized control • Canaries • Redundant execution and tail tolerance
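Of these techniques, sharding is the easiest to illustrate compactly. The following is a minimal sketch of hash-based sharding; the key format and shard count are illustrative choices rather than anything from the deck, and production systems layer replication and rebalancing (e.g., consistent hashing) on top.

```python
import hashlib

def shard_for(key: str, num_shards: int) -> int:
    """Map a record key to one of num_shards partitions using a stable hash."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# Example: spread hypothetical user records across 64 index shards.
print(shard_for("user:12345", num_shards=64))
```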

  6. Major System Components • A typical server has 4 CPU sockets with 8 dual-threaded cores each, yielding 32 cores (64 hardware threads) • A typical rack holds 40 servers plus a 1 or 10 Gbps Ethernet switch • A cluster comprises a cluster switch and 16-64 racks • A cluster may therefore contain tens of thousands of processing threads (see the arithmetic below)
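As a back-of-envelope check on those numbers, the arithmetic below multiplies the per-server thread count through rack and cluster size; the figures are the ones quoted on this slide, with the upper end of the 16-64 rack range assumed.

```python
# Cluster sizing under this slide's figures (a rough sketch, not measured data).
cores_per_server = 4 * 8                    # 4 sockets x 8 cores each
threads_per_server = cores_per_server * 2   # dual-threaded cores -> 64 threads
servers_per_rack = 40
racks_per_cluster = 64                      # upper end of the 16-64 range

threads_per_cluster = threads_per_server * servers_per_rack * racks_per_cluster
print(threads_per_cluster)                  # 163,840 hardware threads
```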

  7. Low-End Servers vs. SMP [Figure: performance advantage of a cluster built from large SMP nodes (128-core) over a cluster with the same total core count built from low-end nodes (four-core SMP), for clusters of varying size] • Communication latency within an SMP node is roughly 1000 times lower than going over the network • The SMP advantage shrinks for applications too large to fit on a single server

  8. Brawny vs. Wimpy • Advantages of wimpy computers • Large multicore CPUs carry a price premium of 2-5x compared with multiple smaller CPUs • Memory- and I/O-bound applications do not take advantage of faster CPUs • Slower CPUs are more power efficient • Disadvantages of wimpy computers • Increasing parallelism is programmatically difficult • Programming costs increase • Networking requirements increase • Smaller tasks and smaller machines create load-balancing difficulties • Amdahl's Law limits the achievable speedup (see the sketch below)
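The Amdahl's Law point is worth making concrete: once any serial fraction remains, adding more wimpy cores hits a hard ceiling. A minimal sketch, with the 95% parallel fraction chosen purely for illustration:

```python
def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Upper bound on speedup when only parallel_fraction of the work scales."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / workers)

# Even with 95% of the work parallel, the speedup can never exceed 20x.
for n in (4, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 1))   # 3.5x, 15.4x, 19.6x
```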

  9. Design Considerations • Software design and improvements can be made to align with architectural choices • Resource requirements and utilization can be balanced across all applications • Spare CPU cycles can be used for compute-intensive applications • Spare storage can be used for archival purposes • Fungible resources are more efficient • Workloads can be distributed to fully utilize servers • Focus on cost-effectiveness • Smart programmers may be able to restructure algorithms to match a less expensive design

  10. Storage Considerations • Private data • Local DRAM, SSD, or disk • Shared-state data • High throughput for thousands of users • Robust performance that tolerates errors • Unstructured storage (Google's GFS) • A master plus thousands of "chunk" servers • Utilizes every system with a disk drive • Cross-machine replication for fault tolerance • Structured storage • Bigtable maps (row key, column key, timestamp) to an uninterpreted byte array (see the sketch below) • Trade-offs favor high performance and massive availability • The eventual-consistency model leaves applications to manage consistency issues
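A toy, in-memory model of that Bigtable mapping may help; the row and column names follow the published Bigtable paper's web-table example and are illustrative only, and the real system adds column families, garbage collection, and distribution across tablet servers.

```python
# Sparse map from (row_key, column_key, timestamp) to an uninterpreted byte string.
table: dict[tuple[str, str, int], bytes] = {}

def put(row: str, column: str, timestamp: int, value: bytes) -> None:
    table[(row, column, timestamp)] = value

def latest(row: str, column: str) -> bytes | None:
    """Return the most recent version of a cell, or None if it was never written."""
    versions = [(ts, v) for (r, c, ts), v in table.items() if r == row and c == column]
    return max(versions)[1] if versions else None

put("com.cnn.www", "contents:html", 3, b"<html>...</html>")
put("com.cnn.www", "contents:html", 5, b"<html>newer</html>")
print(latest("com.cnn.www", "contents:html"))   # b'<html>newer</html>'
```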

  11. Google File System

  12. WSC Network Architecture • Leaf bandwidth • Bandwidth between servers within a rack • Typically provided by a commodity switch • Easily increased by adding ports or using faster ports • Bisection bandwidth • Bandwidth between the two halves of a cluster • Matching leaf bandwidth requires as many uplinks into the fabric as there are links within a rack (see the sketch below) • Because distances are longer, optical interfaces are required
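The arithmetic below contrasts the uplink count needed for full bisection bandwidth with what an oversubscribed rack actually provisions; it assumes the 40-server rack and 10 Gbps links from slide 6 and the 4:1 ratio from slide 14, all as illustrative figures.

```python
# Leaf vs. uplink bandwidth for one rack (a rough sketch, not measured data).
servers_per_rack = 40
link_gbps = 10

leaf_bandwidth_gbps = servers_per_rack * link_gbps          # 400 Gbps inside the rack
uplinks_for_full_bisection = servers_per_rack               # one uplink per server link
oversubscription_ratio = 4                                   # common 4:1 case
uplinks_provisioned = servers_per_rack // oversubscription_ratio   # only 10 uplinks

print(leaf_bandwidth_gbps, uplinks_for_full_bisection, uplinks_provisioned)
```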

  13. Three-Stage Topology Required to maintain the same throughput as a single large switch.

  14. Network Design • Oversubscription ratios of 4-10 are common, limiting network cost per server • Offload to special-purpose networks • Centralized management

  15. Service-Level Response Times • Consider the probability that a request is slow when a server's 99th, 99.9th, or 99.99th percentile latency exceeds 1 s, as a function of how many servers each request must touch (see the sketch below) • Selective replication is one mitigating strategy
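The standard fan-out estimate behind that slide fits in a few lines; the fan-out of 100 leaf servers is an assumed value for illustration.

```python
# Probability that a user request is slow when it fans out to `fanout` leaf servers,
# each independently slow (latency > 1 s) with probability p.
def prob_request_slow(p_leaf_slow: float, fanout: int) -> float:
    return 1.0 - (1.0 - p_leaf_slow) ** fanout

for p in (0.01, 0.001, 0.0001):               # 99th, 99.9th, 99.99th percentiles
    print(p, round(prob_request_slow(p, fanout=100), 3))
# With 100 leaves, a 1-in-100 slow leaf makes ~63% of requests slow.
```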

  16. Power Supply Distribution • Uninterruptible power systems (UPS) • A transfer switch chooses the active power input, either the utility feed or a generator • After a utility failure, the transfer switch detects generator power and switches over within 10-15 seconds • The UPS provides energy storage to bridge the gap between the utility failure and the moment the generators can carry the full load • It also conditions the incoming AC feed, removing voltage spikes and sags

  17. Example of a Power Distribution Unit • A traditional PDU • Takes in the power output from the UPS • Uses transformers to regulate and distribute power to the servers • Typically handles 75-225 kW • Provides redundancy by switching between two power sources

  18. Examples of Power Distribution • Facebook's power distribution system • Designed to increase power efficiency by reducing energy loss to about 15% • Eliminates the UPS and PDU and adds an on-board 12 V battery to each cabinet

  19. Power Supply Cooling Needs • Air-flow considerations • Fresh-air cooling ("opening the windows") • Closed-loop systems • Underfloor systems • Servers sit on raised concrete tile floors

  20. Power Cooling Systems • Two-loop design • Loop 1: hot-air/cold-air circuit (red/blue arrows in the figure) • Loop 2: liquid supply to the computer room air conditioning (CRAC) units, which carries the heat out for discharge

  21. Example of Cooling System Design • Three-loop system • The chiller sends chilled water to the CRAC units • Heated water returns from the building to the chiller for heat dispersal • A condenser-water loop carries the heat to the cooling tower

  22. Cooling System for Google

  23. Estimated Annual Costs

  24. Estimated Carbon Costs for Power Based on how the local utility generates its power: oil, natural gas, coal, or renewable sources including hydroelectricity, solar, wind, and biofuels.

  25. Power Efficiency • Sources of efficiency loss • Cooling-system overheads, such as chillers • Air movement • IT equipment • Power distribution units • Improvements to efficiency • Handle air flow more carefully: keep the cooling path short and keep hot exhaust air separate from the cold supply air • Consider raising cooling temperatures • Employ "free cooling" by locating the data center in a cooler climate • Select a more efficient power distribution system (see the PUE sketch below)
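These overheads are conventionally summarized as Power Usage Effectiveness (PUE), total facility power divided by the power delivered to IT equipment; the deck does not define PUE explicitly, and the numbers below are illustrative.

```python
# Power Usage Effectiveness: how much facility power is drawn per watt of IT power.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

print(pue(total_facility_kw=1500, it_equipment_kw=1000))   # 1.5 -> 50% overhead
print(pue(total_facility_kw=1120, it_equipment_kw=1000))   # 1.12 -> efficient facility
```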

  26. Data Center Failures • Reliability of the data center • A trade-off between the cost of failures (and of repairing them) and the cost of preventing them • Fault tolerance • Traditional servers demand a high degree of reliability and redundancy to prevent failures as much as possible • For warehouse-scale computers this is not practical • Example: a cluster of 10,000 servers will see an average of about one server failure per day (see the arithmetic below)
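That one-failure-per-day figure follows from a simple expected-value calculation; the 4% annual per-server failure rate below is an assumption chosen to roughly reproduce the slide's number, not a value from the deck.

```python
# Expected daily server failures, assuming independent failures at a fixed annual rate.
def expected_failures_per_day(num_servers: int, annual_failure_rate: float) -> float:
    return num_servers * annual_failure_rate / 365.0

print(round(expected_failures_per_day(10_000, annual_failure_rate=0.04), 2))   # ~1.1/day
```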

  27. Data Center Failures • Fault severity categories • Corrupted: data is lost, corrupted, or cannot be regenerated • Unreachable: the service is down • Degraded: the service is available but limited • Masked: faults occur, but fault-tolerance mechanisms hide them from the user

  28. Data Center Fault Causes • Causes • Software errors • Faulty configurations • Human error • Networking faults • Faulty hardware • It is easier to tolerate known hardware issues than software bugs or human error • Repairs • It is not critical to repair individual servers quickly • In practice, repairs are scheduled as a daily sweep • Individual failures mostly do not affect overall data center health • The system is designed to tolerate faults

  29. Google Restarts and Downtime

  30. Relatively New Class of Computers • Facebook founded in 2004 • Google’s Modular Data Center in 2005 • Microsoft’s Online Services Division in 2005 • Amazon Web Services in 2006 • Netflix added streaming in 2007

  31. Balanced System • Nature of workload at this scale is: • Large volume • Large variety • Distributed • This means no servers (or parts of servers) get to slack while others do the work. • Keep servers busy to amortize cost • Need high performance from all components!

  32. Imbalanced Parts • Latency lags bandwidth

  33. Imbalanced Parts • CPUs have been the historical focus of optimization

  34. Focus Needs to Shift • The push toward SaaS will highlight these disparities • Requires concentrating research on: • Improving non-CPU components • Improving responsiveness • Improving the end-to-end experience

  35. Why does latency matter? • Responsiveness dictated by latency • Productivity affected by responsiveness

  36. Real Estate Considerations • Land • Power • Cooling • Taxes • Population • Disasters

  37. Google’s Data Centers

  38. Economic Efficiency • The data center building is a non-trivial cost • That figure does not include land • Servers are the bigger cost • More servers per facility is desirable • Keeping servers busy is desirable

  39. Improving Efficiency • Better components • Energy proportionality (less use means less energy; see the sketch below) • Power-saving modes • Transparent (e.g., clock gating) • Active (e.g., CPU throttling) • Inactive (e.g., idle drives stop spinning)
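A toy model makes the energy-proportionality problem concrete: typical servers draw a large fraction of their peak power even when idle, so power does not track utilization. The 50% idle fraction and 500 W peak below are assumptions for illustration only.

```python
# Server power as a function of utilization under a simple linear model.
def server_power_watts(utilization: float, peak_w: float = 500.0,
                       idle_fraction: float = 0.5) -> float:
    idle_w = peak_w * idle_fraction
    return idle_w + (peak_w - idle_w) * utilization

for u in (0.0, 0.3, 1.0):
    print(u, server_power_watts(u))   # 250 W idle, 325 W at 30%, 500 W at peak
```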

  40. Changing Workloads • Workloads are more agile in nature • SaaS means shorter release cycles • Office 365 updates several times per year • Some Google services update weekly • Even major software gets rewritten • The Google search engine has been rewritten from scratch four times • Internet services are still young • Usage can be unpredictable

  41. YouTube • Started in 2005 • Fifth most popular site within first year

  42. Adapting • Strike a balance between the need to deploy quickly and longevity • Need it both fast and good • Design to make software easy to create • Easier to find programmers • Redesign when warranted • Google Search's rewrites removed inefficiencies • Contrast with Intel's backward compatibility spanning decades

  43. Future Trends • Continued emphasis on: • Parallelism • Networking, both within and to/from data centers • Reliability via redundancy • Optimizing efficiency (energy proportionality) • Environmental impact • Energy costs • Amdahl's Law will remain a major factor • Need increased focus on end-to-end systems • Computing as a utility?

  44. “Anyone can build a fast CPU. The trick is to build a fast system.” -Seymour Cray

