Evolution of High Performance Cluster Architectures
David E. Culler
culler@cs.berkeley.edu
http://millennium.berkeley.edu/
NPACI 2001 All Hands Meeting
Much has changed since "NOW"
[Figure: the Berkeley NOW lineage: NOW0 (HP workstations + Medusa FDDI), NOW1 (SPARCstations + ATM/Myrinet), NOW (110 UltraSPARCs + Myrinet), and inktomi.berkeley.edu]
The Basic Argument
• the performance cost of engineering lag: miss the 2x-per-18-months curve (worked example below)
• => rapid assembly of leading-edge HW and SW building blocks
• => availability through fault masking, not inherent reliability
• emergence of the "killer switch"
• opportunities for innovation:
  • move data between machines as fast as within one
  • protected user-level communication
  • large-scale management
  • fault isolation
  • novel applications
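To make the lag cost concrete: if commodity performance doubles every 18 months, a system that ships t months behind the leading edge delivers only 2^(-t/18) of the attainable performance. A 9-month lag, for instance, gives 2^(-9/18) ≈ 0.71, so roughly 30% of the performance is forfeited before the machine even boots.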
Clusters Took Off
• scalable Internet services: the only way to match their growth rate
• a changing supercomputer market
• web hosting
Engineering the Building Block
• the argument came full circle in ~1998
• a wide array of 3U, 2U, and 1U rack-mounted servers
  • thermals and mechanicals
  • processing per square foot
  • 110 V AC power routing: a mixed blessing
• component OS & drivers
• became the early entry point to the market
Emergence of the Killer Switch
• ATM, Fibre Channel, and FDDI "died"
• ServerNet bumps along
• IBM and SGI do the proprietary thing
• little Myrinet just keeps going: quite nice at this stage
• SAN standards shootout: NGIO + FutureIO => InfiniBand
  • specs the entire stack from phy to API
  • a nod to IPv6
  • big, complex, deeply integrated, DBC
• the Gigabit Ethernet steamroller...
  • limited by the TCP/IP stack, the NIC, and cost
Unexpected Breakthrough: Layer-7 Switches
[Figure: a layer-7 switch at the front of the cluster network]
• fell out of modern switch design
  • process packets in chunks
  • vast # of simultaneous connections
  • many line-speed packet filters per port
  • can be made redundant
• => a multi-gigabit cluster "front end"
• virtualize the IP address of a service (sketch below)
  • move the service within the cluster
  • replicate it, distribute it
• high-level transforms
• fail-over, load management
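A minimal sketch of what "virtualizing the IP address of a service" looks like in software: inspect the application payload (here, an HTTP request path) and pick a backend from a rule table. The addresses and rules are hypothetical, and a real layer-7 switch evaluates such filters in hardware at line speed.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical rule table: URL prefix -> pool of backend replicas. */
struct rule {
    const char *prefix;          /* match on the HTTP request path */
    const char *backends[4];     /* replicas of the service */
    int nbackends;
};

static struct rule rules[] = {
    { "/search", { "10.0.0.11", "10.0.0.12" }, 2 },
    { "/images", { "10.0.0.21" },              1 },
};

/* Pick a backend for one request: first prefix match on the path,
 * then round-robin across the replicas for load management. */
static const char *route(const char *path)
{
    static unsigned rr;
    for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
        if (strncmp(path, rules[i].prefix, strlen(rules[i].prefix)) == 0)
            return rules[i].backends[rr++ % rules[i].nbackends];
    return "10.0.0.1";           /* default pool */
}

int main(void)
{
    printf("%s\n", route("/search?q=now"));  /* 10.0.0.11 */
    printf("%s\n", route("/search?q=now"));  /* 10.0.0.12: the replica */
    return 0;
}
```

Because the client only ever sees the virtual IP, the cluster can move, replicate, or fail over the service behind it without breaking connections in flight on other backends.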
e-Science
• any useful app should be a service
Protected User-level Messaging
• the Virtual Interface Architecture (VIA) emerged
  • primitive & complex relative to the academic prototypes
  • an industrial compromise
  • went dormant
• incorporated in InfiniBand: the big one to watch
• potential breakthrough (see the sketch below)
  • user-level TCP and UDP with an IP NIC
  • storage over IP
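A toy sketch of the core idea, not the actual VIA API: the kernel maps a descriptor ring and a doorbell register into the process once, at setup time; after that, sending a message is a few user-level stores with no kernel trap on the data path, while the NIC enforces protection via the connection tag.

```c
#include <stdint.h>

#define RING 256

struct send_desc {
    uint64_t buf_addr;     /* pre-registered (pinned) user buffer */
    uint32_t len;
    uint32_t peer;         /* protection tag: which virtual interface */
};

struct vi {
    struct send_desc ring[RING];
    volatile uint32_t *doorbell;   /* memory-mapped NIC register */
    uint32_t head;
};

/* Post a send: fill the next descriptor, then ring the doorbell.
 * The NIC DMAs the buffer directly and checks the protection tag,
 * so the OS is out of the loop entirely. */
void vi_send(struct vi *vi, void *buf, uint32_t len, uint32_t peer)
{
    struct send_desc *d = &vi->ring[vi->head % RING];
    d->buf_addr = (uint64_t)(uintptr_t)buf;
    d->len  = len;
    d->peer = peer;
    vi->head++;
    *vi->doorbell = vi->head;      /* a single store is the whole "syscall" */
}
```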
Management
• the workstation -> PC transition was a step back
  • boot image distribution, OS distribution
  • network troubleshooting and service
• multicast proved a powerful tool (see the sketch after the Ganglia/REXEC figure)
• emerging health monitoring and control
  • HW level
  • service level
• the OS level is still a problem
[Figure: Rootstock image distribution: an Internet Rootstock server at UC Berkeley feeding local Rootstock servers at each site]
Ganglia and REXEC
[Figure: nodes A-D each run rexecd on a cluster IP multicast channel; vexecd daemons implement placement policies (Policy A: "nodes A,B"; Policy B: minimum $); a user submits: rexec -n 2 -r 3 indexer]
Also: bWatch; BPROC (Beowulf Distributed Process Space); VA Linux Systems' VACM (VA Cluster Manager)
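The management slides all lean on IP multicast: Rootstock image distribution, the rexec control channel, Ganglia heartbeats. A minimal receiver shows why it scales: one IGMP join per node, and the sender transmits each packet exactly once regardless of cluster size. The group address and port here are illustrative, not any tool's actual values.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8649);
    bind(s, (struct sockaddr *)&addr, sizeof addr);

    /* Join the cluster control channel: one join per node, no per-node
     * connection state at the sender. */
    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("239.2.11.71");
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof mreq);

    char buf[1500];
    for (;;) {
        ssize_t n = recv(s, buf, sizeof buf - 1, 0);
        if (n <= 0) break;
        buf[n] = '\0';
        printf("announcement: %s\n", buf);   /* heartbeat, job, or image block */
    }
    close(s);
    return 0;
}
```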
Network Storage
• the state of practice is still NFS + local copies
  • local-disk replica management is lacking
  • NFS doesn't scale: a major source of naive-user frustration
  • limited structured parallel access
• the SAN movement is only changing the device interface
• need cluster content distribution, caching, parallel access, and network striping (striping sketch below)
• see: GPFS, CFS, PVFS, HPSS, GFS, PPFS, CXFS, HAMFS, Petal, NASD...
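At its core, network striping is arithmetic: a file offset maps to a storage node and a local offset, so N nodes serve one large read in parallel. A sketch with made-up parameters; real systems (PVFS, Petal, ...) layer redundancy and metadata on top of this mapping.

```c
#include <stdio.h>

#define STRIPE  65536u   /* bytes per stripe unit (illustrative) */
#define NNODES  8u       /* storage nodes in the stripe group (illustrative) */

struct loc { unsigned node; unsigned long long off; };

/* RAID-0-style mapping: file offset -> (node, local offset). */
static struct loc locate(unsigned long long file_off)
{
    unsigned long long unit = file_off / STRIPE;   /* which stripe unit */
    struct loc l;
    l.node = (unsigned)(unit % NNODES);            /* round-robin over nodes */
    l.off  = (unit / NNODES) * STRIPE + file_off % STRIPE;
    return l;
}

int main(void)
{
    /* A 1 MB read starting at offset 0 touches all 8 nodes in parallel. */
    for (unsigned long long off = 0; off < 1 << 20; off += STRIPE) {
        struct loc l = locate(off);
        printf("offset %8llu -> node %u, local offset %llu\n",
               off, l.node, l.off);
    }
    return 0;
}
```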
Alternative: a Clustered Service over Distributed Persistent Data Structures
[Figure: each service instance links a DDS library exposing a distributed hash table API; a redundant, low-latency, high-throughput system area network connects them to storage "bricks", each a single-node durable hash table]
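A sketch of the hash-table API in the figure, with hypothetical names: the DDS library hashes the key to pick a brick, and writes to a mirror as well so a single brick failure loses nothing. The real library coordinated the replicas with two-phase writes over the system area network; here the network step is a stand-in.

```c
#include <stdint.h>
#include <stdio.h>

#define NBRICKS 6   /* illustrative brick count */

static uint32_t hash(const char *key)          /* FNV-1a, 32-bit */
{
    uint32_t h = 2166136261u;
    while (*key) { h ^= (uint8_t)*key++; h *= 16777619u; }
    return h;
}

/* Each partition lives on two bricks: a primary and a mirror. */
static void dds_put(const char *key, const char *val)
{
    uint32_t primary = hash(key) % NBRICKS;
    uint32_t mirror  = (primary + NBRICKS / 2) % NBRICKS;
    /* stand-in for the redundant write over the system area network */
    printf("put(%s=%s) -> brick %u and brick %u\n", key, val, primary, mirror);
}

int main(void)
{
    dds_put("user:1017", "{...}");
    dds_put("session:42", "{...}");
    return 0;
}
```

The service code above the DDS library sees only a single-node hash-table API; partitioning, replication, and recovery all happen underneath it.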
"Performance Available" Storage
[Figure: static parallel aggregation binds aggregators (A) to disks (D) in advance; adaptive parallel aggregation feeds them through a distributed queue]
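The difference in the figure, as a sketch with made-up worker and item counts: static aggregation hands each worker a fixed 1/Nth of the partitions up front, so the whole job waits for the slowest node; adaptive aggregation has workers pull the next partition from a shared queue, so a slow node simply takes fewer of them.

```c
#include <pthread.h>
#include <stdio.h>

#define NITEMS   64   /* partitions to aggregate (illustrative) */
#define NWORKERS 4    /* aggregator threads (illustrative) */

static int next_item;                        /* head of the shared queue */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static int take(void)                        /* returns -1 when drained */
{
    pthread_mutex_lock(&lock);
    int i = (next_item < NITEMS) ? next_item++ : -1;
    pthread_mutex_unlock(&lock);
    return i;
}

static void *worker(void *arg)
{
    long id = (long)arg, done = 0;
    for (int i; (i = take()) >= 0; done++)
        ;                                    /* aggregate partition i here */
    printf("worker %ld aggregated %ld items\n", id, done);
    return NULL;
}

int main(void)
{
    pthread_t t[NWORKERS];
    for (long i = 0; i < NWORKERS; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < NWORKERS; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```

In a cluster the queue itself is distributed rather than a local mutex, but the scheduling property is the same: throughput tracks the performance each node can actually deliver.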
Application Software
• very little movement toward harnessing the architectural potential
• application as service
  • processes a stream of requests (not shell or batch jobs)
  • grows & shrinks on demand
  • replication for availability: data and functionality
• tremendous internal bandwidth
• outer-level optimizations, not algorithmic ones
The Time is NOW
• finish the system area network
• tackle the cluster I/O problem
• come together around management tools
• get serious about application services