Garden of Architectures: CSG Workshop, May 2008



  1. Garden of Architectures. CSG Workshop, May 2008. Jim Pepin, CTO

  2. Disruptive change • Doubling (Moore’s Law or …) • Transistors • Multi-core • Disk capacity • New mass storage (flash, etc.) • Parallel apps • Storage management • Optics-based networking (a rough doubling sketch follows below)
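
A back-of-the-envelope sketch of what "doubling" means in practice. The two-year doubling period and the 2008 baseline figures (cores per socket, disk size) are illustrative assumptions, not numbers from the talk.

    # Project a quantity forward assuming it doubles every fixed period (illustrative only).
    def doubled(value_2008, years, doubling_period_years=2.0):
        """Value after `years`, given a doubling every `doubling_period_years`."""
        return value_2008 * 2 ** (years / doubling_period_years)

    # Hypothetical 2008 baselines: ~4 cores per socket, ~1 TB disks.
    for label, baseline in [("cores per socket", 4), ("disk capacity (TB)", 1.0)]:
        print(f"{label}: 2008 = {baseline}, ten years out ~ {doubled(baseline, 10):.0f}")

Ten doublings-worth of growth in a decade (roughly 32x) is what turns today's departmental cluster into tomorrow's desktop problem.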

  3. Disruptive Change • Federated identity • Large virtual organizations (VOs) • Shared research/clinical spaces • Team science/academics • Paradigm shift • CI (cyberinfrastructure) as a tool for all scholarship

  4. Disruptive Change • Lack of diversity in computing architectures • x64 has ‘won’ • Maybe IBM/POWER exists at the edges • Maybe Sun/SPARC at the edges • This creates a mono-culture • Dangerous • Innovation here is in the consumer space • Game boxes/phones drive it

  5. Network Futures • Optical bypasses • Very high speed • Low friction • Low jitter • Facilities-based • GLIF examples • RONs (regional optical networks) • Exchanges

  6. Network Futures • “Security” is driving researchers away from us • Are we the problem? • Where does ‘security’ belong? • How do we do VOs with a two-port Internet? • Will we see our networks become the ‘campus phone switch’ of the 2010s?

  7. Data futures • Massive storage (really, really big) • Object-oriented (in some cases) • Preservation • Provenance • Distributed • Blur between databases and file systems • Metadata (see the record sketch below)
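
A minimal sketch of the kind of metadata-plus-provenance record the slide alludes to. The field names and the JSON layout are assumptions for illustration, not a standard the talk prescribes.

    import json

    # Hypothetical metadata record for a research data object: descriptive fields,
    # preservation policy, and a provenance trail of how the object was produced.
    record = {
        "object_id": "dataset-000123",            # assumed identifier scheme
        "owner": "example-lab",                   # assumed owning group
        "created": "2008-05-01T12:00:00Z",
        "preservation": {"replicas": 3, "checksum": "sha1:..."},
        "provenance": [
            {"step": "instrument-capture", "agent": "sensor-array-7"},
            {"step": "calibration", "agent": "pipeline-v2"},
        ],
    }
    print(json.dumps(record, indent=2))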

  8. New Operating Environments • Operating systems in the network • Grids • ID management • But done poorly from an integration view • How to build petascale single systems • Scaling applications is the biggest problem • Training • “Cargo cult” systems and applications

  9. New Operating Environments • 100s of TF at campus (but how to use it and build it on campus; see the node-count sketch below) • Tied into national petascale systems • All the problems of TeraGrid and VOs, on steroids • Network security friction points • Identity management • Non-homogeneous operating environments
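
To put "hundreds of TF on campus" next to "national petascale systems", a rough node-count calculation. The per-node peak figure is an assumed 2008-era commodity number, not a measurement.

    # How many commodity nodes do these targets take? (illustrative 2008-era figures)
    PER_NODE_GFLOPS = 80        # assumed: dual-socket quad-core node, peak
    CAMPUS_TARGET_TF = 300      # "100s of TF at campus"
    PETAFLOP_TF = 1000

    campus_nodes = CAMPUS_TARGET_TF * 1000 / PER_NODE_GFLOPS
    petascale_nodes = PETAFLOP_TF * 1000 / PER_NODE_GFLOPS
    print(f"~{campus_nodes:.0f} nodes for {CAMPUS_TARGET_TF} TF peak, "
          f"~{petascale_nodes:.0f} nodes for 1 PF peak")

Thousands of nodes either way, which is where the identity, security, and operating-environment friction points on this slide come from.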

  10. Computation • Massively parallel • Many cores (doubling every 2-3 yrs) • Commodity parts • Massive collections of nodes with high-speed interconnect • Heat and power density • Optical on-chip technology • Legacy code scales poorly (or worse; see the Amdahl’s law sketch below)
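
Why legacy code "scales poorly" as core counts double: Amdahl's law caps speedup by the serial fraction of the program. A small worked sketch; the 5% serial fraction is an assumed example value.

    # Amdahl's law: speedup on N cores when a fraction s of the work stays serial.
    def amdahl_speedup(n_cores, serial_fraction):
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

    serial = 0.05  # assume 5% of the legacy code cannot be parallelized
    for cores in (4, 16, 64, 256):
        print(f"{cores:4d} cores -> {amdahl_speedup(cores, serial):5.1f}x speedup")
    # Even with unlimited cores the speedup is bounded by 1/0.05 = 20x,
    # so doubling core counts alone does not rescue vector-era codes.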

  11. Viz/remote access • SHDTV-like quality (4K; see the bandwidth sketch below) • Enables true telemedicine and robotic surgery • Massive storage ties into this • The OptIPuter project (Calit2) is an example • Collaboration spaces with true haptic and visual presence • Social sites are simple prototypes • Large-screen applications and telepresence
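
A rough bandwidth estimate for the 4K streams the slide mentions. Resolution, bit depth, and frame rate are assumed values (close to digital-cinema 4K), and the result is for an uncompressed stream.

    # Uncompressed 4K stream bandwidth (illustrative digital-cinema-style numbers).
    width, height = 4096, 2160        # assumed "4K" frame size
    bits_per_pixel = 24               # assumed 8 bits per RGB channel
    frames_per_second = 24            # assumed cinema frame rate

    gbps = width * height * bits_per_pixel * frames_per_second / 1e9
    print(f"Uncompressed 4K: ~{gbps:.1f} Gb/s per stream")
    # ~5 Gb/s per uncompressed stream: why viz, telemedicine, and telepresence
    # need the optical, low-jitter paths from the network-futures slides.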

  12. Versus • Old code • Much of it based on the 360/VAX/name it • Gaussian is the poster child • Vector-optimized • Static IT models • Network defenders in IT hurt researchers • Researchers don’t play well with others • Condo model evolving

  13. Versus • Thinking this is just for science/engineering • Large data • Interactive applications • Social science apps • Education outcomes at Clemson • Large data, statistics on a huge scale • Shoah Foundation at USC • Massive data, networks, VO

  14. Vision/Sales Pitch • Access to various kinds of resources • Parallel high performance • Can be in a condo (depends on politics) • Flexible node configurations • Large storage of various flavors • Viz • Leading-edge networks

  15. “Clusters” • Large collection of multi-core nodes • High-performance interconnect • What makes a cluster not just a bunch of nodes? • Access to large data storage at parallel speeds (see the striping sketch below) • Lustre • SAM-QFS • PVFS • Ability to put in large-memory nodes
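
Part of what makes a cluster more than a bunch of nodes is parallel access to storage: a file striped across many servers can be read at roughly the sum of their speeds. The numbers below are assumed, illustrative figures, not measurements of Lustre, SAM-QFS, or PVFS.

    # Aggregate bandwidth when a file is striped across storage servers (idealized).
    per_server_mb_s = 400       # assumed per-server streaming rate
    stripe_counts = (1, 8, 32, 128)

    for servers in stripe_counts:
        aggregate = per_server_mb_s * servers / 1000.0   # GB/s, ignoring overheads
        print(f"{servers:4d} servers -> ~{aggregate:5.1f} GB/s aggregate")
    # A single file server tops out at one server's rate; a parallel file system
    # lets every compute node share the aggregate over the cluster interconnect.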

  16. “Clusters” • Magic chips • GPUs, FPGAs, etc. • Boutique today, but gains can be enormous • Relation to desktops/local systems • How to integrate into national systems • Identity/security/networking • Viz clusters • Render agents • Large-scale, friction-free networking

  17. Storage Farms • Diverse data models • Large streams (easy to do) • Large numbers of small files (hard to do) • Integrate mandates (security, preservation) • Blur between institutional data and personal/research data • Storage spans external, campus, departmental, and local • Speed of light matters (see the latency sketch below)
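
"Speed of light matters" in concrete terms: a round trip over distance costs milliseconds no matter how fast the link is, which is what makes lots of small, synchronous file operations across sites so painful. The distances below are assumed, illustrative values.

    # Minimum round-trip times over fiber (light travels ~200,000 km/s in glass).
    FIBER_KM_PER_S = 200_000

    for label, km in [("across campus", 2), ("across a region", 500),
                      ("across the US", 4000), ("trans-Pacific", 9000)]:
        rtt_ms = 2 * km / FIBER_KM_PER_S * 1000
        print(f"{label:16s}: >= {rtt_ms:6.2f} ms round trip")
    # One small-file metadata operation per round trip across the US caps a client
    # at roughly 25 operations per second, regardless of link bandwidth.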

  18. Meaning of Life • Much closer relations needed with central IT • Networks/identity mgmt/security/policy • But not just ‘at scale’ • How to use the disruptive technologies • Cores, GPUs, Cell, FPGAs, flash, optical networks • Disruptive software/services as well

  19. Meaning of Life • Build an ecosystem of services • Some central, some local, some external • Not just computing, networks, and storage • Our community has “gone global” • The campus is not a castle • Earlier example of 8 social science faculty • We have thousands of communities • Can’t be one-size-fits-all
