
The Personal Petabyte / The Enterprise Exabyte


Presentation Transcript


  1. The Personal Petabyte / The Enterprise Exabyte. Jim Gray, Microsoft Research. Presented at IIST Asilomar, 10 December 2003. http://research.microsoft.com/~gray/talks

  2. Outline • History • Changing Ratios • Who Needs a Petabyte? Thesis: in 20 years, Personal Petabyte will be affordable. Most personal bytes will be video. Enterprise Exabytes will be sensor data.

  3. An Early Disk • Phaistos Disk • 1700 BC • Minoan (Cretan, Greek) • No one can read it

  4. Early Magnetic Disk, 1956 • IBM 305 RAMAC • 4 MB • 50 x 24” disks • 1200 rpm • 100 ms access • 35k$/y rent • Included computer & accounting software (tubes, not transistors)

  5. 10 years later (1966 Illiac): 30 MB, 1.6 meters

  6. Or 1970: IBM 2314, 29 MB

  7. History: 1980 Winchester • Seagate 5¼” 5 MB • Fujitsu Eagle 10” 450 MB

  8. The MAD Future: Terror Bytes • In the beginning there was the Paramagnetic Limit: 10 Gbpsi • Limit keeps growing (now ~200 Gbpsi) • Mark H. Kryder, Seagate, Future Magnetic Recording Technologies, FAST 2001 @ Monterey (PDF), apologizes: “Only 100x density improvement, then we are out of ideas” • That’s 20 TB desktop, 4 TB laptop!

  9. Outline • History • Changing Ratios • Disk to RAM • DASD is dead • Disk space is free • Disk archive-interchange • Network faster than disk • Capacity, access • TCO == people cost • Smart disks happened • The entry cost barrier • Who Needs a Petabyte?

  10. Storage Ratios Changed • 10x better access time • 10x more bandwidth • 100x more capacity • Data 25x cooler (1 Kaps/20 MB vs 1 Kaps/500 MB) • 4,000x lower media price • 20x to 100x lower disk price • Scan takes 10x longer (3 min vs 45 min) • RAM/disk media price ratio changed: 100:1 (1970-1990), 10:1 (1990-1995), 50:1 (1995-1997), ~200:1 today (1 $/GB disk vs 200 $/GB DRAM)

  11. Disk Data Can Move to RAM in 10 Years • Disk is ~100x cheaper than RAM per byte • Both get 100x bigger in 10 years • Seems: RAM/disk bandwidth ~100:1 • So Price_RAM_TB(t+10) = Price_Disk_TB(t): move data to main memory
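A back-of-envelope check of the slide's arithmetic, assuming (per the slide) that price per TB falls ~100x per decade for both media and that RAM stays ~100x the price of disk per byte:

```latex
% If RAM price per TB falls ~100x per decade, and RAM costs ~100x disk:
\[
P_{\mathrm{RAM}}(t+10) \;\approx\; \frac{P_{\mathrm{RAM}}(t)}{100}
  \;\approx\; \frac{100\,P_{\mathrm{disk}}(t)}{100} \;=\; P_{\mathrm{disk}}(t)
\]
% i.e., what fits on disk at a given budget today fits in RAM at the
% same budget ten years later.
```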

  12. DASD (Direct Access Storage Device) is Dead • 300 GB, 50 MB/s • Accesses got cheaper • Better disks • Cheaper disks! • Disk access/bandwidth is the scarce resource: 1990: 5-minute scan; 2003: 100-minute scan • Sequential bandwidth is 50x faster than random; a random scan takes 3 days • Ratio will get 10x worse in 10 years: 100x more capacity, 10x more bandwidth • Invent ways to trade capacity for bandwidth: use the capacity without using bandwidth
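A minimal sketch of the slide's scan arithmetic, using its own figures (300 GB disk, 50 MB/s sequential, sequential ~50x faster than random); the helper name is illustrative:

```python
# Back-of-envelope scan times from slide 12's figures.

def scan_time_seconds(capacity_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Time to read the whole disk once at the given bandwidth."""
    return capacity_bytes / bandwidth_bytes_per_s

GB = 10**9
capacity = 300 * GB          # 2003 disk from the slide
seq_bw = 50 * 10**6          # 50 MB/s sequential
rand_bw = seq_bw / 50        # slide: sequential ~50x faster than random

print(scan_time_seconds(capacity, seq_bw) / 60, "minutes sequential")   # ~100 min
print(scan_time_seconds(capacity, rand_bw) / 86400, "days random")      # ~3.5 days
```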

  13. Disk Space is “Free”; Bandwidth & Accesses/sec Are Not • 1k$/TB going to 100$/TB • 20 TB disks on the (distant) horizon (100x density) • Waste capacity intelligently • Version everything • Never delete anything • Keep many copies • Snapshots • Mirrors (triple and geoplex) • Cooperative caching (Farsite and OceanStore) • Disk archive

  14. Disk as Archive-Interchange • Tape is archive / interchange / low cost • Disk now competitive in all 3 categories • What format? FAT? CDFS? … • What tools? • Need the software to do disk-based backup/restore • Commonly snapshot (multi-version FS) • Radical: peer-to-peer file archiving • Many researchers looking at this: OceanStore, Farsite, others…

  15. Disk vs Network: Now the Network is Faster (!) • Old days: 10 MBps disk, low CPU cost (0.1 ins/byte); 1 MBps net, huge CPU cost (10 ins/byte) • New days: 50 MBps disk, low CPU cost; 100 MBps net, low CPU cost (TOE, RDMA) • Consequence: you can remote disks • Allows consolidation • Aggregate (bisection) bandwidth still a problem

  16. Storage TCO == People Time • 1980 rules of thumb: 1 systems programmer per MIPS; 1 data admin per 10 GB • That would be 800 sys programmers + 4 data admins for your laptop (sometimes it must seem like that), but… • Today: one data admin per 1 TB … 300 TB, depending on process and data value • Automate everything • Use redundancy to mask (and repair) problems • Save people, spend hardware

  17. Smart Disks Happened • Disk appliances are here: cameras, games, PVRs, file servers • Challenge: entry price

  18. The Entry Cost Barrier: Connect the Dots • Consumer electronics want low entry cost • 1970: 20,000$ • 1980: 2,000$ • 2000: 200$ • 2010: 20$ • If magnetics can’t do this, another technology will • Think: copiers, hydraulic shovels, … • [Chart: ln(price) vs. time, extrapolating from “today” to the “wanted” entry price]

  19. Outline • History • Changing Ratios • Who Needs a Petabyte? • Petabyte for 1k$ in 15-20 years • Affordable, but useless? • How much information is there? • The Memex vision • MyLifeBits • The other 20% (enterprise storage) • [Scale ladder: Kilo … Yotta; “we are here”]

  20. A Bleak Future: The ½-Platter Society? • Conclusion from the Information Storage Industry Consortium HDD Applications Roadmap Workshop: “Most users need only 20 GB” • We are heading to a ½-platter industry • 80% of units and capacity is personal disks (not enterprise servers) • The end of disk capacity demand • A zero-billion-dollar industry?

  21. Try to Fill a Terabyte in a Year • Petabyte volume has to be some form of video
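The slide's point can be made with a quick calculation; the item sizes below are illustrative assumptions, not figures from the talk:

```python
# How many items per day does it take to fill 1 TB in a year?
# Item sizes are illustrative assumptions, not from the slide.

TB = 10**12
sizes = {
    "photo (JPEG, ~300 KB)":      300 * 10**3,
    "document (~1 MB)":           10**6,
    "hour of MP3 (~128 MB)":      128 * 10**6,
    "hour of DV video (~13 GB)":  13 * 10**9,
}
for item, size in sizes.items():
    per_day = TB / size / 365
    print(f"{item}: ~{per_day:,.0f} per day")
# Only video plausibly fills a terabyte per year, which is why
# petabyte volume has to be some form of video.
```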

  22. Growth Comes From NEW Apps • The 10M$ computer of 1980 costs 1k$ today • If we were still doing the same things, IT would be a 0 B$/y industry • NEW things absorb the new capacity • 2010 portable? • 100 Gips processor • 1 GB RAM • 1 TB disk • 1 Gbps network • Many form factors

  23. The Terror Bytes are Here • 1 TB costs 1k$ to buy • 1 TB costs 300k$/y to own • Management & curation are expensive • (I manage about 15 TB in my spare time; no, I am not paid 4.5M$/y to manage it) • Searching 1 TB takes minutes or hours or days or… • I am petrified by petabytes • But people can “afford” them, so we have lots to do: automate! • [Scale ladder: Kilo … Yotta; “we are here”]

  24. How Much Information Is There? • Soon everything can be recorded and indexed • Most bytes will never be seen by humans • Data summarization, trend detection, and anomaly detection are key technologies • See Mike Lesk, “How Much Information Is There?”: http://www.lesk.com/mlesk/ksg97/ksg.html • See Lyman & Varian, “How Much Information”: http://www.sims.berkeley.edu/research/projects/how-much-info/ • [Scale ladder from Kilo to Yotta, marking: a book, a photo, a movie, all books (words), all books multimedia, everything recorded!]

  25. Memex: “As We May Think”, Vannevar Bush, 1945 • “A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility” • “yet if the user inserted 5000 pages of material a day it would take him hundreds of years to fill the repository, so that he can be profligate and enter material freely”

  26. Why Put Everything in Cyberspace? • Low rent: min $/byte • Shrinks time: now or later • Shrinks space: here or there • Automate processing: knowbots • Point-to-point OR broadcast • Immediate OR time-delayed • Locate, process, analyze, summarize

  27. How Will We Find Anything? • Need queries, indexing, pivoting, scalability, backup, replication, online update, set-oriented access • If you don’t use a DBMS, you will implement one! • Simple logical structure: blob and link is all that is inherent • Additional properties (facets == extra tables) and methods on those tables (encapsulation) • More than a file system • Unifies data and metadata • [Diagram label: SQL++ DBMS]
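A minimal sketch of the slide's "blob and link, facets as extra tables" structure, using SQLite for concreteness; all schema names here are illustrative assumptions, not from the talk:

```python
# Slide 27's logical structure in miniature: a blob, links between
# blobs, and "facets" (extra property tables) keyed by item id.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE item  (id INTEGER PRIMARY KEY, blob BLOB);      -- the bytes
CREATE TABLE link  (src INTEGER, dst INTEGER, kind TEXT);    -- blob-to-blob links
CREATE TABLE photo (id INTEGER REFERENCES item(id),          -- a photo facet
                    taken TEXT, lat REAL, lon REAL);
CREATE TABLE mail  (id INTEGER REFERENCES item(id),          -- a mail facet
                    sender TEXT, subject TEXT);
""")

# Set-oriented access across facets: photos attached to matching mail.
rows = db.execute("""
    SELECT p.id, p.taken
    FROM mail m
    JOIN link l  ON l.src = m.id
    JOIN photo p ON p.id = l.dst
    WHERE m.subject LIKE '%Asilomar%'
""").fetchall()
print(rows)   # empty here; the point is the schema shape
```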

  28. MyLifeBits: The Guinea Pig • Gordon Bell is digitizing his life • Has now scanned virtually all: • Books written (and read when possible) • Personal documents (correspondence, memos, email, bills, legal, …) • Photos • Posters, paintings, photos of things (artifacts, … medals, plaques) • Home movies and videos • CD collection • And, of course, all PC files • Now recording: phone, radio, TV (movies), web pages… conversations • Paperless throughout 2002: 12” scanned, 12’ discarded • Only 30 GB!!! (excluding digital video) • Video is 2+ TB and growing fast

  29. Capture and encoding

  30. I mean everything

  31. gbell WAG: a 67-year, 25K-day life → a Personal Petabyte (1 PB)
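The WAG works out to roughly 40 GB/day; a one-line check (the unit constants are mine):

```python
# Gordon Bell's wild-ass guess: a 25,000-day life and a 1 PB store
# imply ~40 GB per day, i.e., a few hours of video per day -- which
# is why the personal petabyte is mostly video.
PB, GB = 10**15, 10**9
print(PB / 25_000 / GB, "GB per day")   # 40.0
```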

  32. 80% of data is personal / individual. But what about the other 20%? • Business • Wal-Mart online: 1 PB and growing… • Paradox: most “transaction” systems < 1 PB • Have to go to image/data monitoring for big data • Government • Government is the biggest business • Science • LOTS of data

  33. Information Avalanche • Both better observational instruments and better simulations are producing a data avalanche • Examples: • Turbulence: 100 TB simulation, then mine the information • BaBar: grows 1 TB/day (2/3 simulation information, 1/3 observational information) • CERN: LHC will generate 1 GB/s, 10 PB/y • VLBA (NRAO) generates 1 GB/s today • NCBI: “only ½ TB” but doubling each year; very rich dataset • Pixar: 100 TB/movie • Image courtesy of C. Meneveau & A. Szalay @ JHU

  34. Q: Where Will the Data Come From? A: Sensor Applications • Earth observation: 15 PB by 2007 • Medical images & information + health monitoring: potential 1 GB/patient/y → 1 EB/y • Video monitoring: ~1E8 video cameras @ ~1E5 Bps → 10 TB/s → 100 EB/y filtered??? • Airplane engines: 1 GB sensor data/flight, 100,000 engine hours/day → 30 PB/y • Smart dust: ?? EB/y • http://robotics.eecs.berkeley.edu/~pister/SmartDust/ • http://www-bsac.eecs.berkeley.edu/~shollar/macro_motes/macromotes.html
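A hedged recomputation of the video-monitoring line; the per-camera rate is inferred from the slide's own 10 TB/s total, so treat it as an assumption:

```python
# Slide 34's video-monitoring arithmetic, reconstructed.
# 1e8 cameras at ~1e5 bytes/s each gives the slide's 10 TB/s total;
# the per-camera rate is inferred, not stated in the talk.
cameras = 1e8
bytes_per_cam_s = 1e5
total_bytes_per_s = cameras * bytes_per_cam_s        # 1e13 B/s = 10 TB/s
per_year = total_bytes_per_s * 365 * 86400           # ~3e20 B/y raw
print(per_year / 1e18, "EB/year unfiltered")         # ~315, ~100 EB/y after filtering
```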

  35. DataGrid Computing • Store exabytes twice (for redundancy) • Access them from anywhere • Implies huge archive/data centers • Supercomputer centers become super data centers • Examples: Google, Yahoo!, Hotmail, BaBar, CERN, Fermilab, SDSC, …

  36. Outline • History • Changing Ratios • Who Needs a Petabyte? Thesis: in 20 years, Personal Petabyte will be affordable. Most personal bytes will be video. Enterprise Exabytes will be sensor data.

  37. Bonus Slides

  38. TerraServer V4 (2000…2004) • 8 web front ends • 4 x (8-CPU + 4 GB) DB servers • 18 TB triplicated disks, classic SAN (tape not shown) • ~2M$ • Works GREAT! • Now replaced by… • [Diagram: WEB x8, SQL x4, SAN]

  39. TerraServer V5 • Storage bricks: “white-box commodity servers” • 4 TB raw / 2 TB RAID1 SATA storage • Dual hyper-threaded Xeon 2.4 GHz, 4 GB RAM • Partitioned databases (PACS: partitioned array) • 3 storage bricks = 1 TerraServer data set • Data partitioned across 20 databases • More data & partitions coming • Low-cost availability: • 4 copies of the data • RAID1 SATA mirroring • 2 redundant “bunches” • Spare brick to repair a failed brick: 2N+1 design • Web application is “bunch aware”: • Load balances between redundant databases • Fails over to the surviving database on failure • ~100K$ capital expense • [Diagram: KVM / IP]

  40. How Do You Move a Terabyte?

  Context      Speed (Mbps)   Rent ($/month)   $/Mbps   $/TB sent   Time/TB
  Home phone          0.04            40        1,000       3,086   6 years
  Home DSL            0.6             70          117         360   5 months
  T1                  1.5          1,200          800       2,469   2 months
  T3                 43           28,000          651       2,010   2 days
  OC3               155           49,000          316         976   14 hours
  OC192           9,600        1,920,000          200         617   14 minutes
  100 Mbps          100                -            -           -   1 day
  Gbps            1,000                -            -           -   2.2 hours

  Source: TeraScale SneakerNet, Microsoft Research, Jim Gray et al.
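The derived columns follow from two small formulas; a sketch, assuming a 30-day month (small rounding gaps vs the table are expected):

```python
# Reproduce slide 40's derived columns from (speed, monthly rent).
# time/TB = bits in a terabyte / line rate; $/TB = rent prorated
# over that transfer time.

def time_per_tb_seconds(mbps: float) -> float:
    """Seconds to push 1 TB (8e12 bits) through an mbps link."""
    return 8e12 / (mbps * 1e6)

def dollars_per_tb(mbps: float, rent_per_month: float) -> float:
    """Monthly rent prorated over the transfer time."""
    months = time_per_tb_seconds(mbps) / (30 * 86400)
    return rent_per_month * months

for name, mbps, rent in [("Home DSL", 0.6, 70), ("T3", 43, 28_000)]:
    print(f"{name}: {time_per_tb_seconds(mbps)/86400:.1f} days, "
          f"${dollars_per_tb(mbps, rent):,.0f}/TB")
# Home DSL: ~154 days (~5 months), ~$360/TB; T3: ~2.2 days, ~$2,010/TB
```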

  41. Key Observations for Personal Stores (and for Larger Stores) • Schematized storage can help organization and search • Schematized XML data sets are a universal way to exchange data, answers, and new data • If data are objects, then we need a standard representation for classes & methods

  42. Longhorn: For Knowledge Workers • Simple (Self-*): auto install/manage/tune/repair • Schema: data carries semantics • Search: find things fast (driven by schema) • Sync: “desktop state” anywhere • Security (Palladium): privacy, trustworthiness (virus, spam, …), DRM (protect IP) • Shell: task-based UI (aka activity-based UI) • Office-Longhorn: • Intelligent documents • XML and schemas

  43. How Do We Represent It to the Outside World? Schematized Storage • File metaphor too primitive: just a blob • Table metaphor too primitive: just records • Need metadata describing data context: format, provenance (author/publisher/citations/…), rights, history, related documents • In a standard format: XML and XML Schema • DataSet is a great example of this • World is now defining standard schemas • Example (schema plus data, or diffgram):

  <?xml version="1.0" encoding="utf-8" ?>
  <DataSet xmlns="http://WWT.sdss.org/">
    <xs:schema id="radec" xmlns="" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
      <xs:element name="radec" msdata:IsDataSet="true">
        <xs:element name="Table">
          <xs:element name="ra" type="xs:double" minOccurs="0" />
          <xs:element name="dec" type="xs:double" minOccurs="0" />
          …
    <diffgr:diffgram xmlns:msdata="urn:schemas-microsoft-com:xml-msdata" xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1">
      <radec xmlns="">
        <Table diffgr:id="Table1" msdata:rowOrder="0">
          <ra>184.028935351008</ra>
          <dec>-1.12590950121524</dec>
        </Table>
        …
        <Table diffgr:id="Table10" msdata:rowOrder="9">
          <ra>184.025719033547</ra>
          <dec>-1.21795827920186</dec>
        </Table>
      </radec>
    </diffgr:diffgram>
  </DataSet>

  44. There Is a Problem • Niklaus Wirth: Algorithms + Data Structures = Programs • GREAT!!!! • XML documents are portable objects • XML documents are complex objects • WSDL defines the methods on objects (the class) • But will all the implementations match? • Think of UNIX or SQL or C or… • This is a work in progress.

  45. Disk Storage Cheaper Than Paper • File cabinet (4-drawer): • Cabinet 250$ • Paper (24,000 sheets) 250$ • Space (2x3 ft @ 10$/ft²) 180$ • Total 700$ → 0.03 $/sheet (3 pennies per page) • Disk: • Disk (250 GB) 250$ • ASCII: 100 M pages → 2e-6 $/sheet (10,000x cheaper; a micro-dollar per page) • Image: 1 M photos → 3e-4 $/photo (100x cheaper; a milli-dollar per photo) • Store everything on disk • Note: disk is 100x to 1000x cheaper than RAM
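A quick check of the per-page figures; the page sizes (2.5 KB ASCII, ~250 KB image) are my assumptions, chosen so the slide's "100 M pages" and "1 M photos" come out of a 250 GB disk:

```python
# Slide 45's cost-per-page arithmetic.
cabinet_total, sheets = 700, 24_000
print(cabinet_total / sheets)            # ~0.03 $/sheet (3 cents per page)

disk_cost, disk_bytes = 250, 250e9
print(disk_cost / (disk_bytes / 2.5e3))  # ~2.5e-6 $/ASCII page (micro-dollar)
print(disk_cost / (disk_bytes / 2.5e5))  # ~2.5e-4 $/image page (milli-dollar)
```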

  46. Data Analysis • Looking for: • Needles in haystacks: the Higgs particle • Haystacks: dark matter, dark energy • Needles are easier than haystacks • Global statistics have poor scaling: • Correlation functions are N², likelihood techniques N³ • As data and computers grow at the same rate, we can only keep up with N log N • A way out? • Discard the notion of optimal (data is fuzzy, answers are approximate) • Don’t assume infinite computational resources or memory • Requires a combination of statistics & computer science

  47. Analysis and Databases • Much statistical analysis deals with: • Creating uniform samples • Data filtering • Assembling relevant subsets • Estimating completeness • Censoring bad data • Counting and building histograms • Generating Monte Carlo subsets • Likelihood calculations • Hypothesis testing • Traditionally these are performed on files • Most of these tasks are much better done inside a DB • Bring Mohamed to the mountain, not the mountain to him

  48. Data Access is Hitting a Wall: FTP and GREP Are Not Adequate • You can GREP 1 MB in a second • You can GREP 1 GB in a minute • You can GREP 1 TB in 2 days • You can GREP 1 PB in 3 years • Oh!, and 1 PB ~ 5,000 disks • You can FTP 1 MB in 1 sec • You can FTP 1 GB / min (= 1 $/GB) • … 2 days and 1K$ • … 3 years and 1M$ • At some point you need indices to limit search: parallel data search and analysis • This is where databases can help

  49. Smart Data (Active Databases) • If there is too much data to move around, take the analysis to the data! • Do all data manipulations at the database: • Build custom procedures and functions in the database • Automatic parallelism • Easy to build in custom functionality • Databases & procedures being unified • Examples: temporal and spatial indexing, pixel processing, … • Easy to reorganize the data: • Multiple views, each optimal for certain types of analyses • Building hierarchical summaries is trivial • Scalable to petabyte datasets
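A minimal sketch of "take the analysis to the data": register a user-defined function inside the database so the scan runs in the engine, not the client. SQLite is an illustrative stand-in for the talk's DBMS, and the ra/dec data echoes slide 43:

```python
# Slide 49's idea in miniature: push analysis code into the DBMS.
import math, sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE obs (ra REAL, dec REAL)")
db.executemany("INSERT INTO obs VALUES (?, ?)",
               [(184.028, -1.125), (184.025, -1.217)])

def ang_dist(ra1, dec1, ra2, dec2):
    """Rough angular distance in degrees (small-angle approximation)."""
    return math.hypot((ra1 - ra2) * math.cos(math.radians(dec1)), dec1 - dec2)

# The custom function executes inside the engine during the scan,
# so only matching rows cross back to the client.
db.create_function("ang_dist", 4, ang_dist)
near = db.execute(
    "SELECT * FROM obs WHERE ang_dist(ra, dec, 184.0, -1.2) < 0.1"
).fetchall()
print(near)
```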
