Secure Peer-to-Peer File Sharing

Presentation Transcript


  1. Secure Peer-to-Peer File Sharing
  Frans Kaashoek, David Karger, Robert Morris, Ion Stoica, Hari Balakrishnan
  http://www.pdos.lcs.mit.edu/chord
  MIT Laboratory for Computer Science

  2. SFS: a secure global file system
  • One name space for all files
  • Global deployment
  • Security over untrusted networks
  [Figure: clients and an SFS server (HostID H21) spanning the Oxygen and MIT networks; example path /global/mit/kaashoek/sfs]

  3. SFS results
  • Research: how to do server authentication?
    • Self-certifying pathnames
    • Flexible key management
  • Complete system available: www.fs.net
    • 80,000 lines of C++ code
    • Toolkit for file system research
  • System used inside and outside MIT
  • Ported to iPAQ

  4. New direction: peer-to-peer file sharing
  • How to build distributed systems without centrally managed servers?
  • Many Oxygen technologies are peer-to-peer: INS, SFS/Chord, Grid
  • Chord is a new, elegant primitive for building peer-to-peer applications

  5. Peer-to-peer sharing example
  • Internet users share music files
  • Share disk storage and network bandwidth
  • Nodes are less reliable than centrally managed ones
  [Figure: home machines trading files directly over the Internet]

  6. Goal: Better Peer-to-Peer Storage
  • Lookup is the key problem, and lookup is not easy:
    • Gnutella scales badly
    • Freenet is imprecise
  • Chord lookup provides:
    • Good naming semantics and efficiency
    • An elegant base for layered features

  7. Lookup problem
  [Figure: nodes N1–N6 on a network; the author calls insert(name, document) at one node, and the consumer calls fetch(name) without knowing which node holds the document]
  • Current systems don’t scale: Gnutella, Freenet

  8. Chord Architecture
  • Interface: lookup(DocumentID) → (NodeID, IP address)
  • Chord consists of:
    • Consistent hashing
    • Small routing tables: log(n) entries
    • Fast join/leave protocol
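
The interface really is a single call. A hypothetical sketch of its shape in Python (the real implementation is C++; the names here are illustrative, not the actual API):

```python
# Hypothetical Python rendering of the one-call Chord interface
# (illustrative names only; the real system is written in C++).
from typing import Protocol

class ChordLookup(Protocol):
    def lookup(self, document_id: int) -> tuple[int, str]:
        """Map a DocumentID to the (NodeID, IP address) responsible for it."""
        ...
```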

  9. Consistent Hashing
  [Figure: circular 7-bit ID space (0 at top) holding nodes N32, N90, N105 and documents D20, D80, D120]
  • Example: Node 90 is the “successor” of document 80.
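
The successor rule is mechanical enough to check in a few lines. A minimal sketch of it on the slide’s 7-bit ring, using the node and document IDs from the figure:

```python
# Successor rule from the figure, on a circular 7-bit ID space.
# Nodes and documents are the ones drawn on the slide: N32, N90, N105
# and D20, D80, D120.
RING = 2 ** 7  # IDs 0..127

NODES = [32, 90, 105]

def successor(key: int, nodes: list[int]) -> int:
    """First node clockwise from key: that node stores the document."""
    for n in sorted(nodes):
        if n >= key % RING:
            return n
    return min(nodes)  # wrapped past the top of the ring

assert successor(80, NODES) == 90    # "Node 90 is the successor of document 80"
assert successor(120, NODES) == 32   # D120 wraps around to N32
assert successor(20, NODES) == 32    # D20 also lands on N32
```

Because only the arc between a node and its predecessor moves when membership changes, a join or leave relocates just the keys on that arc.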

  10. Chord Uses log(N) “Fingers”
  [Figure: circular 7-bit ID space; node N80’s fingers jump ½, ¼, 1/8, 1/16, 1/32, 1/64, and 1/128 of the way around the ring]
  • N80 knows of only seven other nodes.

  11. Chord Finger Table
  [Figure: ring with nodes N32, N40, N52, N60, N70, N79, N80, N85, N102, N113. N32’s finger table:
    33..33 → N40
    34..35 → N40
    36..39 → N40
    40..47 → N40
    48..63 → N52
    64..95 → N70
    96..31 → N102]
  • Node n’s i-th entry: first node ≥ n + 2^(i-1)
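
That rule can be checked directly. A sketch that rebuilds N32’s table from it, using the node IDs drawn on the ring:

```python
# Node n's i-th finger is successor(n + 2^(i-1)): the first node at or
# past that point on the ring. Node IDs are the ones drawn on the slide.
M = 7
RING = 2 ** M
NODES = sorted([32, 40, 52, 60, 70, 79, 80, 85, 102, 113])

def successor(key: int) -> int:
    key %= RING
    return next((n for n in NODES if n >= key), NODES[0])

def finger_table(n: int) -> list[tuple[int, int]]:
    """(start, node) pairs, where start = (n + 2^(i-1)) mod 2^m for i = 1..m."""
    starts = [(n + 2 ** i) % RING for i in range(M)]
    return [(s, successor(s)) for s in starts]

# Reproduces N32's table from the slide:
# starts 33, 34, 36, 40, 48, 64, 96 -> N40, N40, N40, N40, N52, N70, N102
for start, node in finger_table(32):
    print(f"{start:3d} -> N{node}")
```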

  12. Chord Lookup
  [Figure: same ring, with three finger tables shown.
    N32: 33..33 → N40, 34..35 → N40, 36..39 → N40, 40..47 → N40, 48..63 → N52, 64..95 → N70, 96..31 → N102
    N70: 71..71 → N79, 72..73 → N79, 74..77 → N79, 78..85 → N80, 86..101 → N102, 102..5 → N102, 6..69 → N32
    N80: 81..81 → N85, 82..83 → N85, 84..87 → N85, 88..95 → N102, 96..111 → N102, 112..15 → N113, 16..79 → N32]
  • Node 32, lookup(82): 32 → 70 → 80 → 85.
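
The whole lookup is “jump to the closest finger that precedes the key, then finish with the successor.” A sketch that reproduces the slide’s trace (N79 is left off the ring here so the route matches the slide exactly; with it present the route picks up one extra hop through 79):

```python
# Iterative Chord lookup by following closest-preceding fingers.
# Ring as on the slide, minus N79, so the printed route matches the
# slide's trace: lookup(82) from node 32 goes 32 -> 70 -> 80 -> 85.
M = 7
RING = 2 ** M
NODES = sorted([32, 40, 52, 60, 70, 80, 85, 102, 113])

def successor(key: int) -> int:
    key %= RING
    return next((n for n in NODES if n >= key), NODES[0])

def between(x: int, a: int, b: int) -> bool:
    """True if x lies in the half-open ring interval (a, b]."""
    return (a < x <= b) if a < b else (x > a or x <= b)

def lookup(start: int, key: int) -> list[int]:
    """Nodes visited while resolving key, starting at node `start`."""
    node, path = start, [start]
    while not between(key, node, successor(node + 1)):
        fingers = [successor(node + 2 ** i) for i in range(M)]
        # Closest preceding finger: the one farthest around the ring
        # that still lies strictly before the key.
        node = max((f for f in fingers if between(f, node, key) and f != key),
                   key=lambda f: (f - node) % RING)
        path.append(node)
    return path + [successor(node + 1)]

print(lookup(32, 82))  # [32, 70, 80, 85]
```

Each hop at least halves the remaining ID-space distance to the key, which is why lookups take O(log n) hops.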

  13. New Node Join Procedure
  [Figure: node N20 appears on the ring; its finger table lists the interval starts 21..21, 22..23, 24..27, 28..35, 36..51, 52..83, 84..19, with no entries filled in yet]

  14. New Node Join Procedure (2)
  [Figure: N20’s finger table is now filled in:
    21..21 → N32
    22..23 → N32
    24..27 → N32
    28..35 → N32
    36..51 → N40
    52..83 → N52
    84..19 → N102]
  • Node 20 asks any node for the successor of 21, 22, …, 52, 84.

  15. New Node Join Procedure (3)
  [Figure: same as before, with documents D114..20 now held by N20]
  • Node 20 moves documents from node 32.
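
Both join steps fit in a short sketch, using the ring as drawn on these slides:

```python
# Join sketch for the example above. Node 20 (1) asks any existing node
# for the successor of each finger start (20 + 2^(i-1)) mod 128, i.e.
# 21, 22, 24, 28, 36, 52, 84, and (2) takes over the documents it now
# owns -- IDs in (113, 20], i.e. D114..20, previously held by node 32.
M = 7
RING = 2 ** M
OLD_NODES = sorted([32, 40, 52, 60, 70, 80, 102, 113])  # ring before N20 joins

def successor(key: int, nodes: list[int]) -> int:
    key %= RING
    return next((n for n in nodes if n >= key), nodes[0])

def join(new_id: int, nodes: list[int]) -> None:
    starts = [(new_id + 2 ** i) % RING for i in range(M)]
    fingers = [(s, successor(s, nodes)) for s in starts]  # step 1: fill table
    old_owner = successor(new_id, nodes)                  # step 2: who has my keys?
    print("finger table:", fingers)
    print(f"move documents from node {old_owner}")

join(20, OLD_NODES)
# finger starts 21, 22, 24, 28, 36, 52, 84 -> N32, N32, N32, N32, N40, N52, N102
# move documents from node 32
```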

  16. Chord Properties
  • Log(n) lookup messages and table space.
    • Log2(1,000,000) ≈ 20
  • Well-defined location for each ID
    • No search required
  • Natural load balance
  • Minimal join/leave disruption
  • Does not store documents…

  17. File sharing with Chord
  [Figure: three hosts (server, client, server); on each, a client app (e.g. a browser) calls get(key) / put(k, v) on a key/value layer, which in turn calls lookup(id) on the Chord layer beneath it]
  • Fault tolerance: store values at r successors
  • Hot documents: cache values along the Chord lookup path
  • Authentication: self-certifying names
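
The point of the diagram is that the key/value layer sits entirely on top of lookup. A sketch of that layering with loudly hypothetical names: the store/fetch methods, and the assumption that lookup returns the r successor nodes, are inventions for illustration, not the real system’s API:

```python
# Key/value layer built only on Chord's lookup. Replicating each value
# at r successors gives fault tolerance, per the slide. All names here
# are illustrative assumptions.
import hashlib

def chord_id(key: bytes, bits: int = 160) -> int:
    """Hash a key onto the Chord ring (SHA-1, as Chord uses)."""
    return int.from_bytes(hashlib.sha1(key).digest(), "big") % (2 ** bits)

class KeyValueLayer:
    def __init__(self, chord, r: int = 3):
        self.chord = chord   # assumed: chord.lookup(id) -> the r successor nodes
        self.r = r           # replication factor

    def put(self, key: bytes, value: bytes) -> None:
        kid = chord_id(key)
        for node in self.chord.lookup(kid)[: self.r]:  # store at r successors
            node.store(kid, value)

    def get(self, key: bytes) -> bytes:
        kid = chord_id(key)
        for node in self.chord.lookup(kid)[: self.r]:  # any live replica works
            value = node.fetch(kid)
            if value is not None:
                return value
        raise KeyError(key)
```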

  18. Chord Status
  • Working Chord implementation
  • SFSRO file system layered on top
  • Prototype deployed at 12 sites around the world
  • Understand design tradeoffs

  19. Open Issues
  • Network proximity
  • Malicious data insertion
  • Malicious Chord table information
  • Anonymity
  • Keyword search and indexing

  20. Chord Summary
  • Chord provides distributed lookup
  • Efficient, low-impact join and leave
  • Flat key space allows flexible extensions
  • Good foundation for peer-to-peer systems
  http://www.pdos.lcs.mit.edu/chord
