
Introduction


Presentation Transcript


  1. Introduction Outline Statistical Multiplexing Network Architecture Performance Metrics (just a little) CS 332

  2. Our Journey • Peterson Text: • Chapter 1 • Chapters 4,5,6,8 (with Chapters 2,3,7 sprinkled in as needed) • Why? CS 332

  3. Building Blocks • Nodes: PCs, special-purpose hardware… • hosts • switches (a node connected to at least two links, running software that forwards data received on one link out on another) • Links: coax cable, optical fiber… • point-to-point • multiple access CS 332

  4. Switched Networks • A network can be defined recursively as... • two or more nodes connected by a link, or • two or more networks connected by two or more nodes • Nodes on the inside of the cloud are called switches; nodes on the outside are called hosts • A cloud can denote any type of network: point-to-point, multi-access, or switched • A node that sits between networks is called a router or gateway • Networks connected in this way form an internetwork CS 332

  5. Routing • Just because we have physical connectivity between all hosts, doesn’t mean we have provided host-to-host connectivity • Need to be able to specify hosts with which we wish to communicate • I.e. each host needs an address (note that the address is really only a name – it need not relate in any way to the physical location of the host) • Routers and switches between hosts need to use address to decide how to forward messages • Routing: process of determining systematically how to forward messages toward the destination node based on the destination node’s address CS 332

  6. Not Just One Destination… • Unicast – single source, single destination • Broadcast – single source, all destinations (well, sort of) • Multicast – single source, whole group of destinations • Anycast?! • Why would you want this? CS 332

  7. A Key Idea • We can define a network recursively as a network of networks. • At bottom layer, it is implemented by some physical medium • At higher layers it is a “logical” network • Key issue: how do we assign addresses at each layer in a manner which allows us to efficiently route messages? CS 332

  8. Strategies • Circuit switching: carry bit streams • original telephone network • Connection is established before any data sent • Packet switching: store-and-forward messages • Internet (Why?) • Send discrete blocks of data from node to node (called a packet or message) CS 332

  9. Virtual Circuit Switching [Figure: Host A reaches Host B through Switches 1, 2, and 3; each switch port and per-link VCI is labeled] • Explicit connection setup (and tear-down) phase • Subsequent packets follow the same circuit • Sometimes called the connection-oriented model • Analogy: phone call • Each switch maintains a VC table • Advantages? Disadvantages? CS 332
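For concreteness, here is a minimal sketch in C of what a per-switch VC table lookup might look like; it is not from the slides, and the struct layout and field names (in_port, in_vci, out_port, out_vci) are illustrative assumptions:

    #include <stddef.h>

    /* One VC table entry: packets arriving on (in_port, in_vci)
     * leave on out_port carrying out_vci. */
    struct vc_entry {
        int in_port, in_vci;
        int out_port, out_vci;
    };

    /* Look up the circuit state for an incoming packet; returns NULL
     * if no circuit has been set up for this (port, VCI) pair. */
    static const struct vc_entry *
    vc_lookup(const struct vc_entry *table, size_t n, int in_port, int in_vci)
    {
        for (size_t i = 0; i < n; i++)
            if (table[i].in_port == in_port && table[i].in_vci == in_vci)
                return &table[i];
        return NULL;   /* unknown circuit: drop or signal an error */
    }

The setup phase fills this table on every switch along the path; the per-packet work is then just this lookup plus rewriting the packet's VCI to out_vci before forwarding.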

  10. Datagram Switching [Figure: Hosts A through H attached to Switches 1, 2, and 3 by numbered ports] • No connection setup phase • Each packet forwarded independently • Sometimes called the connectionless model • Analogy: postal system • Each switch maintains a forwarding (routing) table CS 332
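By contrast, a datagram switch consults only the destination address. A minimal, hedged sketch in C (again not the slides' code; the string addresses and fwd_entry layout are assumptions for illustration):

    #include <stddef.h>
    #include <string.h>

    /* One forwarding table entry: packets addressed to dest go out on out_port. */
    struct fwd_entry {
        char dest[16];
        int  out_port;
    };

    /* Pick an output port from the destination address alone;
     * no per-connection state is needed or consulted. */
    static int forward(const struct fwd_entry *table, size_t n, const char *dest)
    {
        for (size_t i = 0; i < n; i++)
            if (strcmp(table[i].dest, dest) == 0)
                return table[i].out_port;
        return -1;   /* unknown destination: drop the packet */
    }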

  11. Multiplexing [Figure: flows L1–L3 multiplexed by Switch 1 onto one shared link, carried to Switch 2, and demultiplexed to R1–R3] • Time-Division Multiplexing (TDM) • Frequency-Division Multiplexing (FDM) • Note that in either technique, bandwidth can be wasted! CS 332

  12. Statistical Multiplexing • On-demand time-division (rather than in specific time slot) • Schedule link on a per-packet basis • Packets from different sources interleaved on link • Buffer packets that are contending for the link • Buffer (queue) overflow is called congestion Fairly allocating link capacity and dealing with congestion are key issues here! … CS 332
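To make the buffering point concrete, here is a minimal sketch in C, under assumed names (link_queue, QUEUE_CAP), of a drop-tail FIFO: packets contending for the link are queued, and an arrival to a full buffer is exactly the congestion case mentioned above:

    #include <stdbool.h>
    #include <stddef.h>

    #define QUEUE_CAP 64            /* illustrative buffer size */

    struct packet;                  /* opaque packet type */

    struct link_queue {
        struct packet *buf[QUEUE_CAP];
        size_t head, count;
    };

    /* Buffer a packet contending for the outgoing link.  Returns false
     * when the queue is full (congestion): this simple "drop tail"
     * policy just discards the arriving packet. */
    static bool enqueue(struct link_queue *q, struct packet *p)
    {
        if (q->count == QUEUE_CAP)
            return false;
        q->buf[(q->head + q->count) % QUEUE_CAP] = p;
        q->count++;
        return true;
    }

    /* The link scheduler dequeues packets one at a time, interleaving
     * packets from different sources on the shared link. */
    static struct packet *dequeue(struct link_queue *q)
    {
        if (q->count == 0)
            return NULL;
        struct packet *p = q->buf[q->head];
        q->head = (q->head + 1) % QUEUE_CAP;
        q->count--;
        return p;
    }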

  13. IPC Abstractions • Stream-Based • video: sequence of frames • video applications • on-demand video • video conferencing • Request/Reply • distributed file systems • digital libraries (web) CS 332

  14. Layering • Use abstractions to hide complexity • Abstractions naturally lead to layering • Alternative abstractions at each layer [Figure: application programs on top of request/reply and message stream channels, which sit on host-to-host connectivity, which sits on hardware] CS 332

  15. Protocols • Building blocks of a network architecture • Each protocol object has two different interfaces • service interface: operations on this protocol • peer-to-peer interface: messages exchanged with peer • Term “protocol” is overloaded • specification of peer-to-peer interface • module that implements this interface CS 332

  16. Interfaces [Figure: on Host 1 and Host 2, a high-level object uses its local protocol through the service interface; the two protocol instances talk to each other across the peer-to-peer interface] CS 332

  17. Protocol Machinery • Protocol graph • most peer-to-peer communication is indirect • peer-to-peer is direct only at the hardware level [Figure: protocol graph on Hosts 1 and 2 — file, digital library, and video applications sit on RRP or MSP, which both sit on HHP] • RRP: request/reply protocol • MSP: message stream protocol

  18. Machinery (cont) • Multiplexing and demultiplexing (demux key) • Encapsulation (header/body) [Figure: an application program's Data on Host 1 is wrapped with an RRP header, then with an HHP header, transmitted, and unwrapped layer by layer on Host 2] CS 332
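A minimal sketch of encapsulation in C (assumed names rrp_header and rrp_encapsulate; this is an illustration, not the slides' code): each layer prepends a header, and the demux key in that header tells the receiving peer which higher-level protocol or application the body belongs to:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical RRP header: the demux key identifies which
     * application on the receiving host should get the body. */
    struct rrp_header {
        uint16_t demux_key;
        uint16_t body_len;
    };

    /* Encapsulate: build a new buffer with the header prepended to the
     * body (copied here for simplicity; real stacks avoid such copies). */
    static uint8_t *rrp_encapsulate(uint16_t key, const uint8_t *body,
                                    uint16_t len, size_t *out_len)
    {
        struct rrp_header hdr = { .demux_key = key, .body_len = len };
        uint8_t *msg = malloc(sizeof hdr + len);
        if (msg == NULL)
            return NULL;
        memcpy(msg, &hdr, sizeof hdr);
        memcpy(msg + sizeof hdr, body, len);
        *out_len = sizeof hdr + len;
        return msg;
    }

On the receiving host the process runs in reverse: HHP strips its header, then RRP reads demux_key to decide which application program receives the body.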

  19. Internet Architecture [Figure: hourglass protocol graph — FTP, HTTP, NV, and TFTP over TCP and UDP, over IP, over networks NET1, NET2, … NETn] • Defined by the Internet Engineering Task Force (IETF) • Hourglass design • Application vs. application protocol (FTP, HTTP) CS 332

  20. ISO Architecture [Figure: the seven OSI layers (Application, Presentation, Session, Transport, Network, Data link, Physical) on each end host, with one or more nodes inside the network implementing only the Network, Data link, and Physical layers] • Note the transport layer is “end-to-end” CS 332

  21. Performance Metrics • Bandwidth (throughput): “width” of the pipe • data transmitted per time unit • link versus end-to-end • notation • KB = 2^10 bytes • Mbps = 10^6 bits per second • Latency (delay): “length” of the pipe • time to send a message from point A to point B • OR time for a bit to travel from point A to point B • also “link latency” • one-way versus round-trip time (RTT) • components: Latency = Propagation + Transmit + Queue, where Propagation = Distance / c and Transmit = Size / Bandwidth CS 332
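As a worked example with assumed numbers (not from the slides): sending a 1 KB message over a 1 Mbps link that spans 3,000 km, ignoring queueing delay:

    Propagation = Distance / c = 3,000 km / (3 x 10^8 m/s) = 10 ms
    Transmit    = Size / Bandwidth = 8,192 bits / 10^6 bits per second ≈ 8.2 ms
    Latency     ≈ 10 ms + 8.2 ms ≈ 18.2 ms   (Queue assumed to be zero)

With these numbers the two terms are comparable; a much larger message (or a faster link) would shift which term dominates.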

  22. Bandwidth versus Latency • Relative importance • Infinite bandwidth CS 332

  23. Protocol Implementation Issues • Which process model? • Process-per-protocol model • Each protocol implemented by separate process (thread) • Sometimes logically “cleaner” – one protocol, one process • Context switch required at each level of protocol graph • Process-per-message model • Each message handled by a single process with each protocol a separate procedure that invokes the subsequent protocol • Order of magnitude more efficient (procedure call much more efficient than context switch) CS 332

  24. Protocol Implementation Issues • Service Interface relationship with process model • If high-level protocol invokes send() on lower level protocol, it has message in hand so no big deal • If high-level protocol invokes receive() on lower level protocol, it must wait for receipt of message, which basically forces a context switch. • No big deal if app directly calls network subsystem, but big deal if it occurs at each layer of protocol stack • Cure: low level protocol does an upcall (a procedure call up the stack) to deliver message to higher level CS 332
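A minimal sketch in C of the upcall pattern (assumed names upcall_fn, lower_proto, lower_on_packet; illustrative only): the higher-level protocol registers a handler, and the lower level delivers an arriving message with an ordinary procedure call up the stack instead of forcing the higher level to block in receive():

    #include <stddef.h>
    #include <stdint.h>

    /* Handler registered by the higher-level protocol; the lower level
     * invokes it ("upcall") when a message arrives. */
    typedef void (*upcall_fn)(void *ctx, const uint8_t *msg, size_t len);

    struct lower_proto {
        upcall_fn deliver;   /* set by the layer above */
        void     *ctx;       /* the higher layer's state */
    };

    /* Called from the lower-level protocol's receive path. */
    static void lower_on_packet(struct lower_proto *p,
                                const uint8_t *msg, size_t len)
    {
        if (p->deliver != NULL)
            p->deliver(p->ctx, msg, len);   /* plain procedure call upward */
    }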

  25. Protocol Implementation Issues • Message Buffers • In the socket API, the application process provides buffers for both outbound and incoming messages. This forces the topmost protocol to copy messages to/from network buffers. • Copying data from one buffer to another is one of the most expensive operations a protocol implementation can perform. • Memory is not getting faster as quickly as processors are • Solution: rather than copying from buffer to buffer at each layer of the protocol stack, the network subsystem defines a message abstraction shared by all protocols in the protocol graph. • Operating on a message can then be viewed as string manipulation with pointers • Note: you can’t move data any faster than the slowest copy operation! CS 332
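One way such a shared message abstraction avoids copies is to reserve headroom in front of the data, so a layer can "prepend" its header by moving a pointer. A minimal, hedged sketch in C (the msg layout and msg_push_header name are assumptions for illustration):

    #include <stddef.h>
    #include <stdint.h>

    /* A message shared by every protocol in the graph: one buffer with
     * reserved headroom, so prepending a header never copies the body. */
    struct msg {
        uint8_t *buf;     /* start of the underlying buffer           */
        uint8_t *data;    /* start of the current message contents    */
        size_t   len;     /* bytes of message currently in the buffer */
    };

    /* "Prepend" hdr_len bytes of header by sliding the data pointer
     * back; returns a pointer to the new header area, or NULL if the
     * reserved headroom has run out. */
    static uint8_t *msg_push_header(struct msg *m, size_t hdr_len)
    {
        if ((size_t)(m->data - m->buf) < hdr_len)
            return NULL;
        m->data -= hdr_len;
        m->len  += hdr_len;
        return m->data;      /* caller writes its header here */
    }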

  26. Implementation • Are you using streams or request/reply, and what are the ramifications? • What operating system are you coding on/for and where is the sockets library? • What languages can you use, in theory? • Why would you wish to use specific languages? • What will you have to do in your first assignment? CS 332

  27. Implementation • Port numbers • Solaris: reserved (513-1023), ephemeral (32768-65535) • BSD: reserved (1-1023), ephemeral (1024-5000), nonprivileged servers (5001-65535) • IANA: well known (1-1023), registered (1024-49151), dynamic (49152-65535) • Endian issues • Compiler flags • Solaris: -lsocket -lnsl • Linux: no flags required • Specifying command line arguments • Null characters in strings? CS 332
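To illustrate the endian issue with a standard sockets idiom (this specific snippet is not from the slides): multi-byte values such as port numbers must be converted to network byte order with htons() before being stored in a socket address structure, and converted back with ntohs() when read:

    #include <arpa/inet.h>     /* htons, ntohs */
    #include <netinet/in.h>    /* struct sockaddr_in */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;

        /* 8080 is just an example nonprivileged port.  htons() converts
         * host byte order to network (big-endian) order; on a
         * little-endian machine the two representations differ. */
        addr.sin_port = htons(8080);

        printf("host order: %u, network order: %u\n",
               8080u, (unsigned)addr.sin_port);
        printf("round trip back to host order: %u\n",
               (unsigned)ntohs(addr.sin_port));
        return 0;
    }

On Solaris, programs that actually open sockets are linked with the -lsocket -lnsl flags listed above; on Linux no extra flags are needed.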
