
Computer Networks: An Open Source Approach


Presentation Transcript


  1. Computer Networks: An Open Source Approach Chapter 1: Fundamentals

  2. Content 1.1 Requirements for computer networking 1.2 Underlying principles 1.3 The Internet architecture 1.4 Open source implementations 1.5 Book roadmap: a packet’s life 1.6 Summary

  3. 1.1 Requirements for Computer Networking • Definition of a computer network • A shared platform through which a large number of users and applications communicate with each other • Three requirements for data communications • Connectivity: who and how to connect? • Scalability: how many to connect? • Resource sharing: how to utilize the connectivity? • Packet switching in datacom • Circuit switching in telecom

  4. 1.1.1 Connectivity: Node, Link, Path • Another definition of a computer network (connectivity version) • A connected platform constructed from a set of nodes and links, where any two nodes can reach each other through a path consisting of a sequence of nodes and links

  5. Node: Host or Intermediary • Host • End-point where users or applications reside • Mainframe, workstation, desktop, hand-held, set-top-box, etc. • Act as client or server, or both • Intermediary • Device to interconnect hosts • Hub, switch, router, gateway, etc. • Wire-speed processing is a goal • Embedded system with special ICs for speedup or cost reduction

  6. Link: Point-to-Point or Broadcast • Point-to-point: connects exactly two nodes with one on each end • Nodes transmit as they wish if it is a full-duplex link (simultaneous bidirectional) • Nodes take turns to transmit if it is a half-duplex link (one-at-a-time bidirectional) • Nodes utilize two links to transmit, one for each direction, if it is a simplex link (unidirectional communication only) • Usually WANs (Wide Area Network)

  7. Broadcast: connects more than two attached nodes • Nodes attached to a broadcast link need to contend for the right to transmit (multiple access) • Usually LANs (Local Area Network) • This is because the multiple access methods used in broadcast links are usually more efficient over short distances

  8. Wired or Wireless • Wired • Twisted pair • Two copper lines twisted together for better immunity to noise • Widely used as the access lines in the plain old telephone system (POTS) and in LANs such as Ethernet • A Category-5 (Cat-5) twisted pair, with a thicker gauge than the twisted pair for in-home POTS wiring, can carry 10 Mbps over a distance of several kilometers, or 1 Gbps or higher over 100 meters or so

  9. Coaxial cable • A thicker outer copper conductor separated from a thinner nested copper wire by plastic insulation • Suitable for long-haul transmissions such as cable TV distribution of over 100 6-MHz TV channels across an area spanning 40 km • Through cable modems, some channels can each be digitized at the rate of 30 Mbps for data, voice, or video services • Fiber optics • Has large capacity and can carry signals over much longer distances • Fiber optic cables are used mostly for backbone networks (Gbps to Tbps) and sometimes for local networks (100 Mbps to 10 Gbps)

  10. Wireless • Radio (10^4~10^8 Hz), microwave (10^8~10^11 Hz), infrared (10^11~10^14 Hz), and beyond (ultra-violet, X ray, Gamma ray), in increasing order of transmission frequency • A low-frequency (below several GHz) wireless link is usually a broadcast one, which is omnidirectional • A high-frequency (over tens of GHz) wireless link could be point-to-point, which is more directional

  11. Wireless data communication systems (operating within 800 MHz to 2 GHz microwave spectrum) • Wireless LANs (54 Mbps to 600 Mbps data transfer rate within a 100-m radius) • General Packet Radio Service (GPRS) (128 kbps within a few km) • 3G (3rd Generation, 384 kbps to several Mbps within a few km) • Bluetooth (several Mbps within 10 m)

  12. Popular Wired and Wireless Link Technologies • Table 1.1 Popular wired and wireless link technologies

  13. Theoretical 802.11ac transmission rates (in Mbps) for a single spatial stream under different bandwidths and modulations • With the widest 160 MHz bandwidth, the best modulation (256-QAM), and 8 spatial streams, 802.11ac can reach a theoretical rate of up to 6.93 Gbps (= 8 x 866.7 Mbps) • GI: Guard Interval (guard time between transmitted symbols)
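The per-stream and aggregate figures quoted above can be checked with a short, hedged calculation. The constants below (468 data subcarriers in a 160 MHz channel, 8 coded bits per subcarrier for 256-QAM, rate-5/6 coding, and a 3.6-µs OFDM symbol with short GI) are drawn from the 802.11ac specification rather than from this slide, so treat the sketch as illustrative only.

```python
# Back-of-the-envelope 802.11ac PHY rate.  Assumed VHT parameters:
# 468 data subcarriers at 160 MHz, 256-QAM (8 coded bits/subcarrier),
# rate-5/6 coding, 3.6 us OFDM symbol with short guard interval.
DATA_SUBCARRIERS = 468
BITS_PER_SUBCARRIER = 8
CODING_RATE = 5 / 6
SYMBOL_TIME = 3.6e-6  # seconds

def phy_rate_bps(streams: int) -> float:
    """Theoretical PHY rate in bits per second for a number of spatial streams."""
    per_stream = DATA_SUBCARRIERS * BITS_PER_SUBCARRIER * CODING_RATE / SYMBOL_TIME
    return streams * per_stream

print(f"{phy_rate_bps(1) / 1e6:.1f} Mbps")  # ~866.7 Mbps per stream
print(f"{phy_rate_bps(8) / 1e9:.2f} Gbps")  # ~6.93 Gbps with 8 streams
```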

  14. Path: Routed or Switched? • Any attempt to connect two remote nodes must first find a path, a sequence of concatenated intermediate links and nodes, between them • A path can be either routed or switched

  15. Routed Path • When node A wants to send messages to node B, the messages are routed if they are transferred through non-pre-established and independently selected, perhaps different, paths • In routing, the destination address of the message is matched against a “routing” table to find the output link toward the destination • This matching process usually requires several table-lookup operations, each of which costs one memory access and one address comparison • A routed path is a stateless or connectionless concatenation of intermediate links and nodes • The Internet is stateless and connectionless

  16. Switched path • A switched path requires the intermediate nodes to establish the path and record the state information of this path in a “switching” table before a message can be sent • Messages to be sent are then attached with an index number which points to some specific state information stored in the “switching” table • Switching a message then becomes simple indexing into the table with just one memory access • Switching is much faster than routing, but at the cost of setup overhead • A switched path is a stateful or connection-oriented concatenation of intermediate links and nodes
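To make the contrast concrete, here is a minimal sketch of a routed lookup (the destination is matched against a routing table, in general with several comparisons) versus a switched lookup (a single index into a pre-established switching table). The table contents and names are invented for illustration; real routers use longest-prefix matching on binary addresses.

```python
# Routed: the destination is searched for in a routing table; this
# generally costs several comparisons (here a simple linear scan).
routing_table = [
    ("10.0.1.0/24", "link-1"),
    ("10.0.2.0/24", "link-2"),
    ("0.0.0.0/0",   "link-3"),   # default route, matches anything
]

def route(dst_prefix: str) -> str:
    for prefix, out_link in routing_table:     # several lookups/comparisons
        if prefix == dst_prefix or prefix == "0.0.0.0/0":
            return out_link
    raise KeyError(dst_prefix)

# Switched: connection setup has already stored per-connection state,
# so forwarding is one indexed access using the label carried in the packet.
switching_table = ["link-1", "link-3", "link-2"]   # indexed by connection id

def switch(connection_id: int) -> str:
    return switching_table[connection_id]          # a single memory access

print(route("10.0.2.0/24"))   # link-2, after scanning the table
print(switch(1))              # link-3, with one index operation
```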

  17. ATM (Asynchronous Transfer Mode) • Has all its connections switched • Before the data begins to flow, a connection along a path between the source and the destination has to be established and memorized at all the intermediate nodes on the path • POTS (Plain Old Telephone Service) has all telephone calls switched

  18. 1.1.2 Scalability: Number of Nodes • Another definition of a computer network (scalability version) • Being able to connect 10 nodes is totally different from being able to connect millions of nodes • What works for a small group does not necessarily work for a huge group, so we need a scalable method to achieve the connectivity • A computer network, from the aspect of scalability, must offer “a scalable platform to a large number of nodes so that each node knows how to reach any other node”

  19. Hierarchy of Nodes • A recursive clustering method creates a manageable tree-like hierarchical structure • Group • Each consists of a small number of nodes (say 256) • Supergroup • If there are many groups, we can further cluster these groups into a number of supergroups (256 x 256 = 65,536 nodes each) • Super-supergroup • If there are many supergroups, they can be further clustered into a super-supergroup (65,536 x 65,536 = 4,294,967,296 nodes) • Figure 1.1 Hierarchy of nodes: grouping of billions of nodes in a 3-level hierarchy
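The numbers in Figure 1.1 can be reproduced with a small sketch: 256 nodes per group, 256 groups per supergroup (65,536 nodes), and 65,536 supergroups in the super-supergroup (4,294,967,296 nodes). Splitting a flat 32-bit node identifier into (supergroup, group, node) fields, as below, is an illustrative stand-in for hierarchical addressing; the field layout is an assumption, not something defined by the figure.

```python
# 3-level hierarchy of Figure 1.1: 256 nodes/group, 256 groups/supergroup,
# 65,536 supergroups overall.  A 32-bit flat id splits into three fields.
print(256 * 256)        # 65,536 nodes per supergroup
print(65_536 * 65_536)  # 4,294,967,296 nodes in the super-supergroup

def split_node_id(node_id: int) -> tuple:
    supergroup = (node_id >> 16) & 0xFFFF  # which of the 65,536 supergroups
    group = (node_id >> 8) & 0xFF          # which of its 256 groups
    node = node_id & 0xFF                  # which of the group's 256 nodes
    return supergroup, group, node

print(split_node_id(0x0A0B0C0D))  # (2571, 12, 13)
```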

  20. LAN, MAN, WAN • LAN: Local Area Network • It would be natural to form a bottom-level group with the nodes that reside within a small geographical area, say of several square kilometers • The network that connects such a small bottom-level group is called a local area network (LAN) • For a group of size 256, it would require at least 256 point-to-point links (for a ring-shaped network) and at most 32,640 (256 x 255 / 2) point-to-point links (for a fully connected mesh) to establish the connectivity

  21. Since it would be tedious to manage this many links in a small area, broadcast links thus come to play the dominant role here • By attaching all 256 nodes to a single broadcast link (with a bus, ring, or star topology), we can easily achieve and manage their connectivity
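The link counts quoted for the 256-node group above can be verified with one line each: a ring needs one link per node, while a fully connected mesh needs n(n-1)/2 links.

```python
# Point-to-point links needed to connect a 256-node group.
n = 256
print(n)                 # 256 links for a ring (one per node)
print(n * (n - 1) // 2)  # 32,640 links for a fully connected mesh
```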

  22. MAN: Metropolitan Area Network • The application of a single broadcast link can be extended to a geographically larger network, say metropolitan area network (MAN), to connect remote nodes or even LANs • MANs usually have a ring topology so as to construct dual buses for fault tolerance to a link failure

  23. WAN: Wide Area Network • The broadcast ring arrangement puts limitations on the degree of fault tolerance and on the number of nodes or LANs a network can support • Point-to-point links fit in naturally for unlimited, wide area connectivity • A wide area network (WAN) usually has a mesh topology due to the randomness in the locations of geographically dispersed network sites • A tree topology is inefficient in WAN’s case because, in a tree network, all traffic has to ascend toward the root and at some branch descend to the destination node • If the traffic volume between two leaf nodes is huge, a tree network might need an additional point-to-point link to connect them directly, which then creates a loop in the topology and turns the tree into a mesh

  24. An internetwork made of two LANs and one WAN

  25. A heterogeneous network made of WANs and LANs

  26. 1.1.3 Resource Sharing • Yet another definition of a computer network (resource sharing version) • A shared platform where the capacities of nodes and links are used to carry communication messages between nodes • How to share? • Store-and-forward • Buffer space at nodes can absorb most congestion caused by temporary data bursts • Forward data messages along the path toward their destination • Packetization: header information attached to the messages to form packets • Queuing: a network of queues • At a node: queuing/buffering and processing • At a link: queuing/buffering, transmission, propagation

  27. Packet Switching vs. Circuit Switching • Packet switching • Messages in data traffic are chopped into packets or datagrams, stored at the buffer queue of each intermediate node on the path, and forwarded along the path toward their destination • This mode of store-and-forward resource sharing is also called datagram switching

  28. Circuit switching • Provides stable resource supplies and thus can sustain quality in a continuous data stream such as video or audio signals • Not suitable for data communications, where interactive or file-transfer applications generate bursty traffic

  29. Packetization • To send out a message, some header information must be attached to the message to form a packet so that the network knows how to handle it • The message itself is then called the payload of the packet • The header information usually contains the source and destination addresses and many other fields to control the packet delivery process

  30. How large can packets and payloads be? • It depends on the underlying link technologies • A link has its limit on the length of the packets that a sending node can transmit over the link • The packet header tells the intermediate nodes and the destination node how to deliver and how to reassemble the packets • With the header, each packet can be processed either totally independently or semi-independently when traversing through the network

  31. Packetizing a Message • Decomposing a message into packets with added headers • Figure 1.2 Packetization: fragment a message into packets with added headers
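A minimal packetization sketch in the spirit of Figure 1.2: the message is chopped into payloads no larger than the link's limit and a small header is prepended to each. The header layout (16-bit source, destination, and sequence number) and the 1000-byte payload limit are invented for illustration; real headers carry many more fields.

```python
import struct

MAX_PAYLOAD = 1000  # bytes; in practice set by the underlying link technology

def packetize(message: bytes, src: int, dst: int) -> list:
    """Fragment a message into packets, each with an illustrative 6-byte header."""
    packets = []
    for seq, offset in enumerate(range(0, len(message), MAX_PAYLOAD)):
        payload = message[offset:offset + MAX_PAYLOAD]
        header = struct.pack("!HHH", src, dst, seq)  # source, destination, sequence
        packets.append(header + payload)
    return packets

pkts = packetize(b"x" * 2500, src=1, dst=2)
print(len(pkts), [len(p) for p in pkts])  # 3 packets: 1006, 1006, 506 bytes
```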

  32. Queuing at a Node and a Link • When a packet arrives at a node, it joins a buffer queue with other packet arrivals, waiting to be processed by the processor in the node • Once the packet moves to the front of the queue, it gets served by the processor, which figures out how to process the packet according to the header fields

  33. If the node processor decides to forward it to another data-transfer port, the packet then joins another buffer queue, waiting to be transmitted by the transmitter of that port • When a packet is being transmitted over a link, it takes some time to propagate the packet’s data from one side to the other side of the link, be it point-to-point or broadcast • If the packet traverses a path with 10 nodes and hence 10 links, this process is repeated 10 times

  34. Queuing at a Node and a Link • Figure 1.3 Queuing at a node (packets queue in a buffer before the processor) and at a link (packets queue in a buffer before the transmitter, then propagate over the link)
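Putting the pieces together, the end-to-end latency over a path is roughly the sum, over every hop, of the queuing/processing delay at the node plus the transmission and propagation delays on the link. The sketch below adds these up for the 10-hop path mentioned above; all numbers are made-up examples, not figures from the book.

```python
# Illustrative end-to-end delay over a 10-hop path (assumed numbers).
PACKET_BITS = 12_000      # a 1500-byte packet
LINK_BANDWIDTH = 100e6    # 100 Mbps per link
LINK_LENGTH = 100e3       # 100 km per link
PROP_SPEED = 2e8          # m/s, speed of light in the medium
NODE_DELAY = 50e-6        # assumed queuing + processing delay per node

per_hop = (NODE_DELAY
           + PACKET_BITS / LINK_BANDWIDTH   # transmission time: 120 us
           + LINK_LENGTH / PROP_SPEED)      # propagation time: 500 us
print(f"{10 * per_hop * 1e3:.2f} ms end to end")  # 6.70 ms
```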

  35. Principle in Action: Datacom vs. Telecom • Datacom • Data communications or computer networking • Telecom • Telecommunications • Datacom vs. Telecom • Supported applications • Multiple vs. single • Way to share resources • Packet switching vs. circuit switching • Performance issues • Buffer vs. buffer-less • Throughput / latency / jitter / loss vs. blocking / dropping

  36. 1.2 Underlying Principles • Categories of principles • Performance • Governs the quality of services of packet switching • Operations • Details the types of mechanisms needed for packet handling • Interoperability • Defines what should be put into standard protocols and algorithms, and what should not

  37. Performance measures • Bandwidth (hardware capacity) • Offered load (input traffic) • Throughput (the output traffic as compared to the offered load of input traffic) • Latency (delay) • Jitter (latency variation) • Packet loss (due to congestion or error) • Operations at control plane • Routing • Traffic and bandwidth allocation

  38. Operations at data plane • Forwarding • Congestion control • Error control • Quality of services (QoS) • Interoperability • Standard protocols and algorithms • Implementation-dependent • Unlike a protocol specification, there exists much flexibility in a protocol implementation • Not every part of the algorithms at the control and data planes needs to be standardized

  39. 1.2.1 Performance Measures • Performance results of a system come either from mathematical analysis or system simulations before the real system is implemented, or from experiments on a test bed after the system has been implemented • How a system performs, as perceived by a user, depends on three things • The hardware capacity of the system • The offered load or input traffic to this system • The internal mechanisms or algorithms built into this system to handle the offered load

  40. The hardware capacity of the system • The hardware capacity is often called bandwidth • The referred hardware can be a node, link, path, or even a network as a whole • The offered load or input traffic to this system • The offered load of a system may vary from light load, through normal operational load, to extremely heavy load (say wire-speed stress load)

  41. The internal mechanisms or algorithms built into this system to handle the offered load • There should be a close match between bandwidth and offered load if the system is to stay in stable operation • For packet switching, throughput (the output traffic as compared to the offered load of input traffic) appears to be the performance measure that concerns us most, though other measures such as latency (often called delay), latency variation (often called jitter), and loss are also important

  42. Transmission Time and “Length” of a Bit • Bandwidth • The maximum amount of data that can be handled by a system in a second • The number of bits transmitted, and contained in the distance propagated by the signal, in one second

  43. Example • Since the speed of light in a medium is fixed at around 2 × 10^8 m/sec, higher bandwidth means more bits contained in 2 × 10^8 m • For a transcontinental link of 6000 miles (9600 km) with a bandwidth of 10 Gbps • Propagation delay = 9600 km / (2 × 10^8 m/sec) = 48 ms • Maximum number of bits contained in the link = [9600 km / (2 × 10^8 m/sec)] × 10 Gbps = 480 Mbits
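These two numbers drop out directly from the definitions: the propagation delay is the link length divided by the signal speed, and the bits in flight (the bandwidth-delay product) is that delay multiplied by the bandwidth.

```python
# 9600 km transcontinental link at 10 Gbps (values from the example above).
link_length = 9600e3  # meters
speed = 2e8           # m/s, speed of light in the medium
bandwidth = 10e9      # bits per second

prop_delay = link_length / speed
bits_in_flight = prop_delay * bandwidth
print(f"{prop_delay * 1e3:.0f} ms")         # 48 ms
print(f"{bits_in_flight / 1e6:.0f} Mbits")  # 480 Mbits
```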

  44. The “width” of a transmitted bit propagating on a link varies according to the link bandwidth • Example: the bit width in a 10-Mbps link is • 1/(10 × 10^6) = 0.1 μs in time, or • 0.1 μs × 2 × 10^8 m/sec = 20 m in length • The signal wave of one bit actually occupies 20 meters in the link • Figure 1.4 Bit width in time (0.1 μs) and length (20 m) for a 10-Mbps link, where the transmitted data are encoded by the widely used Manchester code and the speed of light in the medium is 2 × 10^8 m/sec
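The bit width figures follow the same pattern: one bit lasts the reciprocal of the bandwidth in time, and that duration times the propagation speed gives its length on the wire.

```python
# Bit width in time and in length for a 10-Mbps link (Figure 1.4).
bandwidth = 10e6  # bits per second
speed = 2e8       # m/s in the medium

bit_time = 1 / bandwidth       # 1e-7 s = 0.1 microseconds
bit_length = bit_time * speed  # 20 meters
print(f"{bit_time * 1e6:.1f} us, {bit_length:.0f} m")
```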

  45. The offered load or input traffic can be normalized with respect to the bandwidth and used to indicate the utilization, or how busy the system is • For a 10-Mbps link, an offered load of 5 Mbps means a normalized load of 0.5, meaning the link would be 50% busy on average • It is possible for the normalized load to exceed 1, though it would put the system in an unstable state

  46. The throughput or output traffic may or may not be the same as the offered load, as shown in Figure 1.5 • Ideally, they should be the same before the offered load reaches the bandwidth (see curve A); beyond that, the throughput converges to the bandwidth • In reality, the throughput might be lower than the offered load (see curve B) due to buffer overflow (in a node or link) or collisions (in a broadcast link) even before the offered load reaches the bandwidth • In links with uncontrolled collisions, the throughput may drop down to zero as the offered load continues to increase (see curve C) • Figure 1.5 Bandwidth, offered load, and throughput

  47. Performance Measures: Latency in a Node • Latency (delay) in a node = queuing + processing • In queuing theory • If both packet inter-arrival time and packet service time are exponentially distributed, the mean of the former is larger than that of the latter, and the buffer size is infinite, the mean latency is the inverse of the difference between the bandwidth (service rate) and the offered load (arrival rate), both expressed in packets per second • Mean latency = 1 / (bandwidth – offered load)
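As a hedged numerical illustration of the formula (this is the standard M/M/1 result, with the service rate standing in for "bandwidth" and the arrival rate for "offered load", both in packets per second; the rates below are invented):

```python
# M/M/1 mean latency = 1 / (service rate - arrival rate), rates in packets/s.
service_rate = 1000.0  # packets/s the node can process ("bandwidth")
offered_load = 800.0   # packets/s arriving ("offered load")

assert offered_load < service_rate, "the queue is only stable when load < bandwidth"
mean_latency = 1.0 / (service_rate - offered_load)
print(f"{mean_latency * 1e3:.1f} ms")  # 5.0 ms
```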

  48. Performance Measures: Latency in a Node • Little’s Result: How many packets in the box? • If the throughput equals the offered load, which means no loss, the mean occupancy (the mean number of packets in the node) equals the mean throughput multiplied by the mean latency • Occupancy = throughput x latency (assuming no loss) • Figure 1.6 Little’s result: with a throughput of 1 packet/sec and a mean latency of 5 secs, the mean occupancy is 5 packets
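With the numbers shown in Figure 1.6, Little's result gives the occupancy directly:

```python
# Little's result: mean occupancy = throughput x mean latency (no loss).
throughput = 1.0    # packets/s, equal to the offered load
mean_latency = 5.0  # seconds spent in the node on average
print(throughput * mean_latency)  # 5.0 packets in the node
```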
