Wireless Medium Access Control Nitin Vaidya Department of Computer Science Texas A&M University vaidya@cs.tamu.edu
Acknowledgements • Joint work with Victor Bahl, Anurag Dugar, Seema Gupta, Young-Bae Ko
Outline • Introduction • MAC protocols • Fair scheduling MAC • Directional MAC
Wireless Local Area Network • Similar to a wired LAN, but no wires • Nodes communicate over a broadcast wireless channel [Figure: nodes A, B, C, D sharing one wireless channel]
Mobile Ad Hoc Network • Logical next step • Multi-hop wireless • Nodes may be mobile
Mobile Ad Hoc Network • Network formed by wireless mobile hosts without necessarily using an infrastructure [Figure: multi-hop topology of nodes A through F]
Many Applications • Emergency operations • search-and-rescue • policing and firefighting • Extending the reach of infrastructure • Personal area networking • cell phone, laptop, earphone, wrist watch • Military environments • soldiers, tanks, planes • Civilian environments • mobile robots • meeting rooms • sports stadiums • boats, small aircraft
Medium Access Control: Our Research • Distributed fair scheduling • MAC for directional antennas • Power-aware MAC • Adaptive MAC
Our Research on Medium Access Control (MAC) • Self-imposed constraint: Maintain compatibility / resemblance to a standard • Much of our work focuses on IEEE 802.11 • Some work related to HiperLAN
Distributed Fair MAC • Distributed • Fair
Centralized Medium Access Control Protocols • Base station coordinates access to the wireless channel [Figure: base station serving nodes 1 through n]
Distributed MAC Protocols • All nodes have identical responsibilities • We consider a LAN (single-hop network) here [Figure: nodes 1 through n sharing a wireless LAN]
Disadvantages of Centralized Approach • If a node cannot talk to the base station, it cannot transmit to any other nodes • Base station needs to keep track of state of other nodes • Hard to use failure-prone nodes as coordinators in centralized protocols
Weighted Fairness • Packets to be transmitted belong to several flows • Each flow is assigned a weight • Bandwidth assigned to each backlogged flow is proportional to its weight
Example: Three flows with weights 2, 1, 1 [Figure: bandwidth shares when all flows are backlogged vs. when only two flows are backlogged]
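To make the proportional-share arithmetic in this example concrete, here is a minimal sketch (plain Python, mine, not from the talk) that computes each backlogged flow's share of the link:

```python
def fair_shares(weights, backlogged):
    """Split the link among backlogged flows in proportion to their weights."""
    total = sum(w for w, b in zip(weights, backlogged) if b)
    return [w / total if b else 0.0 for w, b in zip(weights, backlogged)]

weights = [2, 1, 1]
print(fair_shares(weights, [True, True, True]))   # [0.5, 0.25, 0.25]
print(fair_shares(weights, [True, True, False]))  # first two flows split 2:1
```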
Why Weighted Fairness? • Distribute bandwidth “uniformly” in proportion to weights • AAAABBBB… versus ABABABAB… • Can administratively control bandwidth sharing • Possible to bound end-to-end delay for leaky-bucket-constrained traffic [Parekh] (under ideal conditions) • Useful for diffserv
Fair Queueing • Many centralized fair queueing protocols exist • WFQ, WF2Q, SCFQ, SFQ, … • Scheduler needs to know state of all flows [Figure: flows 1 through n feeding a single output link]
Our Objectives • Fully distributed fair scheduling protocol • Should not have to explicitly exchange state information
Proposed Approach Combination of • IEEE 802.11 Distributed Coordination Function (DCF) • A centralized fair queueing protocol
IEEE 802.11 Distributed Coordination Function CSMA / CA • Carrier Sense Multiple Access • Collision Avoidance
IEEE 802.11 (CSMA/CA) • Choose a backoff interval in [0, cw] (cw = contention window) • Count down backoff interval only when medium is idle • When counter reaches 0, transmit
802.11 DCF Example • B1 and B2 are backoff intervals at nodes 1 and 2 • cw = 31 [Figure: timeline; the node with the smaller residual backoff transmits first while the other freezes its counter and resumes later (B1 = 25, 5; B2 = 10, 20, 15)]
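The countdown rule above can be sketched in a few lines (illustrative Python, mine; medium_idle and the slot loop stand in for the real carrier-sensing machinery):

```python
import random

def contend(cw, medium_idle):
    """One 802.11-style contention round: pick a backoff in [0, cw],
    count it down only in slots where the medium is sensed idle,
    and transmit once the counter reaches 0.
    medium_idle: callable returning True when the channel is idle (assumed)."""
    backoff = random.randint(0, cw)
    while backoff > 0:
        if medium_idle():    # counter freezes while the medium is busy
            backoff -= 1
    return "transmit"        # counter hit 0: this node's turn to send

# e.g. contend(31, lambda: True) always counts down without freezing
```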
Self-Clocked Fair Queueing (SCFQ)[Golestani] • A centralized fair scheduling protocol • But more amenable to distributed implementation than many others
Self-Clocked Fair Queueing (SCFQ)[Golestani] • Maintains a virtual clock • Each packet is assigned start tag and finish tag • Start tag = max (current virtual clock, last finish tag for the flow) • Finish tag = start tag + length/weight • Packet with smallest finish tag is transmitted next • Virtual clock is updated to finish tag of packet in service
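The SCFQ bookkeeping on this slide can be written out as a short sketch (Python; class and field names are mine, not Golestani's notation):

```python
import heapq

class SCFQ:
    """Self-clocked fair queueing sketch: tag packets on arrival,
    always serve the packet with the smallest finish tag."""
    def __init__(self):
        self.vclock = 0.0       # virtual clock
        self.last_finish = {}   # flow id -> finish tag of that flow's last packet
        self.heap = []          # min-heap keyed on finish tag

    def enqueue(self, flow, length, weight):
        start = max(self.vclock, self.last_finish.get(flow, 0.0))
        finish = start + length / weight
        self.last_finish[flow] = finish
        heapq.heappush(self.heap, (finish, flow, length))

    def dequeue(self):
        finish, flow, length = heapq.heappop(self.heap)
        self.vclock = finish    # clock jumps to the tag of the packet in service
        return flow, length
```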
Distributed Fair Scheduling • Distributed implementation of SCFQ • Backoff interval ∝ (finish tag - virtual clock) ∝ length / weight • Smallest finish tag determined using backoff intervals • No need to maintain an explicit virtual clock
Distributed Fair Scheduling (DFS) • Choose backoff interval = length / weight • Example: packet length = 5, weight = 1/3 ====> backoff interval = 5 / (1/3) = 15 slots
Distributed Fair Scheduling (DFS) • Packet length = 15 • Weight of node 1 = 1 ====> B1 = 15 / 1 = 15 • Weight of node 2 = 3 ====> B2 = 15 / 3 = 5 [Figure: timeline; node 2 wins repeatedly with B2 = 5 while B1 counts 15, 10, 5; eventually both counters reach 0 together and collide]
Collisions • Collisions occur when two nodes count down to 0 simultaneously • In centralized fair queueing, ties can be broken without causing “collisions” • To reduce the possibility of collisions: Backoff interval = Scaling_Factor * length / weight * random number with mean 1
Backoff Interval • Initial formula: length / weight = 15 / 1 = 15 • Scaling_Factor * length / weight * random number = 4 * 15 / 1 * [0.9, 1.1] = [54, 66]
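Putting the formula together as a sketch (Python; the uniform [0.9, 1.1] multiplier matches the example above, though any random factor with mean 1 fits the slide's description):

```python
import random

def dfs_backoff(length, weight, scaling_factor):
    """DFS backoff = Scaling_Factor * length / weight * random factor (mean 1)."""
    rho = random.uniform(0.9, 1.1)
    return round(scaling_factor * length / weight * rho)

# Slide's example: length 15, weight 1, Scaling_Factor 4 -> 54..66 slots
print(dfs_backoff(15, 1, 4))
```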
Backoff Interval [Figure: 802.11 chooses its backoff from [0, cw]; proposed DFS chooses from a much larger range starting at 0]
Collision Resolution • Collision occurs when two nodes count down to 0 simultaneously • Counting to 0 implies that it is a given node’s “turn” to transmit • To reduce “priority” reversals, a small backoff interval is chosen after the first collision • Backoff interval increased exponentially on further collisions
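A sketch of that collision-resolution rule (Python; the window sizes are illustrative placeholders, not values from the protocol):

```python
import random

def collision_backoff(collisions, base_window=4, max_window=1024):
    """Backoff after the k-th collision of the current packet: a small
    window on the first collision (to limit priority reversals), growing
    exponentially on further collisions. Window sizes here are assumed."""
    window = min(base_window * 2 ** (collisions - 1), max_window)
    return random.randint(0, window - 1)

print(collision_backoff(1))  # first collision: small backoff, in [0, 3]
print(collision_backoff(4))  # later collisions: window has grown to 32
```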
Impact of Small Weights • Backoff interval: Scaling_factor * length / weight * random number • Backoff intervals can become large when weights are small • Large backoff intervals may degrade performance (time wasted in counting down)
Impact of Small Weights • Recall: Backoff intervals are being used to compare “length/weight” • Intuition: Any non-decreasing function of length/weight may be used to calculate backoff
Alternative Mappings [Figure: chosen backoff interval vs. Scaling_Factor * length / weight * random number, for linear, SQRT, and EXP mappings]
Alternative Mappings • Advantage • smaller backoff intervals • less time wasted in counting down when weights of all backlogged flows are small • Disadvantage • backoff intervals that are different on a linear scale may become identical on the compressed scale • possibility of a greater number of collisions
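The slides do not define SQRT and EXP precisely, so the forms below are illustrative guesses at monotone, compressing mappings; only the shape matters for the argument:

```python
import math

def linear_map(x):
    # the original backoff formula, unchanged
    return x

def sqrt_map(x):
    # grows sublinearly: large length/weight values yield much smaller backoffs
    return math.sqrt(x)

def exp_map(x, k=80.0):
    # saturating compression (k is an assumed constant): approaches k for
    # large x, so very different inputs can map to nearly identical backoffs
    return k * (1.0 - math.exp(-x / k))

for x in (15, 60, 240):
    print(x, round(sqrt_map(x), 1), round(exp_map(x), 1))
```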
Performance Evaluation • Using modified ns-2 simulator: 2 Mbps channel • Number of nodes = N • Number of flows = N/2 • Odd-numbered nodes are destinations, even-numbered nodes are sources • Unless otherwise specified: • flow weight = 1 / number of flows • backlogged flows with packet size 584 bytes (including UDP/IP headers) • Scaling_Factor = 0.02
Throughput / Weight Variation Across Flows (with 16 Flows) [Figure: throughput/weight vs. flow destination identifier for 802.11 and DFS; a flatter curve is fairer, and DFS is fairer]
Throughput - Fairness Trade-Off [Figure: aggregate throughput (all flows combined) vs. number of flows; the 802.11 curve is marked]
Fairness Index • Fairness index: a function of (throughput T_f / weight_f) for each flow f, over some interval of time • Unless specified, the interval is 6 seconds
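The slides leave the exact function unspecified; a standard choice for such a measure is Jain's fairness index over the per-flow throughput/weight ratios, assumed here as a sketch:

```python
def fairness_index(throughputs, weights):
    """Jain's index over T_f / weight_f (assumed definition): equals 1.0
    when every flow's throughput is exactly proportional to its weight,
    and decreases toward 1/n as the allocation becomes more skewed."""
    ratios = [t / w for t, w in zip(throughputs, weights)]
    n = len(ratios)
    return sum(ratios) ** 2 / (n * sum(r * r for r in ratios))

print(fairness_index([0.5, 0.25, 0.25], [2, 1, 1]))  # perfectly fair -> 1.0
```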
Throughput - Fairness Trade-Off [Figure: fairness index vs. number of flows; the 802.11 curve is marked]
Impact of Scaling Factor (six flows with weights 1/2, 1/4, 1/8, 1/16, 1/32, 1/32) [Figure: fairness index vs. scaling factor for DFS]
Impact of Scaling Factor (six flows with weights 1/2, 1/4, 1/8, 1/16, 1/32, 1/32) [Figure: aggregate throughput vs. scaling factor for DFS]
Scaled 802.11 • Is DFS fairer simply because it uses large backoff intervals? • Fairness of 802.11 can also be improved by using larger backoff intervals • Scaled 802.11 = 802.11 using a backoff interval range comparable to that of DFS
Backoff Interval [Figure: backoff ranges; 802.11 uses [0, cw], while DFS and Scaled 802.11 use comparable, much larger ranges starting at 0]
Fairness versus Sampling Interval Size (24 flows) [Figure: fairness index vs. interval size for DFS, Scaled 802.11, and 802.11]
Short-Term Fairness [Figure: frequency distribution of the number of packets transmitted by a flow over 0.04-second windows, for DFS, Scaled 802.11, and 802.11; a narrower distribution is fairer, and DFS is fairer]