
Wireless scheduling analysis


Presentation Transcript


  1. Wireless scheduling analysis (With ns3) By Pradeep Prathik Saisundatr

  2. EFFORT LIMITED FAIR SCHEDULING • Wireless links exhibit substantial rates of link errors, resulting in significant and unpredictable loss of link capacity. • The scheduler model distinguishes between effort (air time spent on a flow) and outcome (actual throughput achieved by the flow). • Weighted Fair Queuing (WFQ): assumes an error-free link and distributes effort according to weights provided by an admission module. • Effort Limited Fair (ELF) Scheduling: an extension of WFQ that limits how much effort is given to any specific flow, so that one flow experiencing very high error rates cannot degrade the performance of the entire link. • Power Factor: in ELF the scheduler adjusts flow weights in response to errors in order to create a hybrid between effort fairness and outcome fairness; the per-flow parameter governing this adjustment is called the power factor.

  3. Power Factor • In order to characterize the behaviour of an ELF scheduler, we introduce the following notation. • Assume N flows share a link with bandwidth B. Each flow i has a weight Wi, a power factor Pi and an error rate Ei. • The adjusted weight of flow i is defined as: Ai = min( Wi / (1 - Ei), Pi x Wi ). • The throughput Ti for flow i is given by the product of the transmission time it receives and its success rate: Ti = (Ai / ∑j Aj) x B x (1 - Ei). • The highest fraction of the link time that flow i can take is: Pi x Wi / ( Pi x Wi + ∑j≠i Wj ). • Thus, an ELF scheduler strives to achieve the outcome that is envisioned by users (e.g., weighted link sharing or fixed-rate reservations) while limiting the effort spent on a flow using a per-flow parameter called the power factor, which can be used to administratively implement a variety of fairness and efficiency policies.
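
  To make the slide's formulas concrete, here is a minimal sketch (not part of the presentation) that computes the adjusted weights Ai and the resulting throughputs Ti for a set of flows. The Flow struct, function names and example numbers are illustrative assumptions.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    // One entry per flow sharing the link (illustrative ELF bookkeeping).
    struct Flow {
        double weight;       // Wi, assigned by the admission module
        double powerFactor;  // Pi, administrative limit on extra effort
        double errorRate;    // Ei, measured link error rate, 0 <= Ei < 1
    };

    // Adjusted weight: Ai = min( Wi / (1 - Ei), Pi x Wi )
    double AdjustedWeight(const Flow& f) {
        return std::min(f.weight / (1.0 - f.errorRate), f.powerFactor * f.weight);
    }

    // Throughput of each flow: Ti = (Ai / sum_j Aj) x B x (1 - Ei)
    std::vector<double> Throughputs(const std::vector<Flow>& flows, double bandwidth) {
        double sumA = 0.0;
        for (const Flow& f : flows) sumA += AdjustedWeight(f);
        std::vector<double> t;
        for (const Flow& f : flows)
            t.push_back(AdjustedWeight(f) / sumA * bandwidth * (1.0 - f.errorRate));
        return t;
    }

    int main() {
        // Two equal-weight flows, one error free and one with 40% errors, Pi = 1.5.
        std::vector<Flow> flows = { {1.0, 1.5, 0.0}, {1.0, 1.5, 0.4} };
        for (double x : Throughputs(flows, 11.0e6))   // e.g. an 11 Mb/s link
            std::printf("throughput = %.2f Mb/s\n", x / 1e6);
        return 0;
    }

  With these made-up numbers the lossy flow's airtime is capped at 1.5 times its nominal share, so the clean flow still obtains 4.4 Mb/s rather than being dragged down by its neighbour's retransmissions.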

  4. DISTRIBUTED FAIR SCHEDULING • Wireless links exhibit substantial rates of link errors, resulting in significant and unpredictable loss of link capacity. • The DFS protocol borrows SCFQ's idea of transmitting the packet whose finish tag is the smallest, as well as SCFQ's mechanism for updating the virtual time. • A distributed approach for determining the smallest finish tag is employed, using the backoff interval mechanism from the IEEE 802.11 MAC. The essential idea is to choose a backoff interval that is proportional to the finish tag of the packet to be transmitted.

  5. IMPLEMENTATION • Each node i maintains a local virtual clock vi(t), initialized to vi(0) = 0. Pik denotes the kth packet arriving at the flow at node i on the LAN. Each transmitted packet is tagged with its finish tag; on hearing a packet with finish tag Z, node i sets its virtual clock to max( vi(t), Z ). • Start and finish tags for a packet are not calculated when the packet arrives. Instead, the tags are calculated when the packet reaches the front of its flow. When packet Pik reaches the front of its flow at node i, the packet is stamped with start tag Sik = vi(t), where t denotes the real time at which the packet reaches the front of the flow. The finish tag fik is calculated as follows: fik = Sik + Scaling_Factor * Lik / φi = vi(t) + Scaling_Factor * Lik / φi, where Lik is the length of packet Pik and φi is the weight of its flow. • The objective of the next step is to choose a backoff interval such that a packet with a smaller finish tag is ideally assigned a smaller backoff interval. This step is performed at time t. Specifically, node i picks a backoff interval Bi for packet Pik as a function of fik and the current virtual time vi(t), as follows: Bi = fik - vi(t). • Observe that, since fik = Sik + Scaling_Factor * Lik / φi and Sik = vi(t), this gives Bi = Scaling_Factor * Lik / φi. • Finally, to reduce the possibility of collisions, the chosen value is randomized as Bi = ρ * Bi, where ρ is a random variable with mean 1; in our simulations, ρ is uniformly distributed in a small interval around 1. When this step is performed, a variable named CollisionCounter is reset to 0.
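
  A minimal sketch of the per-packet tagging and backoff choice described above, kept outside ns-3 for clarity. The DfsState struct and the numeric values of Scaling_Factor and of the randomization interval are illustrative assumptions; only the tag and backoff formulas come from the slide.

    #include <algorithm>
    #include <random>

    // Per-node DFS bookkeeping for a single flow (illustrative).
    struct DfsState {
        double virtualClock  = 0.0;   // vi(t), advanced from overheard finish tags
        double scalingFactor = 0.02;  // Scaling_Factor (tuning constant, value assumed)
        double weight        = 1.0;   // φi, weight of this node's flow
    };

    // On hearing a transmitted packet carrying finish tag Z: vi(t) = max( vi(t), Z ).
    void OnOverheardFinishTag(DfsState& s, double z) {
        s.virtualClock = std::max(s.virtualClock, z);
    }

    // Called when a packet of length L reaches the front of the flow;
    // returns the randomized backoff interval Bi for that packet.
    double PickBackoff(DfsState& s, double packetLength, std::mt19937& rng) {
        double startTag  = s.virtualClock;                                        // Sik = vi(t)
        double finishTag = startTag + s.scalingFactor * packetLength / s.weight;  // fik
        double backoff   = finishTag - s.virtualClock;                            // Bi = fik - vi(t)
        std::uniform_real_distribution<double> rho(0.9, 1.1);  // mean-1 factor; interval assumed
        return rho(rng) * backoff;
    }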

  6. COLLISION • Collision handling: if a collision occurs (because the backoff intervals of two or more nodes count down to 0 simultaneously), the following procedure is used. Let node i be one of the nodes whose transmission has collided with some other node(s). Node i chooses a new backoff interval as follows: • Increment CollisionCounter by 1. • Choose a new Bi, uniformly distributed over a collision window whose size grows with CollisionCounter. • This procedure tends to choose a relatively small Bi (in the range [1, CollisionWindow]) after the first collision for a packet. The motivation for choosing a small Bi after the first collision is as follows: the fact that node i was "a potential winner" of the contention for channel access indicates that it is node i's turn to transmit in the near future. • Therefore, Bi is chosen to be small to increase the probability that node i wins again soon. However, to protect against the situation when too many nodes collide, the range for Bi grows exponentially with the number of consecutive collisions.
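
  One plausible reading of the collision rule above, written out explicitly: the new Bi stays within [1, CollisionWindow] after the first collision and the window doubles with each consecutive collision. The doubling rule and the CollisionWindow value are assumptions consistent with the slide, not taken verbatim from it.

    #include <cmath>
    #include <random>

    // Draw a new backoff after a collision; collisionCounter counts consecutive
    // collisions for the packet at the head of the flow.
    double BackoffAfterCollision(int& collisionCounter, std::mt19937& rng,
                                 double collisionWindow /* e.g. 4 slots, assumed */) {
        ++collisionCounter;
        double upper = collisionWindow * std::pow(2.0, collisionCounter - 1);
        std::uniform_real_distribution<double> pick(1.0, upper);
        return pick(rng);
    }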

  7. Fair Real Time Scheduling over a Wireless LAN: Scheduling parameters • Periodic packets with soft deadlines • Packets have a constant bit rate • Each flow i is represented by a tuple fi consisting of its period v, deadline D and packet loss rate e • A(Pik) = time at which the kth packet of flow fi is ready to be transmitted • Di = deadline of the flow
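
  A concrete (purely illustrative) representation of these per-flow parameters, assuming the flow starts at time 0 so that the kth packet becomes ready at k times the period:

    #include <cstdint>

    // Per-flow descriptor for the real-time scheduler (illustrative).
    struct RealTimeFlow {
        double period;    // v : inter-arrival time of the periodic packets
        double deadline;  // D : relative deadline of each packet
        double lossRate;  // e : tolerated packet loss rate
    };

    // Ready time of the kth packet of flow fi: A(Pik) = k * period (flow assumed to start at t = 0).
    double ArrivalTime(const RealTimeFlow& f, std::uint32_t k) {
        return k * f.period;
    }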

  8. Earliest Deadline First • A packet Pik is scheduled at time t if • A(Pik) ≤ t ≤ d(Pik) • where d(Pik) is the minimum absolute deadline among all packets ready at time t, and • d(Pik) = A(Pik) + Di
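
  A minimal sketch of the EDF pick at time t; the ReadyPacket representation is an illustrative assumption.

    #include <cstddef>
    #include <limits>
    #include <vector>

    // A packet that may be scheduled: arrival A(Pik) and absolute deadline d(Pik) = A(Pik) + Di.
    struct ReadyPacket {
        double arrival;
        double deadline;
        int    flowId;
    };

    // Earliest Deadline First: among packets with A(Pik) <= t <= d(Pik),
    // pick the one with the smallest absolute deadline (returns -1 if none).
    int PickEdf(const std::vector<ReadyPacket>& ready, double t) {
        int best = -1;
        double bestDeadline = std::numeric_limits<double>::infinity();
        for (std::size_t i = 0; i < ready.size(); ++i) {
            if (ready[i].arrival <= t && t <= ready[i].deadline &&
                ready[i].deadline < bestDeadline) {
                bestDeadline = ready[i].deadline;
                best = static_cast<int>(i);
            }
        }
        return best;
    }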

  9. Greatest Degradation First • At a scheduling instant t, the packet with the maximum degradation value is scheduled. • Scheduling performance is measured through • Throughput • System degradation
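
  The GDF pick replaces the deadline comparison with a degradation comparison. How a flow's degradation value is computed (e.g. from the deadlines it has already missed relative to its tolerated loss rate e) is model specific; the sketch below only shows the max-degradation selection and treats the metric as given.

    #include <cstddef>
    #include <vector>

    // Per-flow state for Greatest Degradation First (illustrative).
    struct GdfFlow {
        double degradation;    // current degradation value (model-specific)
        bool   hasReadyPacket; // true if a packet of this flow is ready now
    };

    // Greatest Degradation First: schedule the ready flow with the largest degradation.
    int PickGdf(const std::vector<GdfFlow>& flows) {
        int best = -1;
        for (std::size_t i = 0; i < flows.size(); ++i) {
            if (flows[i].hasReadyPacket &&
                (best < 0 || flows[i].degradation > flows[best].degradation)) {
                best = static_cast<int>(i);
            }
        }
        return best;
    }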

  10. NS3 • open source licensing (GNU GPLv2) and development model • Python scripts or C++ programs • alignment with real systems (sockets, device driver interfaces) • alignment with input/output standards (pcap traces, ns-2 mobility scripts) • testbed integration is a priority • modular, documented core

  11. ns-3 models • Project focus has been on the software core to date

  12. The basic model • [Architecture diagram: two Nodes, each running several Applications over a sockets-like API on top of a Protocol stack; Packet(s) travel between NetDevices, and each NetDevice is attached to a Channel]
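
  The canonical ns-3 point-to-point example mirrors this model directly: empty Nodes are created, NetDevices are installed and bound to a Channel, a protocol stack is added, and applications sit on top. The helpers below are standard ns-3 API; the data rate, delay, addresses and timings are arbitrary example values.

    #include "ns3/core-module.h"
    #include "ns3/network-module.h"
    #include "ns3/internet-module.h"
    #include "ns3/point-to-point-module.h"
    #include "ns3/applications-module.h"

    using namespace ns3;

    int main(int argc, char* argv[]) {
        NodeContainer nodes;
        nodes.Create(2);                                   // two empty "husk" Nodes

        PointToPointHelper p2p;                            // NetDevice + Channel pair
        p2p.SetDeviceAttribute("DataRate", StringValue("5Mbps"));
        p2p.SetChannelAttribute("Delay", StringValue("2ms"));
        NetDeviceContainer devices = p2p.Install(nodes);   // binds the devices to a channel

        InternetStackHelper stack;                         // protocol stack on each Node
        stack.Install(nodes);

        Ipv4AddressHelper address;
        address.SetBase("10.1.1.0", "255.255.255.0");
        Ipv4InterfaceContainer ifaces = address.Assign(devices);

        UdpEchoServerHelper echoServer(9);                 // applications on top
        ApplicationContainer serverApps = echoServer.Install(nodes.Get(1));
        serverApps.Start(Seconds(1.0));
        serverApps.Stop(Seconds(10.0));

        UdpEchoClientHelper echoClient(ifaces.GetAddress(1), 9);
        echoClient.SetAttribute("MaxPackets", UintegerValue(1));
        ApplicationContainer clientApps = echoClient.Install(nodes.Get(0));
        clientApps.Start(Seconds(2.0));
        clientApps.Stop(Seconds(10.0));

        Simulator::Run();
        Simulator::Destroy();
        return 0;
    }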

  13. Node basics • A Node is a husk of a computer to which applications, stacks, and NICs are added • [Diagram: a Node populated with several Applications ("DTN" among the labels)]

  14. NetDevices and Channels (similar to NIC cards) • NetDevices are strongly bound to Channels of a matching type • Nodes are architected for multiple interfaces • [Diagram: a WifiNetDevice on each Node attached to a shared WifiChannel]
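
  For the wireless case named on the slide, the matching device/channel pair is created with the wifi helpers, roughly as below. These helper classes exist in ns-3, but their defaults and exact construction vary between releases, so treat this as a sketch rather than copy-paste code.

    #include "ns3/core-module.h"
    #include "ns3/network-module.h"
    #include "ns3/wifi-module.h"

    using namespace ns3;

    int main() {
        NodeContainer nodes;
        nodes.Create(2);

        YansWifiChannelHelper channel = YansWifiChannelHelper::Default();  // one shared WifiChannel
        YansWifiPhyHelper phy;
        phy.SetChannel(channel.Create());            // every PHY is bound to that channel

        WifiHelper wifi;
        WifiMacHelper mac;
        mac.SetType("ns3::AdhocWifiMac");            // ad hoc MAC for a LAN-style setup

        // Installs one WifiNetDevice per Node, all attached to the same WifiChannel.
        NetDeviceContainer devices = wifi.Install(phy, mac, nodes);

        Simulator::Run();
        Simulator::Destroy();
        return 0;
    }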

  15. Node basics • Two key abstractions are maintained: 1) applications use an (asynchronous, for now) sockets API; 2) the boundary between IP and layer 2 mimics the boundary at the device-independent sublayer in Linux, i.e., Linux packet sockets
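
  From an application's point of view, the first abstraction looks like this: a fragment (not from the presentation) that creates a UDP socket on a node and pushes one packet through it, assuming an Internet stack is already installed on the node; the destination address, port and payload size are placeholders.

    #include "ns3/core-module.h"
    #include "ns3/network-module.h"
    #include "ns3/internet-module.h"

    using namespace ns3;

    // Create a UDP socket on 'node' and send a single dummy packet to dest:port.
    void SendOnePacket(Ptr<Node> node, Ipv4Address dest, uint16_t port) {
        Ptr<Socket> socket = Socket::CreateSocket(node, UdpSocketFactory::GetTypeId());
        socket->Connect(InetSocketAddress(dest, port));
        socket->Send(Create<Packet>(1024));   // 1024-byte dummy payload
        socket->Close();
    }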
