
Doped Fountain Coding for Minimum Delay Data Collection in Circular Networks





Presentation Transcript


  1. Doped Fountain Coding for Minimum Delay Data Collection in Circular Networks — Silvija Kokalj-Filipović, Predrag Spasojević, and Emina Soljanin. IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 27, NO. 5, JUNE 2009

  2. Outline • Network model • Goal • The Proposed Scheme • Data Dissemination • Decentralized Squad-based Storage Encoding • Collection and Decoding • Doped Ripple Evolution • Comparative Cost Analysis

  3. Random Geometric Graph[13],[14] • Randomly place N nodes in a given area. • Three parameters govern the graph construction: • N : the total number of sensor nodes • A : the size of the monitored area • r : the transmission range [13] Z. Kong, S. Aly, and E. Soljanin. Decentralized coding algorithms for distributed storage in wireless sensor networks. Invited for IEEE J. Sel. Areas Commun., Spec. Issue on Data Comm. Techniques for Storage Channels and Networks, Jan. 2009. [14] Y. Lin, B. Liang, and B. Li. Data persistence in large-scale sensor networks with decentralized fountain codes. In IEEE Infocom, May 2007.
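To make the three parameters concrete, the RGG construction can be sketched in a few lines. This is a hypothetical helper for illustration (the function name and seed handling are our own, not from the paper): place N nodes uniformly at random in an A = side × side square and connect every pair of nodes within transmission range r.

```python
import math
import random

def random_geometric_graph(n, area_side, r, seed=0):
    """Random geometric graph: n nodes placed uniformly in an
    area_side x area_side square; an edge connects every pair of
    nodes at Euclidean distance <= r (the transmission range)."""
    rng = random.Random(seed)
    nodes = [(rng.uniform(0, area_side), rng.uniform(0, area_side))
             for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            dx = nodes[i][0] - nodes[j][0]
            dy = nodes[i][1] - nodes[j][1]
            if math.hypot(dx, dy) <= r:
                edges.add((i, j))
    return nodes, edges
```

Increasing r (or n) densifies the graph; connectivity of the RGG is what the decentralized fountain-storage schemes of [13],[14] rely on.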

  4. Circular Networks • A circular network allows for significantly more tractable strategies (analytically and otherwise) than a network modeled as a random geometric graph. • A randomly deployed network can self-organize into concentric donut-shaped networks.[12] [12] S. Kokalj-Filipovic, R. Yates, and P. Spasojevic. Infrastructures for data dissemination and in-network storage in location-unaware wireless sensor networks. Technical report, WINLAB, Rutgers University, 2007.

  5. Circular Networks • Why a circular network? • Efficient data dissemination from multiple sources to network storage nodes is difficult in the fountain-type storage approach employed with RGG models. • A circular network incorporates the wireless multicast advantage[18] into the dissemination/storage model.

  6. The data is collected from storage nodes, most of which reside in a set of s adjacent squads (which together form a supersquad).

  7. Squad nodes can hear transmissions either from relay i only, or from both relay i and relay i + 1. Legend: own set of squad-nodes; shared set of squad-nodes.

  8. Goal • The data packets are efficiently disseminated and stored in a manner which allows for a low collection delay upon the collector's arrival.

  9. The Proposed Scheme

  10. The Proposed Scheme • 1) An IDC collects a minimum set of coded packets from a subset of storage squads in its proximity. (upfront collection) • 2) If the decoder stalls, a source packet is doped (fetched directly) to restart decoding. (doping collection) • Doping increases delay.

  11. The Proposed Scheme • The collection delay is determined by • the number of doped packets (small) • the number of collection hops • The doping collection scheme is employed to reduce the number of collection hops. • The random-walk-based analysis of the decoding/doping process is the key contribution of this paper.

  12. Data Dissemination

  13. Data Dissemination • Two dissemination methods: • 1) no combining • 2) degree-two combining • Each relay performs a fixed total number of first-hop exchanges.[17] • The storage nodes overhear degree-two packet transmissions.[11]

  14. Decentralized Squad-based Storage Encoding • Assume that the storage squad nodes can hear any of the k dissemination transmissions from the neighboring relay nodes. • Two storage methods: • non-combining (coupon collection) • combining • We present an analysis of why the Ideal Soliton (IS) turns out to be better than the Robust Soliton (RS) when BP doping is used.
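The two storage methods can be contrasted with a minimal sketch (hypothetical helpers, assuming packets are equal-length byte strings; the function names are our own): a non-combining node keeps one overheard packet as-is (coupon collection), while a degree-two combining node stores the XOR of two distinct overheard packets.

```python
import random

def store_non_combining(packets, rng):
    """Coupon-collection storage: keep one overheard packet,
    chosen uniformly at random."""
    return rng.choice(packets)

def store_degree_two(packets, rng):
    """Degree-two combining: store the bitwise XOR of two distinct
    overheard packets (a degree-two coded symbol)."""
    a, b = rng.sample(packets, 2)
    return bytes(x ^ y for x, y in zip(a, b))
```

A degree-two symbol is useful to a decoder that already holds one of its two constituents: XOR-ing it with the known packet recovers the other.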

  15. Belief Propagation Decoding [Figure: bipartite decoding graph with output symbols c1–c6 and input symbols a1–a5] STATE: released = {c2, c4, c6}, covered = {a1, a3, a5}, processed = { }, ripple = {a1, a3, a5}. ACTION: Process a1.

  16. Belief Propagation Decoding [Figure: bipartite decoding graph after processing a1] STATE: released = {c2, c4, c6, c1}, covered = {a1, a3, a5}, processed = {a1}, ripple = {a3, a5}. ACTION: Process a3.

  17. Belief Propagation Decoding • Start with a decoding matrix whose rows are the collected code symbols: each code symbol is a linear combination of a subset of the k unique input symbols. • Degree distribution Ω(x) = Σ_{d=1}^{k} Ω_d x^d, where Ω_d is the probability that a code symbol has degree d. • After t input symbols have been processed, the residual degrees satisfy d ∈ {1, ···, k − t}.

  18. Belief Propagation Decoding • The analysis is based on the assumption that the number of decoded symbols increases by one with each processed ripple symbol.[5] • Doping mechanism • To unlock the BP process stalled at iteration t, the degree-two doping strategy selects the doping symbol from the set of input symbols (not from the ripple) connected to the degree-two output symbols in the decoding graph.
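The peeling (BP) decoder with degree-two doping described on these slides can be sketched as follows. This is index bookkeeping only (payload XORs omitted), and the fallback branch for the case with no degree-two symbols is our own simplification, not from the paper:

```python
import random

def bp_decode_with_doping(k, output_symbols, rng):
    """Peeling decoder over output-symbol neighborhoods (sets of input
    indices). When the ripple empties before all k inputs are recovered,
    'dope': fetch directly an input symbol that neighbors a degree-two
    output symbol, then resume. Returns the number of dopings used."""
    neighbors = [set(s) for s in output_symbols]
    recovered = set()
    # Degree-one output symbols release their input symbol into the ripple.
    ripple = {next(iter(s)) for s in neighbors if len(s) == 1}
    dopings = 0
    while len(recovered) < k:
        if not ripple:
            # BP stalled: degree-two doping strategy.
            candidates = {v for s in neighbors if len(s) == 2 for v in s}
            candidates -= recovered
            if not candidates:
                candidates = set(range(k)) - recovered  # simplified fallback
            ripple.add(rng.choice(sorted(candidates)))
            dopings += 1
        v = ripple.pop()
        if v in recovered:
            continue
        recovered.add(v)
        # Peel v out of every output symbol; new degree-one symbols release.
        for s in neighbors:
            if v in s:
                s.discard(v)
                if len(s) == 1:
                    u = next(iter(s))
                    if u not in recovered:
                        ripple.add(u)
    return dopings
```

With output symbols {c1 = a1, c2 = a1⊕a2, c3 = a2⊕a3} decoding completes with no doping; drop c1 and the decoder stalls immediately and needs exactly one doped symbol.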

  19. Belief Propagation Decoding [Figure: the doping mechanism and the degree-two doping strategy]

  20. Symbol Degree Evolution • Assuming the output symbols' degrees are initially drawn from the Ideal Soliton, the probability distribution of the unreleased output node degrees at any time t remains the Ideal Soliton: • ρ(1) = 1/k (1) • ρ(d) = 1/(d(d − 1)), d = 2, ···, k (2)
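The Ideal Soliton distribution in (1)–(2) is easy to tabulate exactly; a minimal sketch using exact rationals (the helper name is ours):

```python
from fractions import Fraction

def ideal_soliton(k):
    """Ideal Soliton distribution over degrees 1..k:
    rho(1) = 1/k, rho(d) = 1/(d*(d-1)) for d = 2..k."""
    rho = {1: Fraction(1, k)}
    for d in range(2, k + 1):
        rho[d] = Fraction(1, d * (d - 1))
    return rho
```

The probabilities sum to exactly 1 because 1/(d(d−1)) = 1/(d−1) − 1/d telescopes: 1/k + (1 − 1/k) = 1.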

  21. Doped Ripple Evolution • Our goal is to obtain the expected number of times doping will occur by studying the ripple evolution. • The interdoping yield is the number of symbols decoded between two consecutive dopings. • We also track the time at which the i-th doping occurs.

  22. Doped Ripple Evolution • The number of collected coded symbols exceeds k by a small positive fraction. • At time l, the total number of decoded and doped symbols is l. • The number of (unreleased) output symbols and the unreleased output symbol degree distribution polynomial at time l evolve accordingly.

  23. Doped Ripple Evolution • In order to describe the ripple process evolution, we characterize the ripple increment when the current iteration is • a decoding iteration • a doping iteration

  24. Doped Ripple Evolution (decoding iteration) • The number of released symbols at any decoding step is modeled by a discrete random variable whose distribution is approximately Poisson.

  25. Doped Ripple Evolution (decoding iteration) • The ripple increment at a decoding step is a random variable taking the value r with probability η(r + 1), where η is the distribution of the number of released symbols: processing removes one symbol from the ripple while r + 1 new symbols are released.
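The ripple evolution between dopings can be simulated directly. A simplified sketch under stated assumptions (our own, not the paper's exact model): releases per decoding step are Poisson with mean 1, the ripple starts at size 1, and a doping restarts the empty ripple at size 2 (cf. the "+2" offset of the degree-two doping iteration on the next slides).

```python
import math
import random

def poisson1(rng):
    """Sample a Poisson random variable with mean 1 (Knuth's method)."""
    limit, p, n = math.exp(-1.0), 1.0, 0
    while True:
        p *= rng.random()
        if p <= limit:
            return n
        n += 1

def simulate_ripple_walk(k, rng):
    """Simulate the ripple as a random walk over k decoding steps:
    each step removes the processed symbol and adds a Poisson(1)
    number of released symbols. Whenever the ripple empties, a doping
    restarts it at size 2. Returns the number of dopings."""
    ripple, dopings = 1, 0
    for _ in range(k):
        if ripple <= 0:
            ripple = 2          # degree-two doping restarts the ripple
            dopings += 1
        ripple += poisson1(rng) - 1  # one processed, Poisson(1) released
    return dopings
```

Averaging `simulate_ripple_walk` over many seeds gives a Monte Carlo estimate of the expected number of dopings that the random-walk analysis derives in closed form.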

  26. Doped Ripple Evolution (degree-two doping iteration) • Degree-two doping selects uniformly at random a row in the decoding matrix that has one or more non-zero elements in columns of degree two, at the time when the ripple is empty. • This is equivalent to randomly selecting a column of degree two to be released.

  27. Doped Ripple Evolution (degree-two doping iteration) • The doping ripple increment is a random variable with a shifted version of the release distribution. • Between consecutive doping instants, the ripple size equals a random walk offset by 2, where the random walk models the ripple evolution.

  28. Doped Ripple Evolution (degree-two doping iteration) • The expected interdoping yield is the expected time it takes for the ripple random walk (offset by 2) to become zero. • The i-th doping occurs at the i-th stopping time of this walk.

  29. Doped Ripple Evolution: Random Walk Model • The Markov Chain model of the random walk: • each state corresponds to a ripple of the given size.

  30. Doped Ripple Evolution: Random Walk Model • The start of the decoding process is modeled by the MC being in the initial state 3. • The probability of being in the trapping state at time t is given by (11). • Hence, the probability of entering the trapping state at time t follows by differencing consecutive values of (11).
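The trapping-state probability in (11) can be evaluated numerically by propagating the state distribution through the chain's transition matrix. A generic sketch (the transition matrix P itself is problem-specific and not reproduced here; the helper name is ours, and state 0 is assumed to be the absorbing trapping state):

```python
def trapping_probability(P, t, start=3):
    """Probability that the Markov chain with row-stochastic transition
    matrix P (state 0 absorbing), started in state `start`, is in the
    trapping state 0 after t steps."""
    n = len(P)
    dist = [0.0] * n
    dist[start] = 1.0
    for _ in range(t):
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
    return dist[0]
```

The probability of entering the trap exactly at time t is then `trapping_probability(P, t) - trapping_probability(P, t - 1)`.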

  31. Doped Ripple Evolution: Random Walk Model • The stopping time interval is evaluated using a recursive probability expression.

  32. Doped Ripple Evolution: Random Walk Model • The number of decoded symbols after the h-th doping follows from the interdoping yields. • The expected number of dopings sufficient for complete decoding is the stopping time of the random walk, where the stopping threshold is the expected number of unrecovered symbols when the coded symbols are collected.

  33. Doped Ripple Evolution: Random Walk Model • With an appropriate substitution, the expression can be closely approximated.

  34. Doped Ripple Evolution: Random Walk Model

  35. Comparative Cost Analysis • We analyze the performance of this approach in terms of data collection cost: • upfront collection cost • polling cost • The only degree of freedom is the coverage redundancy factor (the squad size).
