Sparse Event Detection in Wireless Sensor Networks using Compressive Sensing

Presentation Transcript

  1. Sparse Event Detection in Wireless Sensor Networks using Compressive Sensing Jia Meng, Husheng Li, and Zhu Han, The 43rd Annual Conference on Information Sciences and Systems (CISS), 2009

  2. Outline • Introduction • System Model • Compressive Sensing Algorithm • Simulation Results and Analysis • Conclusions

  3. Introduction • The dogma of signal processing maintains that a signal must be sampled at a Nyquist rate of at least twice its bandwidth in order to be represented without error • In practice, we often compress the data soon after sensing, trading off signal representation complexity (bits) for some error (consider JPEG image compression in digital cameras, for example) • Clearly, this is wasteful of valuable sensing/sampling resources

  4. Introduction • In this paper, we investigate how to employ compressive sensing in wireless sensor networks • Specifically, we target two problems of wireless sensor networks • The number of events is much smaller than the total number of sources • Different events may happen simultaneously and interfere with each other, which makes detecting them individually difficult • To overcome the above two problems, we propose a sparse event detection scheme for wireless sensor networks by employing compressive sensing

  5. System Model • There are a total of N sources randomly located in a field • Those sources randomly generate the events to be measured • We denote K as the number of events that the sources generate • K is a random number, and is much smaller than N • We denote X = [x₁, x₂, …, x_N]^T as the event vector, in which each component has a binary value, i.e., x_n ∈ {0, 1} • Obviously X is a sparse vector since K ≪ N
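As a minimal sketch of the slide above (the sizes and the random seed are illustrative choices, not values fixed by the slide), such a K-sparse binary event vector can be generated as:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 256  # total number of sources
K = 5    # number of simultaneous events, K << N

# event vector X: x_n = 1 iff source n generates an event
X = np.zeros(N)
X[rng.choice(N, size=K, replace=False)] = 1.0
```

Exactly K of the N components are 1 and the rest are 0, so X is K-sparse by construction.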

  6. System Model • In the system, there are M active monitoring sensors trying to capture these events • There are two challenges for those monitoring sensors • All those events happen simultaneously • As a result, the received signals interfere with each other • The received signal is degraded by propagation loss and thermal noise

  7. System Model • The received signal vector can be written as Y = GX + ε • ε is the thermal noise vector whose components are independent, each with zero mean and variance σ² • G is the M × N channel response matrix whose component can be written as g_mn = d_mn^(-γ/2) h_mn • d_mn is the distance from source n to sensor m • γ is the propagation loss factor • h_mn is the Rayleigh fading, modeled as complex Gaussian noise with zero mean and unit variance
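A numpy sketch of this signal model (the sensor count, positions, and seed are illustrative assumptions; the 500 m field, the 5 m minimum distance, and γ = 3 follow the simulation section later in the slides):

```python
import numpy as np

rng = np.random.default_rng(1)

N, M, K = 256, 60, 5   # sources, sensors, events (M is an assumed size)
gamma = 3.0            # propagation loss factor
sigma2 = 1e-12         # thermal noise variance

# random source and sensor positions in a 500 m x 500 m field
sources = rng.uniform(0, 500, size=(N, 2))
sensors = rng.uniform(0, 500, size=(M, 2))

# pairwise distances d_mn, floored at the 5 m minimum separation
d = np.maximum(np.linalg.norm(sensors[:, None, :] - sources[None, :, :], axis=2), 5.0)

# Rayleigh fading h_mn ~ CN(0, 1) and channel response g_mn = d_mn^(-gamma/2) * h_mn
h = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
G = d ** (-gamma / 2) * h

# K-sparse binary event vector and received signal Y = G X + noise
X = np.zeros(N)
X[rng.choice(N, size=K, replace=False)] = 1.0
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
Y = G @ X + noise
```

Each sensor's observation is a noisy superposition of all K active events, which is exactly the interference described on the previous slide.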

  8. System Model • Notice that the number of events, the number of sensors, and the total number of sources have the relation K < M ≪ N • Consequently, the received signal vector Y is a condensed representation of the event vector X • Y contains aliasing of X, due to the low sampling rate M

  9. Compressive Sensing Algorithm • Problem Formulation and Analysis • Bayesian Detection • Model Specification • Marginal Likelihood Maximization • Heuristic using Prior Information

  10. Problem Formulation and Analysis • Definition: Restricted Isometry Property (RIP). For any vector V sharing the same K nonzero entries as X, if (1 − δ) ‖V‖₂² ≤ ‖GV‖₂² ≤ (1 + δ) ‖V‖₂² for some δ > 0, then the matrix G preserves the information of the K-sparse signal • It has been proved that if G is an i.i.d. Gaussian matrix or a random ±1 entry matrix, then the K-sparse signal is compressible with high probability if M ≥ cK log(N/K)
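The sensor-count condition M ≥ cK log(N/K) can be evaluated directly; the constant c = 2 below is an arbitrary illustrative choice, not a value from the slides:

```python
import math

def sensors_needed(N, K, c=2.0):
    """Minimum M from the rule M >= c * K * log(N / K) (c is assumed)."""
    return math.ceil(c * K * math.log(N / K))

# N = 256 sources with K = 5 events needs only a few dozen sensors
M_needed = sensors_needed(256, 5)
```

With these numbers, roughly 40 sensors suffice instead of 256, which is the sampling-rate saving that motivates the scheme.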

  11. Problem Formulation and Analysis • Since M < N, there are an infinite number of vectors X′ satisfying Y = GX′ • The problem is to find the sparse reconstructed signal X̂ = arg min ‖X‖₁ subject to Y = GX • The above optimization is called l1-magic in the literature • Its complexity is O(N³)
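The l1 minimization above can be posed as a linear program by splitting X into positive and negative parts. Below is a small real-valued sketch using scipy.optimize.linprog; the actual l1-magic package and the complex channel matrix from the system model are not used here:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(G, y):
    """min ||x||_1 s.t. G x = y, via the standard LP split x = u - v, u, v >= 0."""
    M, N = G.shape
    c = np.ones(2 * N)          # objective sum(u) + sum(v) equals ||x||_1
    A_eq = np.hstack([G, -G])   # equality constraint G (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
    return res.x[:N] - res.x[N:]

# toy example: 2-sparse signal, 8 random measurements in dimension 20
rng = np.random.default_rng(0)
G = rng.standard_normal((8, 20))
x_true = np.zeros(20)
x_true[[3, 11]] = 1.0
x_hat = basis_pursuit(G, G @ x_true)
```

The LP returns a feasible solution whose l1 norm is no larger than that of the true sparse vector, which is the defining property of the basis-pursuit estimate.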

  12. Bayesian Detection • Considering the fact that the components of X are either 0 or 1 • we adopt Bayesian compressive sensing [12-14], which is fully probabilistic and introduces a set of hyper-parameters [12] M. E. Tipping, "Sparse Bayesian learning and the relevance vector machine", Journal of Machine Learning Research, vol. 1, pp. 211-244, Sept. 2001. [13] M. E. Tipping and A. C. Faul, "Fast marginal likelihood maximisation for sparse Bayesian models", in Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, Key West, FL, Jan. 3-6, 2003. [14] S. Ji, Y. Xue and L. Carin, "Bayesian compressive sensing", IEEE Trans. Signal Processing, vol. 56, no. 6, June 2008.

  13. Maximum Likelihood Estimation (MLE) • Suppose there are five bags, each holding an unlimited supply of cookies (cherry or lemon flavor); the flavor proportions of the five bags are known to be • 100% cherry • 75% cherry + 25% lemon • 50% cherry + 50% lemon • 25% cherry + 75% lemon • 100% lemon • If two lemon cookies in a row are drawn from the same bag, which of the five bags is it most likely to be? • Ans: bag 5, since the likelihoods are 0, 0.25², 0.50², 0.75², 1
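The two-lemon likelihoods can be checked in a couple of lines:

```python
# fraction of lemon cookies in each of the five bags
lemon_frac = [0.0, 0.25, 0.50, 0.75, 1.0]

# likelihood of drawing two lemon cookies in a row from each bag
likelihood = [p ** 2 for p in lemon_frac]

# MLE: pick the bag that maximizes the likelihood alone
mle_bag = max(range(5), key=lambda i: likelihood[i]) + 1  # bag 5
```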

  14. Maximum a posteriori (MAP) • Suppose there are five bags, each holding an unlimited supply of cookies (cherry or lemon flavor); the flavor proportions of the five bags are known to be • 100% cherry (probability of picking this bag: 0.1) • 75% cherry + 25% lemon (probability 0.2) • 50% cherry + 50% lemon (probability 0.4) • 25% cherry + 75% lemon (probability 0.2) • 100% lemon (probability 0.1) • If two lemon cookies in a row are drawn from the same bag, which of the five bags is it most likely to be? • Ans: bag 4, since 0.1 × 0 = 0, 0.2 × 0.25² = 0.0125, 0.4 × 0.50² = 0.1, 0.2 × 0.75² = 0.1125, 0.1 × 1 = 0.1
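The MAP version simply weights each likelihood by the bag's prior probability:

```python
lemon_frac = [0.0, 0.25, 0.50, 0.75, 1.0]
prior = [0.1, 0.2, 0.4, 0.2, 0.1]  # probability of having picked each bag

# unnormalized posterior = prior x likelihood of two lemon draws
posterior = [pr * p ** 2 for pr, p in zip(prior, lemon_frac)]

map_bag = max(range(5), key=lambda i: posterior[i]) + 1  # bag 4
```

The strong prior on the middle bags shifts the answer from bag 5 (MLE) to bag 4, which is the point of the example.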

  15. Model Specification • The noise in the system is modeled as zero mean with variance σ² • The probability density function can be approximated as a Gaussian distribution • Due to the assumed independence of the noise components, the likelihood of the complete data set can be written as p(Y | X, σ²) = (2πσ²)^(-M/2) exp(-‖Y − GX‖² / (2σ²))

  16. Model Specification • The real distribution of X is a Bernoulli distribution • However, a closed-form solution to our problem is hard to obtain in that case • Instead, we assume a zero-mean Gaussian prior distribution over the signal X, p(X | α) = ∏_n N(x_n | 0, α_n^(-1)) • where α is a vector of N independent hyper-parameters

  17. Model Specification • Given α and σ², the posterior distribution over the signal, conditioned on the data, is obtained by combining the likelihood and prior with Bayes' rule • which is a Gaussian distribution with covariance Σ = (σ^(-2) G^T G + A)^(-1), where A = diag(α₁, …, α_N), and mean μ = σ^(-2) Σ G^T Y
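These two posterior statistics, Σ = (σ⁻² GᵀG + A)⁻¹ and μ = σ⁻² Σ Gᵀ Y, are one matrix inverse away. A small real-valued sketch with arbitrary stand-in data (sizes and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 12, 30
G = rng.standard_normal((M, N))   # stand-in for the channel matrix
Y = rng.standard_normal(M)        # stand-in for the received signal
alpha = np.ones(N)                # hyper-parameters alpha_n
sigma2 = 0.01                     # noise variance

# posterior covariance and mean of X given Y, alpha, sigma^2
A = np.diag(alpha)
Sigma = np.linalg.inv(G.T @ G / sigma2 + A)
mu = Sigma @ G.T @ Y / sigma2
```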

  18. Marginal Likelihood Maximization • The sparse Bayesian model is formulated as the local maximization with respect to α of the marginal likelihood, or equivalently its logarithm L(α) = log p(Y | α, σ²) = -(1/2) [M log 2π + log|C| + Y^T C^(-1) Y] • with C = σ² I + G A^(-1) G^T

  19. Marginal Likelihood Maximization • A point estimate for the parameters is then obtained by evaluating (11) with the maximizing α, giving a posterior mean approximator • However, marginal likelihoods are generally difficult to compute, i.e., the values of α and σ² which maximize L(α) cannot be obtained in closed form • For the update of α, differentiate (12), equate it to 0, and rearrange, which gives α_i^new = γ_i / μ_i²

  20. Marginal Likelihood Maximization • where μ_i is the i-th component of the posterior mean signal from (11), and γ_i is defined as γ_i ≜ 1 − α_i Σ_ii, with Σ_ii being the i-th diagonal element of the posterior signal covariance from (10), computed with the current α and σ² values • For the variance σ², differentiation leads to the re-estimate (σ²)^new = ‖Y − Gμ‖² / (M − ∑_i γ_i)

  21. Heuristic using Prior Information • After the reconstruction of X̂, if the algorithm converges to wrong results, there are two possible situations • The components converge to values around 0 and 1, but at the wrong positions for the sparse events • such errors cannot be easily distinguished • The components have values deviating from 0 or 1 • such errors are easy to find using threshold methods
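The second error type can be screened with a simple rounding-plus-threshold pass; the 0.2/0.8 cutoffs below are hypothetical choices for illustration, not values from the slides:

```python
import numpy as np

def threshold_detect(x_hat, lo=0.2, hi=0.8):
    """Round reconstructed components to {0, 1} and flag components
    whose values deviate from both 0 and 1 as likely errors."""
    decisions = (x_hat > 0.5).astype(int)
    suspect = (x_hat > lo) & (x_hat < hi)  # neither clearly 0 nor clearly 1
    return decisions, suspect

x_hat = np.array([0.02, 0.97, 0.45, 1.01, -0.03])
decisions, suspect = threshold_detect(x_hat)
```

Here the third component (0.45) is flagged as suspect, while components near 0 or 1 pass the screen.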

  22. Heuristic using Prior Information

  23. Simulation Results and Analysis • There are a total of N = 256 sources randomly located within a 500m-by-500m area • The M wireless sensors are also randomly located within this area • The minimal distance between a source and a sensor is 5m • The propagation loss factor is γ = 3 • The transmitted power is normalized to 1 and the thermal noise variance is 10^(-12) • The number of random events is K, which is a small number

  24. Simulation Results and Analysis • (Figure: proposed method vs. l1-magic)

  25. Simulation Results and Analysis • (Figures: illustration of correct detection; illustration of incorrect detection)

  26. Simulation Results and Analysis • Heuristic Improvement

  27. Simulation Results and Analysis • Noise Effect

  28. Conclusions • Proposed a compressive sensing method for sparse event detection in wireless sensor networks • Formulated the problem and proposed solutions • Introduced a fully probabilistic Bayesian framework, which helps dramatically reduce the sampling rate