
Presentation Transcript


  1. Self-actuation of Camera Sensors for Redundant Data Elimination in Wireless Multimedia Sensor Networks
  Andrew Newell & Kemal Akkaya
  Department of Computer Science, Southern Illinois University Carbondale

  2. Wireless Multimedia Sensor Networks (WMSNs)
  • Incorporation of low-cost camera sensors into wireless sensor networks
  • Camera sensors:
    • Can capture multimedia (image/video/audio) data
    • Can communicate with nearby sensors and/or with the base station
  • Scalar sensors:
    • Low-cost
    • Detect nearby phenomena (e.g., motion or temperature sensors)
  • WMSNs include a large number of scalar sensors and camera sensors
  • A camera's Field of View (FoV) is defined by its depth of view (s) and view width (w)
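To make the FoV definition concrete, here is a minimal sketch that models a camera's FoV as a circular sector of depth s and angular width w centered on the camera's orientation. This interpretation (sector geometry, w in radians) and all names are assumptions for illustration, not the authors' implementation.

```python
import math

def in_fov(cam_x, cam_y, cam_theta, s, w, px, py):
    """Return True if point (px, py) lies inside the camera's FoV,
    modeled as a circular sector: depth of view s, angular width w
    (radians), centered on the camera's facing direction cam_theta."""
    dx, dy = px - cam_x, py - cam_y
    if math.hypot(dx, dy) > s:        # beyond the depth of view
        return False
    # angular offset between the point and the facing direction
    ang = math.atan2(dy, dx) - cam_theta
    ang = (ang + math.pi) % (2 * math.pi) - math.pi  # normalize to [-pi, pi]
    return abs(ang) <= w / 2
```

A camera at the origin facing along the x-axis with s = 10 and w = π/2 would see (5, 0) but not (0, 5), which lies 90° off-axis.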

  3. WMSN Applications & Challenges
  • Applications:
    • Habitat monitoring: wildlife can be monitored visually within its natural habitat
    • Security surveillance: a WMSN can visually capture intruders in a particular environment, especially outdoors where the frequency of incidents is low
  • Challenges of WMSNs:
    • Handling huge multimedia data
    • Data processing: compression, aggregation, fusion, etc.
    • Saving battery life is crucial
    • Data communication under limited bandwidth
    • Providing quality of service: routing, MAC protocols
  • Coverage:
    • Event coverage with proper camera placement (art-gallery problem)
    • WMSN architectures for different levels of camera resolution
  • Our focus is event coverage: on-demand event coverage to save energy

  4. System Model for Our Research
  • Scalar sensors and camera sensors are placed randomly and uniformly within a region of interest
  • The sensors are aware of their own position (and orientation, in the case of camera sensors)
  • An event area is selected randomly; scalar sensors detect the event if they are within this area

  5. Problem Definition
  • Given:
    • Camera and scalar sensors deployed uniformly at random in an area
    • Scalar sensors that will detect the same event
  • Problem: devise a distributed algorithm to
    • Determine the proper cameras to actuate when a particular event is detected by scalar sensors
    • Use the least number of cameras to cover the event and eliminate coverage redundancies
    • Avoid sending redundant multimedia data
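Selecting the fewest cameras that cover all detecting sensors is essentially a minimum set-cover problem. A centralized greedy pass, sketched below as a baseline for comparison (not the paper's distributed algorithm; names are hypothetical), repeatedly picks the camera that covers the most still-uncovered detecting sensors.

```python
def greedy_cover(fovs, event_sensors):
    """fovs: dict camera_id -> set of scalar-sensor IDs in its FoV.
    event_sensors: scalar IDs that detected the event.
    Greedy set cover: pick the camera covering the most uncovered
    detecting sensors; stop when no camera adds coverage."""
    uncovered = set(event_sensors)
    chosen = []
    while uncovered:
        best = max(fovs, key=lambda c: len(fovs[c] & uncovered))
        gain = fovs[best] & uncovered
        if not gain:          # remaining sensors are outside every FoV
            break
        chosen.append(best)
        uncovered -= gain
    return chosen
```

Greedy set cover is a well-known O(log n)-approximation; the distributed algorithm in the following slides trades this centralized view for 1-hop message exchanges.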

  6. Proposed Distributed Actuation Algorithm
  • The algorithm runs on each camera sensor at the same time
  • Main idea:
    • Scalar sensors inform camera sensors when an event is detected
    • Camera sensors then communicate with each other in rounds to determine actuation based on unique coverage of the event
  • Each camera sensor maintains information about its neighboring camera and scalar sensors
    • This information is gathered initially and when an event is detected
    • Only local message exchanges are used (i.e., 1 hop)
  • After the initial step, camera sensors are turned off
    • They are turned on when they hear from scalar sensors
    • They start collecting multimedia data only when the outcome of the actuation algorithm is positive

  7. Tables Maintained at Each Camera Sensor
  • FOV: IDs of all scalar sensors within the FoV
    • Initialized once and then remains static
  • DES: IDs of scalar sensors within the FoV that detected the event
    • Set once a camera sensor has received detection messages from all neighboring scalar sensors
  • VS: camera sensors with actuation priority
    • Camera sensors that cover a larger portion of an event have priority for actuation
  • CES: IDs of scalar sensors already covered by other camera sensors
    • Updated when neighboring cameras make an actuation decision
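The four tables can be sketched as per-camera state, with handlers for the two events that mutate it (a detection message and a neighbor's UPDATE). The class and method names are hypothetical; only the table names FOV, DES, VS, CES come from the slides.

```python
class CameraState:
    """Per-camera tables from the slides (field semantics as described
    above; the concrete structures are an illustrative assumption)."""

    def __init__(self, fov_ids):
        self.FOV = set(fov_ids)  # scalar IDs inside the FoV (static after init)
        self.DES = set()         # scalar IDs in FOV that detected the event
        self.VS = []             # neighboring cameras, by actuation priority
        self.CES = set()         # scalar IDs already covered by other cameras

    def record_detection(self, scalar_id):
        """A scalar sensor reported the event; note it if it is in our FoV."""
        if scalar_id in self.FOV:
            self.DES.add(scalar_id)

    def on_update(self, neighbor_covered):
        """A neighboring camera announced its decision; mark which of our
        sensors are now covered elsewhere."""
        self.CES |= neighbor_covered & self.FOV
```

Keeping CES restricted to the camera's own FoV keeps the later "uniquely covered" ratio a purely local computation.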

  8. Communication among Camera Sensors
  • Priority is assigned based on the number of detecting scalar sensors that a camera sensor covers
  • If a camera sensor has the highest priority in its neighborhood:
    • It decides whether to actuate based on the ratio of uniquely covered scalar sensors
    • We define an α value which can be tuned to the application; if the uniquely covered ratio is less than α, the camera is not actuated
    • The camera sensor then shares, via an 'UPDATE' message to its neighborhood, that it has made a decision, along with the newly covered scalar sensors
  • If a camera sensor does not have the highest priority in its neighborhood:
    • It waits until it has received 'UPDATE' messages from all higher-priority neighbors
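The priority-ordered decision rule can be sketched by collapsing the distributed rounds into one global loop: cameras decide from highest to lowest priority, and each actuates only if the fraction of its detecting sensors not yet covered reaches α. This is a centralized simulation of the protocol, assuming every camera hears every UPDATE (in the real algorithm, messages are 1-hop and CES is local); names and tie-breaking by ID are assumptions.

```python
def actuation_decisions(des, alpha):
    """des: dict camera_id -> set of detecting scalar IDs in that
    camera's FoV (its DES table).
    Cameras decide in priority order (more detecting sensors = higher
    priority); a camera actuates iff the fraction of its detecting
    sensors not yet covered by earlier deciders is at least alpha.
    Returns the set of actuated camera IDs."""
    covered = set()     # union of the announced 'UPDATE' coverage
    actuated = set()
    for cam in sorted(des, key=lambda c: (-len(des[c]), c)):
        if not des[cam]:
            continue                      # sees no part of the event
        unique = des[cam] - covered
        if len(unique) / len(des[cam]) >= alpha:
            actuated.add(cam)
            covered |= des[cam]           # the camera's 'UPDATE' message
    return actuated
```

With α = 0.5, a camera whose detecting sensors are already fully covered by a higher-priority neighbor stays off, which is exactly the redundancy the algorithm eliminates.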

  9. Simulations
  • Setup: camera and scalar sensors placed uniformly at random in an area of interest; 500 runs for each simulation
  • We tested performance under:
    • Varied α value
    • Varied number of camera sensors
    • Varied transmission range
    • Varied sensing range (depth of view)
  • Performance metrics:
    • Coverage: ratio of the covered area to the total area of the event
    • FoV utilization: ratio of the FoV area covering the event to the total area of the FoVs of all actuated camera sensors
  • Baseline: coverage compared to a theoretical maximum value when all cameras in the area are actuated
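The two metrics can be estimated on a discretized grid, representing each area as a set of grid cells. This is an illustrative approximation, not the paper's measurement method; note that the FoV-utilization numerator here sums per-camera overlap with the event, so mutually overlapping FoVs count more than once.

```python
def metrics(event_cells, fov_cells, actuated):
    """Grid-based estimate of the two simulation metrics.

    event_cells: set of grid cells belonging to the event area
    fov_cells:   dict camera_id -> set of cells inside that camera's FoV
    actuated:    IDs of the cameras the algorithm actuated
    Returns (coverage, fov_utilization)."""
    covered = (set().union(*(fov_cells[c] for c in actuated)) & event_cells
               if actuated else set())
    total_fov = sum(len(fov_cells[c]) for c in actuated)
    on_event = sum(len(fov_cells[c] & event_cells) for c in actuated)
    coverage = len(covered) / len(event_cells)
    fov_util = on_event / total_fov if total_fov else 0.0
    return coverage, fov_util
```

A FoV-utilization of 1.0 means every actuated camera points entirely at the event, i.e., no multimedia bandwidth is wasted on empty ground.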

  10. Theoretical Maximum Event Coverage
  • We prove a theorem giving a theoretical maximum event coverage, where:
    • s – depth of view of a camera sensor's FoV
    • w – angle width of a camera sensor's FoV
    • b – area of the monitored region
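The theorem's formula itself does not survive in this transcript. Purely as a hedged sketch of what such a bound can look like under the symbols above (a standard random-coverage argument, not the authors' stated theorem): each FoV is a sector of area s²w/2, so with n cameras deployed uniformly at random in a region of area b, ignoring boundary effects, the expected fraction of the event covered when all cameras actuate is

```latex
% Standard random-deployment coverage estimate (illustrative only;
% the slide's exact theorem is not reproduced in the transcript):
\mathbb{E}[\text{coverage}] \;=\; 1 - \left(1 - \frac{s^{2} w}{2b}\right)^{\!n}
```

This quantity serves as the baseline against which the simulated coverage is compared in the following slides.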

  11. Varied Alpha Value
  • As explained earlier, FoV utilization increases as α increases, but event coverage suffers as α increases
  • When α is zero, the covered area should be close to the theoretical maximum, which the results confirm

  12. Varied Number of Camera Sensors
  • Total event coverage increases with the number of camera sensors, since there are more chances for camera sensors to cover portions of the event
  • FoV utilization decreases, since it is harder to eliminate FoV overlaps among actuated camera sensors when the density of camera sensors increases

  13. Varied Transmission Range (α = 0.45)
  • FoV utilization increases with transmission range, since more communication is possible to avoid FoV overlaps
  • Total coverage decreases as transmission range increases: the gain in FoV utilization comes at the cost of total coverage

  14. Varied Depth of View
  • Behaves much like the varied-number-of-camera-sensors simulations
  • Increasing the number of camera sensors increases the chances of FoV overlaps; increasing the depth of view has the same effect

  15. Conclusion
  • We designed a distributed algorithm that:
    • Attempts to improve total event coverage while reducing redundant multimedia data
    • Keeps messaging limited
  • Simulations show that the algorithm can:
    • Actuate only the necessary cameras when an event is detected, saving energy
    • Provide significant redundant-data elimination with only a minor sacrifice in coverage
  • Current work:
    • Performance under manual placement strategies
    • Using event boundaries instead of counting scalar sensors
  • Future work:
    • Investigating multi-perspective coverage

  16. THANK YOU. Questions?
