
Database Middleware for Sensor Networks


Presentation Transcript


  1. Database Middleware for Sensor Networks Sam Madden Assistant Professor, MIT madden@csail.mit.edu Slides prepared with Wei Hong

  2. Motivation (image: Berkeley Mote) • Sensor networks (aka sensor webs, emnets) are here • Several widely deployed HW/SW platforms • Low power radio, small processor, RAM/Flash • Variety of (novel) applications: scientific, industrial, commercial • Great platform for mobile + ubicomp experimentation • Real, hard research problems to be solved • Networking, systems, languages, databases • Central problem: ease of access, appropriate programming abstractions • I will summarize: • Low-level sensornet issues • A particular middleware architecture: TinyDB + TASK • Current and future research middleware ideas

  3. Some Sensornet Apps (images: smart cooling in data centers, http://www.hpl.hp.com/research/dca/smart_cooling/; redwood forest microclimate monitoring; condition-based maintenance; structural integrity) And more… • Homeland security • Container monitoring • Mobile environmental apps • Bird tracking • Zebranet • Home automation • Etc!

  4. Architectural Overview (diagram: client tools/GUIs and other external tools connect over the Internet to middleware backed by a stable store (DBMS); local servers and field tools connect to the sensor network, which runs TinyDB, Directed Diffusion, COUGAR, etc.) Middleware issues: APIs for current + historical access? Which data when? How to act on data? Network and node status?

  5. Declarative Queries • Programming apps is hard • Limited power budget • Lossy, low bandwidth communication • Require long-lived, zero-admin deployments • Distributed algorithms • Limited tools, debugging interfaces • Queries abstract away much of the complexity • Burden shifts to the database developers • Users get: • Safe, optimizable programs • Freedom to think about apps instead of details

  6. TinyDB: Declarative Query Interface to Sensornets • Platform: Berkeley Motes + TinyOS • Continuous variant of SQL : TinySQL • Power and data-acquisition based in-network optimization framework • Extensible interface for aggregates, new types of sensors

  7. Agenda • Part 1: Sensor Networks (40 mins) • TinyOS • NesC • Part 2: TinyDB + TASK (50 mins) • Data Model and Query Language • Software Architecture • 30 minute break • Part 3: Alternative Middleware Architectures + Research Directions (1:30) • Finish around 12

  8. Part 1 • Sensornet Background • Motes + Mote Hardware • TinyOS • Programming Model + NesC • TinyOS Architecture • Major Software Subsystems • Networking Services

  9. Sensor Networks: a hot topic • New university courses • New conferences • ACM SenSys, IEEE IPSN, etc. • New industrial research lab projects • Intel, PARC, MSR, HP, Accenture, etc. • Startup companies • Crossbow, Dust, Ember, Sensicast, Moteiv, etc. • Media Buzz • Over 30 news articles since July 2002 covering Intel-Berkeley/UC Berkeley sensor network activities • One of 10 emerging technologies that will change the world – MIT Technology Review

  10. Why Now? • Commoditization of radio hardware • Cellular and cordless phones, wireless communication • Low cost -> many/tiny -> new applications! • Real application for ad-hoc network research from the late 90’s • Coming together of EE + CS communities

  11. Motes • Mica / Mica2Dot: uProc: 4 MHz, 8-bit Atmel RISC; Radio: 40 kbit/s 900/450/300 MHz, or 250 kbit/s 2.4 GHz (MicaZ, 802.15.4); Memory: 4 KB RAM / 128 KB program flash / 512 KB data flash; Power: 2 x AA or coin cell • iMote: uProc: 12 MHz, 16-bit ARM; Radio: Bluetooth; Memory: 64 KB SRAM / 512 KB data flash; Power: 2 x AA • Telos: uProc: 8 MHz, 16-bit TI RISC; Radio: 250 kbit/s 2.4 GHz (802.15.4); Memory: 2 KB RAM / 60 KB program flash / 512 KB data flash; Power: 2 x AA

  12. History of Motes • Initial research goal wasn’t hardware • Has since become more of a priority with emerging hardware needs, e.g.: • Power consumption • (Ultrasonic) ranging + localization • MIT Cricket, NEST Project • Connectivity with diverse sensors • UCLA sensor board • Even so, now on the 5th generation of devices • Costs down to ~$50/node (Moteiv, Dust) • Greatly improved radio quality • Multitude of interfaces: USB, Ethernet, CF, etc. • Variety of form factors, packages

  13. Motes vs. Traditional Computing • Embedded OS • Lossy, Adhoc Radio Communication • Sensing Hardware • Severe Power Constraints

  14. NesC/TinyOS • NesC: a C dialect for embedded programming • Components, “wired together” • Quick commands and asynch events • TinyOS: a set of NesC components • Hardware components • Ad-hoc network formation & maintenance • Time synchronization • Think of the pair as a programming environment

  15. Radio Communication • Low bandwidth shared radio channel • ~40 kbits/s on motes • Much less in practice • Encoding, contention for media access (MAC) • Very lossy: 30% base loss rate • Argues against TCP-like end-to-end retransmission • And for link-layer retries • Generally, not well behaved (From Ganesan, et al. “Complex Behavior at Scale.” UCLA/CSD-TR 02-0013)

  16. Types of Sensors • Sensors attach via daughtercard • Weather • Temperature • Light x 2 (high intensity PAR, low intensity, full spectrum) • Air Pressure • Humidity • Vibration • 2 or 3 axis accelerometers • Tracking • Microphone (for ranging and acoustic signatures) • Magnetometer • GPS • RFID Reader

  17. Non-Volatile Storage • EEPROM • 512K off chip, 32K on chip • Writes at disk speeds, reads at RAM speeds • Interface : random access, read/write 256 byte pages • Maximum throughput ~10Kbytes / second • MatchBox Filing System • Provides a Unix-like file I/O interface • Single, flat directory • Only one file being read/written at a time

  18. Power Consumption and Lifetime • Power typically supplied by a small battery • 1000-2000 mAh • 1 mAh = 1 milliamp of current for 1 hour • Typically at optimum voltage, current drain rates • Power = Watts (W) = Amps (A) * Volts (V) • Energy = Joules (J) = W * time • Lifetime, power consumption varies by application • Processor: 5 mA active, 1 mA idle, 5 uA sleeping • Radio: 5 mA listen, 10 mA xmit/receive, ~20 ms / packet • Sensors: 1 uA -> 100s of mA, 1 us -> 1 s / sample

  19. Energy Usage in A Typical Data Collection Scenario • Each mote collects 1 sample of (light,humidity) data every 10 seconds, forwards it • Each mote can “hear” 10 other motes • Process: • Wake up, collect samples (~ 1 second) • Listen to radio for messages to forward (~1 second) • Forward data
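To make the arithmetic on the two slides above concrete, here is a rough back-of-envelope lifetime estimate in Java. The battery capacity, current draws, and the roughly 2-seconds-awake-per-10-second-period duty cycle are illustrative assumptions drawn from the ranges quoted above, not measured figures.

    // Back-of-envelope mote lifetime estimate (illustrative numbers only).
    public class LifetimeEstimate {
        public static void main(String[] args) {
            double batteryCapacityMah = 2000.0;    // 2 x AA, upper end of 1000-2000 mAh
            double awakeCurrentMa     = 20.0;      // assumed: processor + radio + sensors while awake
            double sleepCurrentMa     = 0.005;     // 5 uA sleeping
            double dutyCycle          = 2.0 / 10.0; // ~2 s awake out of a 10 s sample period

            // Average current is the duty-cycle-weighted mix of awake and sleep draw.
            double avgCurrentMa = dutyCycle * awakeCurrentMa + (1 - dutyCycle) * sleepCurrentMa;

            // Lifetime (hours) = capacity (mAh) / average current (mA).
            double hours = batteryCapacityMah / avgCurrentMa;
            System.out.printf("avg current %.2f mA -> lifetime ~%.0f hours (~%.0f days)%n",
                    avgCurrentMa, hours, hours / 24.0);

            // For comparison: staying awake continuously at 20 mA would drain the same
            // battery in ~100 hours, which is why duty cycling is the critical knob.
        }
    }

With these assumed numbers the average draw is about 4 mA and the node lasts on the order of weeks; pushing the duty cycle down (e.g. one sample every 30 seconds) is what makes the multi-month lifetimes quoted later achievable.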

  20. Sensors: Slow, Power Hungry, Noisy

  21. TinyOS: Getting Started • The TinyOS home page: • http://webs.cs.berkeley.edu/tinyos • Start with the tutorials! • The CVS repository • http://sf.net/projects/tinyos • The NesC Project Page • http://sf.net/projects/nescc • Crossbow motes (hardware): • http://www.xbow.com • Intel Imote • www.intel.com/research/exploratory/motes.htm.

  22. Part 2 The Design and Implementation of TinyDB

  23. Part 2 Outline • TinyDB Overview • Data Model and Query Language • TinyDB Java API and Scripting • Demo with TinyDB GUI • TinyDB Internals • Extending TinyDB • TinyDB Status and Roadmap

  24. TinyDB Revisited • Example query: SELECT MAX(mag) FROM sensors WHERE mag > thresh SAMPLE PERIOD 64ms • High level abstraction: • Data centric programming • Interact with sensor network as a whole • Extensible framework • Under the hood: • Intelligent query processing: query optimization, power efficient execution • Fault mitigation: automatically introduce redundancy, avoid problem areas (diagram: the app sends queries and triggers to TinyDB, which returns data from the sensor network)

  25. Feature Overview • Declarative SQL-like query interface • Metadata catalog management • Multiple concurrent queries • Network monitoring (via queries) • In-network, distributed query processing • Extensible framework for attributes, commands and aggregates • In-network, persistent storage

  26. Architecture (diagram: on the PC side, the TinyDB GUI and JDBC sit on top of the TinyDB Client API, backed by a DBMS; on the mote side, the TinyDB query processor runs on every node of the sensor network)

  27. Data Model • Entire sensor network as one single, infinitely-long logical table: sensors • Columns consist of all the attributes defined in the network • Typical attributes: • Sensor readings • Meta-data: node id, location, etc. • Internal states: routing tree parent, timestamp, queue length, etc. • Nodes return NULL for unknown attributes • On server, all attributes are defined in catalog.xml • Discussion: other alternative data models?

  28. Query Language (TinySQL) SELECT <aggregates>, <attributes> [FROM {sensors | <buffer>}] [WHERE <predicates>] [GROUP BY <exprs>] [SAMPLE PERIOD <const> | ONCE] [INTO <buffer>] [TRIGGER ACTION <command>]

  29. Comparison with SQL • Single table in FROM clause • Only conjunctive comparison predicates in WHERE and HAVING • No subqueries • No column alias in SELECT clause • Arithmetic expressions limited to column op constant • Only fundamental difference: SAMPLE PERIOD clause

  30. TinySQL Examples • Example 1: SELECT nodeid, nestNo, light FROM sensors WHERE light > 400 EPOCH DURATION 1s (“Find the sensors in bright nests.”)

  31. TinySQL Examples (cont.) • Example 2: SELECT AVG(sound) FROM sensors EPOCH DURATION 10s • Example 3: SELECT region, CNT(occupied), AVG(sound) FROM sensors GROUP BY region HAVING AVG(sound) > 200 EPOCH DURATION 10s (“Count the number of occupied nests in each loud region of the island.” Returns only regions with AVG(sound) > 200.)

  32. Event-based Queries • ON event SELECT … • Run query only when interesting events happen • Event examples • Button pushed • Message arrival • Bird enters nest • Analogous to triggers, but events are user-defined

  33. Query over Stored Data • Named buffers in Flash memory • Store query results in buffers • Query over named buffers • Analogous to materialized views • Example: • CREATE BUFFER name SIZE x (field1 type1, field2 type2, …) • SELECT a1, a2 FROM sensors SAMPLE PERIOD d INTO name • SELECT field1, field2, … FROM name SAMPLE PERIOD d

  34. Using the Java API • SensorQueryer • translateQuery() converts TinySQL string into TinyDBQuery object • Static query optimization • TinyDBNetwork • sendQuery() injects query into network • abortQuery() stops a running query • addResultListener() adds a ResultListener that is invoked for every QueryResult received • removeResultListener() • QueryResult • A complete result tuple, or • A partial aggregate result, call mergeQueryResult() to combine partial results • Key difference from JDBC: push vs. pull
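As a rough illustration of the push-style flow this slide describes, here is a minimal Java sketch. The class names (SensorQueryer, TinyDBNetwork, TinyDBQuery, QueryResult, ResultListener) and method names (translateQuery, sendQuery, addResultListener, abortQuery) come from the slide; the constructors, the listener callback name, the package import (taken from the scripting slide that follows), and the query text are assumptions, so treat this as pseudocode against the real API rather than its exact signatures.

    import net.tinyos.tinydb.*;  // package name taken from the scripting slide below

    // Minimal sketch of the push-based TinyDB Java API flow (signatures are assumed).
    public class QuerySketch {
        public static void main(String[] args) throws Exception {
            TinyDBNetwork network = new TinyDBNetwork();         // assumed constructor
            SensorQueryer queryer = new SensorQueryer(network);  // assumed constructor

            // translateQuery() parses the TinySQL string and does static query optimization.
            TinyDBQuery query = queryer.translateQuery(
                    "SELECT nodeid, light FROM sensors SAMPLE PERIOD 1024");

            // Push model (unlike JDBC): the listener is invoked for every QueryResult received.
            network.addResultListener(new ResultListener() {
                public void addResult(QueryResult result) {  // callback name is assumed
                    // result may be a complete tuple or a partial aggregate; partial
                    // aggregates would be combined with mergeQueryResult().
                    System.out.println(result);
                }
            });

            network.sendQuery(query);   // inject the query into the network
            Thread.sleep(30_000);       // let results stream in for a while
            network.abortQuery(query);  // stop the running query
        }
    }

The key contrast with JDBC is visible here: results are pushed to the listener as motes report them, rather than pulled from a cursor.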

  35. Writing Scripts with TinyDB • TinyDB’s text interface • java net.tinyos.tinydb.TinyDBMain -run “select …” • Query results printed out to the console • All motes get reset each time a new query is posed • Handy for writing scripts with shell, perl, etc.

  36. Using the GUI Tools • Demo time

  37. Inside TinyDB (diagram: a query such as SELECT AVG(temp) WHERE light > 400 enters the query processor, which runs a filter (light > 400) and an aggregation operator (avg(temp)) over the samples, consulting the schema/catalog for attribute metadata, e.g. temp: time to sample 50 uS, cost to sample 90 uJ, calibration table, units deg. F, error +/- 5 deg F, get function getTempFunc(); results such as T:1, AVG:225 and T:2, AVG:250 are returned over the multihop network, all on top of TinyOS) • Footprint: ~10,000 lines embedded C code • ~5,000 lines (PC-side) Java • ~3200 bytes RAM (w/ 768 byte heap) • ~58 kB compiled code (3x larger than the 2nd largest TinyOS program)

  38. Tree-based Routing (diagram: a query Q: SELECT … is flooded down a routing tree rooted at node A; results R:{…} flow back up toward the root) • Tree-based routing • Used in: • Query delivery • Data collection • In-network aggregation (see the sketch below) • Relationship to indexing?
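The in-network aggregation bullet above is worth unpacking: rather than shipping every raw reading to the root, each node combines its children's partial aggregate records with its own reading and forwards a single partial record to its parent. The Java sketch below illustrates the idea for AVG using a (sum, count) partial state; it is a conceptual illustration only, not TinyDB's actual NesC code.

    // Conceptual sketch of in-network AVG aggregation over the routing tree.
    // Each node merges its children's partial (sum, count) state with its own
    // reading and forwards one partial record up the tree; only the root divides.
    final class AvgState {
        long sum;
        long count;

        void addReading(int value) { sum += value; count++; }
        void merge(AvgState child) { sum += child.sum; count += child.count; }
        double result() { return count == 0 ? Double.NaN : (double) sum / count; }
    }

    class TreeNode {
        final java.util.List<TreeNode> children = new java.util.ArrayList<>();
        int localReading;

        // Called bottom-up along the routing tree during an epoch.
        AvgState collectPartial() {
            AvgState state = new AvgState();
            state.addReading(localReading);
            for (TreeNode child : children) {
                state.merge(child.collectPartial());
            }
            return state;  // forwarded to the parent as a single partial record
        }
    }

The payoff is that every link carries one small partial record per epoch instead of all the raw readings from the subtree below it, which is what saves radio energy near the root.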

  39. Power Consumption and Lifetime (figure: current draw over time for two sensors, showing sleeping, transmitting, and radio-on/processing phases; their waking periods must line up, so nodes must synchronize!) • Power typically supplied by a small battery • At full power, device will last 2-3 days -> critical constraint • Lifetime, power consumption varies by application • Scales with “duty cycle”: amount of time on • Low data rate (< 1 sample / 30 secs): > 6 months possible from AA batteries • Fundamental challenge: distributed coordination with low power!

  40. Time Synchronization • All messages include a 5 byte time stamp indicating system time in ms • Synchronize (e.g. set system time to timestamp) with • Any message from parent • Any new query message (even if not from parent) • Punt on multiple queries • Timestamps written just after preamble is xmitted • All nodes agree that the waking period begins when (system time % epoch dur = 0) • And lasts for WAKING_PERIOD ms • Adjustment of clock happens by changing duration of sleep cycle, not wake cycle.
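A small sketch of the scheduling rule this slide describes, assuming the local clock is simply overwritten by incoming timestamps: every node wakes when its system time crosses an epoch boundary (system time % epoch duration == 0), stays awake for WAKING_PERIOD ms, and absorbs any clock correction by lengthening or shortening the next sleep, never the waking period. The class, method names, and constants are illustrative, not TinyDB's.

    // Illustrative sketch of TinyDB-style epoch scheduling (names/values are assumed).
    class EpochScheduler {
        static final long WAKING_PERIOD_MS = 2000;  // fixed length of the waking period
        final long epochDurationMs;                 // from the query's SAMPLE PERIOD
        long systemTimeMs;                          // this node's notion of global time

        EpochScheduler(long epochDurationMs) { this.epochDurationMs = epochDurationMs; }

        // Any message from the parent (or any new query message) carries a timestamp;
        // synchronizing just means adopting it as our system time.
        void onTimestampedMessage(long timestampMs) {
            systemTimeMs = timestampMs;
        }

        // Called at the end of a waking period: sleep until the next epoch boundary,
        // i.e. until systemTime % epochDuration == 0 again. Because the sleep length
        // is recomputed from the (possibly corrected) clock, drift is absorbed by the
        // sleep cycle while the waking period stays a constant WAKING_PERIOD_MS.
        long nextSleepDurationMs() {
            long intoEpoch = systemTimeMs % epochDurationMs;
            return epochDurationMs - intoEpoch;
        }
    }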

  41. Extending TinyDB • Why extend TinyDB? • New sensors -> attributes • New control/actuation -> commands • New data processing logic -> aggregates • New events • Analogous to concepts in object-relational databases

  42. Adding Attributes • Types of attributes • Sensor attributes: raw or cooked sensor readings • Introspective attributes: parent, voltage, ram usage, etc. • Constant attributes: constant values that can be statically or dynamically assigned to a mote, e.g., nodeid, location, etc.

  43. Adding Attributes (cont) • Interfaces provided by Attr component • StdControl: init, start, stop • AttrRegister • command registerAttr(name, type, len) • event getAttr(name, resultBuf, errorPtr) • event setAttr(name, val) • command getAttrDone(name, resultBuf, error) • AttrUse • command startAttr(attr) • event startAttrDone(attr) • command getAttrValue(name, resultBuf, errorPtr) • event getAttrDone(name, resultBuf, error) • command setAttrValue(name, val)

  44. Adding Attributes (cont) • Steps to adding attributes to TinyDB • Create attribute nesC components • Wire new attribute components to TinyDBAttr configuration • Reprogram TinyDB motes • Add new attribute entries to catalog.xml • Constant attributes can be added on the fly through TinyDB GUI

  45. Adding Aggregates • Step 1: wire new nesC components

  46. Adding Aggregates (cont) • Step 2: add entry to catalog.xml <aggregate> <name>AVG</name> <id>5</id> <temporal>false</temporal> <readerClass>net.tinyos.tinydb.AverageClass</readerClass> </aggregate> • Step 3 (optional): implement reader class in Java • a reader class interprets and finalizes aggregate state received from the mote network, returns final result as a string for display.
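For concreteness, here is a hypothetical sketch of what such a reader class could look like for AVG, assuming the partial state shipped by the motes reduces to a (sum, count) pair. The real base interface, method names, and byte layout in net.tinyos.tinydb are not shown on the slide, so this only illustrates the reader's job: interpret partial aggregate state from the network and finalize it into a display string.

    // Hypothetical AVG reader class: merges partial aggregate state from the mote
    // network and finalizes it as a string for display. (Interface and field layout
    // are assumptions; the actual net.tinyos.tinydb reader API is not shown above.)
    public class AverageClass {
        private long sum;
        private long count;

        // Merge one partial aggregate record (sum, count) received from the network.
        public void addPartialState(long partialSum, long partialCount) {
            sum += partialSum;
            count += partialCount;
        }

        // Finalize the aggregate and return the value the GUI/console should display.
        public String finalizeAggregate() {
            return count == 0 ? "no readings" : String.valueOf((double) sum / count);
        }
    }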

  47. TinyDB Status • Latest release ships with TinyOS 1.1 (9/03) • Install the task-tinydb package in the TinyOS 1.1 distribution • First release in TinyOS 1.0 (9/02) • Widely used by research groups as well as industry pilot projects • Successful deployments in the Intel Berkeley Lab and in redwood trees at the UC Botanical Garden • Largest deployment: ~80 weather station nodes • Network longevity: 4-5 months

  48. The Redwood Tree Deployment • Redwood Grove in UC Botanical Garden, Berkeley • Collect dense sensor readings to monitor climatic variations across • altitudes, • angles, • time, • forest locations, etc. • Versus sporadic monitoring points with 30lb loggers! • Current focus: study how dense sensor data affect predictions of conventional tree-growth models

  49. Data from Redwoods (figure: readings from sensors mounted at heights up to 36m; node 111 at 33m, node 110 at 32m, nodes 109-107 at 30m, nodes 106-104 at 20m, nodes 103-101 at 10m)

  50. TASK
