
Lecture 2: Software Platforms







  1. Lecture 2: Software Platforms Anish Arora CIS788.11J Introduction to Wireless Sensor Networks Lecture uses slides from tutorials prepared by authors of these platforms

  2. Outline • Discussion includes OS and also prog. methodology • some environments focus more on one than the other • Focus is on node-centric platforms (vs. distributed-system-centric platforms) • composability, energy efficiency, robustness • reconfigurability and pros/cons of interpreted approach • Platforms • TinyOS (applies to XSMs) slides from UCB • EmStar (applies to XSSs) slides from UCLA • SOS slides from UCLA • Contiki slides from Uppsala • Virtual machines (Maté) slides from UCB

  3. References • NesC, Programming Manual, The Emergence of Networking Abstractions and Techniques in TinyOS, TinyOS webpage • EmStar: An Environment for Developing Wireless Embedded Systems Software, Sensys04 Paper, EmStar webpage • SOS Mobisys paper, SOS webpage • Contiki Emnets Paper, Sensys06 Paper, Contiki webpage • Mate ASPLOS Paper, Mate webpage, (SUNSPOT)

  4. Traditional Systems • Well established layers of abstractions • Strict boundaries • Ample resources • Independent applications at endpoints communicate pt-pt through routers • Well attended • [Diagram: user applications over a layered node stack (threads, address space, files, drivers) and a network stack (transport, network, data link, physical layer), interconnected through routers]

  5. Sensor Network Systems • Highly constrained resources • processing, storage, bandwidth, power, limited hardware parallelism, relatively simple interconnect • Applications spread over many small nodes • self-organizing collectives • highly integrated with changing environment and network • diversity in design and usage • Concurrency intensive in bursts • streams of sensor data & network traffic • Robust • inaccessible, critical operation • Unclear where the boundaries belong •  Need a framework for: • Resource-constrained concurrency • Defining boundaries • Appl’n-specific processing • allow abstractions to emerge

  6. Choice of Programming Primitives • Traditional approaches • command processing loop (wait request, act, respond) • monolithic event processing • full thread/socket posix regime • Alternative • provide framework for concurrency and modularity • never poll, never block • interleaving flows, events

  7. TinyOS • Microthreaded OS (lightweight thread support) and efficient network interfaces • Two level scheduling structure • Long running tasks that can be interrupted by hardware events • Small, tightly integrated design that allows crossover of software components into hardware

  8. TinyOS Concepts • Scheduler + Graph of Components • constrained two-level scheduling model: threads + events • Component: • Commands • Event Handlers • Frame (storage) • Tasks (concurrency) • Constrained Storage Model • frame per component, shared stack, no heap • Very lean multithreading • Efficient Layering • [Diagram: a Messaging Component with internal state and an internal thread; commands in: init, power(mode), send_msg(addr, type, data), TX_packet(buf); events out: msg_rec(type, data), msg_send_done, RX_packet_done(buffer), TX_packet_done(success)]

  9. Application = Graph of Components • Example: ad hoc, multi-hop routing of photo sensor readings • Graph of cooperating state machines on a shared stack • [Diagram: application layer (Route map, Router, Sensor Appln) over Active Messages; SW packet layer (Serial Packet, Radio Packet, Temp, Photo); byte layer (UART, Radio byte, ADC, clock); HW bit layer (RFM); 3450 B code, 226 B data]

  10. TOS Execution Model • commands request action • ack/nack at every boundary • call command or post task • events notify occurrence • HW interrupt at lowest level • may signal events • call commands • post tasks • tasks provide logical concurrency • preempted by events • [Diagram: stack from application component (data processing, message-event driven) down through active message (event-driven packet pump, CRC), Radio Packet, event-driven byte pump (encode/decode), Radio byte, event-driven bit pump, RFM bit]

  11. Event-Driven Sensor Access Pattern • clock event handler initiates data collection • sensor signals data ready event • data event handler calls output command • device sleeps or handles other activity while waiting • conservative send/ack at component boundary command result_t StdControl.start() { return call Timer.start(TIMER_REPEAT, 200); } event result_t Timer.fired() { return call sensor.getData(); } event result_t sensor.dataReady(uint16_t data) { display(data); return SUCCESS; } • [Diagram: SENSE application wiring Timer, Photo, and LED components]
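The split-phase flow above — a timer event requesting a sample, and a separate data-ready event delivering it — can be sketched in plain C. This is a hypothetical stand-in for the nesC pattern, not TinyOS code; all names and the sample value are illustrative.

```c
#include <stdint.h>

static uint16_t displayed;              /* stands in for display(data) */
static void (*data_ready)(uint16_t);    /* registered completion event */

static void on_data_ready(uint16_t data) {
    displayed = data;                   /* like sensor.dataReady() */
}

static void sensor_get_data(void) {
    /* Real hardware would start an ADC conversion and return immediately;
       here the "conversion" completes by invoking the callback directly. */
    data_ready(0x0123);                 /* pretend conversion result */
}

static void on_timer_fired(void) {      /* analogous to Timer.fired() */
    sensor_get_data();                  /* split-phase: no blocking wait */
}

static void run_once(void) {
    data_ready = on_data_ready;
    on_timer_fired();
}
```

The key property is that no caller ever blocks waiting for the sample: the request returns at once and completion arrives as a separate event.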

  12. TinyOS Commands and Events { ... status = call CmdName(args) ... } command CmdName(args) { ... return status; } event EvtName(args) { ... return status; } { ... status = signal EvtName(args) ... }

  13. TinyOS Execution Contexts • Events generated by interrupts preempt tasks • Tasks do not preempt tasks • Both essentially process state transitions • [Diagram: Hardware → Interrupts → events → commands, with tasks beneath events]

  14. Handling Concurrency: Async or Sync Code • Async methods call only async methods (interrupts are async) • Sync methods/tasks call only sync methods • Potential race conditions: • any update to shared state from async code • any update to shared state from sync code that is also updated from async code • Compiler rule: if a variable x is accessed by async code, then any access of x outside of an atomic statement is a compile-time error • Race-Free Invariant: any update to shared state is either not a potential race condition (sync code only) or is within an atomic section

  15. Tasks • provide concurrency internal to a component • longer running operations • are preempted by events • not preempted by tasks • able to perform operations beyond event context • may call commands • may signal events { ... post TskName(); ... } task void TskName { ... }

  16. Typical Application Use of Tasks • event driven data acquisition • schedule task to do computational portion event result_t sensor.dataReady(uint16_t data) { putdata(data); post processData(); return SUCCESS; } task void processData() { int16_t i, sum=0; for (i=0; i < maxdata; i++) sum += (rdata[i] >> 7); display(sum >> shiftdata); } • 128 Hz sampling rate • simple FIR filter • dynamic software tuning for centering the magnetometer signal (1208 bytes) • digital control of analog, not DSP • ADC (196 bytes)

  17. Task Scheduling • Typically simple FIFO scheduler • Bound on number of pending tasks • When idle, shuts down node except clock • Uses non-blocking task queue data structure • Simple event-driven structure + control over complete application/system graph • instead of complex task priorities and IPC
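A minimal sketch of such a bounded FIFO task queue, assuming function-pointer tasks in a fixed-size ring buffer — an illustration of the idea, not the actual TinyOS scheduler source:

```c
#include <stdbool.h>

#define MAX_TASKS 8                         /* bound on pending tasks */

typedef void (*task_t)(void);

static task_t queue[MAX_TASKS];
static int head, tail, count;

/* post: enqueue a task; rejected when the queue is full */
static bool post_task(task_t t) {
    if (count == MAX_TASKS) return false;
    queue[tail] = t;
    tail = (tail + 1) % MAX_TASKS;
    count++;
    return true;
}

/* run one task to completion; returns false when idle (node may sleep) */
static bool run_next_task(void) {
    if (count == 0) return false;
    task_t t = queue[head];
    head = (head + 1) % MAX_TASKS;
    count--;
    t();
    return true;
}

static int ran;                             /* demo task for illustration */
static void demo_task(void) { ran++; }
```

When `run_next_task` returns false the scheduler knows the node is idle and can shut everything down except the clock, exactly as the slide describes.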

  18. Maintaining Scheduling Agility • Need logical concurrency at many levels of the graph • While meeting hard timing constraints • sample the radio in every bit window • Retain event-driven structure throughout application • Tasks extend processing outside event window • All operations are non-blocking

  19. The Complete Application • [Diagram: SenseToRfm application over generic comm (AMStandard, RadioCRCPacket, UARTnoCRCPacket), IntToRfm, photo/phototemp, Timer; MicaHighSpeedRadioM with CRCfilter, RadioTiming, ChannelMon, SecDedEncode; SW byte layer (SPIByteFIFO, RandomLFSR, UART, ClockC); HW bit layer (ADC, SlavePin)]

  20. Programming Syntax • TinyOS 2.0 is written in an extension of C, called nesC • Applications are too • just additional components composed with OS components • Provides syntax for TinyOS concurrency and storage model • commands, events, tasks • local frame variable • Compositional support • separation of definition and linkage • robustness through narrow interfaces and reuse • interpositioning • Whole system analysis and optimization

  21. Component Interface • logically related set of commands and events StdControl.nc interface StdControl { command result_t init(); command result_t start(); command result_t stop(); } Clock.nc interface Clock { command result_t setRate(char interval, char scale); event result_t fire(); }

  22. Component Types • Configuration • links together components to compose new component • configurations can be nested • complete “main” application is always a configuration • Module • provides code that implements one or more interfaces and internal behavior

  23. Example of Top Level Configuration configuration SenseToRfm { // this module does not provide any interface } implementation { components Main, SenseToInt, IntToRfm, ClockC, Photo as Sensor; Main.StdControl -> SenseToInt; Main.StdControl -> IntToRfm; SenseToInt.Clock -> ClockC; SenseToInt.ADC -> Sensor; SenseToInt.ADCControl -> Sensor; SenseToInt.IntOutput -> IntToRfm; } • [Diagram: Main wired to SenseToInt (Clock, ADC, ADCControl, IntOutput), which is wired to ClockC, Photo, and IntToRfm]

  24. Nested Configuration includes IntMsg; configuration IntToRfm { provides { interface IntOutput; interface StdControl; } } implementation { components IntToRfmM, GenericComm as Comm; IntOutput = IntToRfmM; StdControl = IntToRfmM; IntToRfmM.Send -> Comm.SendMsg[AM_INTMSG]; IntToRfmM.SubControl -> Comm; } • [Diagram: IntToRfmM providing StdControl and IntOutput, wired to GenericComm via SendMsg[AM_INTMSG] and SubControl]

  25. IntToRfm Module includes IntMsg; module IntToRfmM { uses { interface StdControl as SubControl; interface SendMsg as Send; } provides { interface IntOutput; interface StdControl; } } implementation { bool pending; struct TOS_Msg data; command result_t StdControl.init() { pending = FALSE; return call SubControl.init(); } command result_t StdControl.start() { return call SubControl.start(); } command result_t StdControl.stop() { return call SubControl.stop(); } command result_t IntOutput.output(uint16_t value) { ... if (call Send.send(TOS_BCAST_ADDR, sizeof(IntMsg), &data)) return SUCCESS; ... } event result_t Send.sendDone(TOS_MsgPtr msg, result_t success) { ... } }

  26. Atomicity Support in nesC • Split phase operations require care to deal with pending operations • Race conditions may occur when shared state is accessed by preemptible executions, e.g. when an event accesses shared state, or when a task updates state (preemptible by an event which then uses that state) • nesC supports atomic blocks • implemented by turning off interrupts • for efficiency, no calls are allowed in a block • access to shared variables outside an atomic block is not allowed
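A sketch of how an atomic block is commonly realized on a single-core MCU — save the interrupt-enable state, disable interrupts, restore afterwards. The hardware interrupt flag is simulated by a plain variable here so the idea can run anywhere; real code would touch the status register (e.g. via a compiler intrinsic).

```c
#include <stdint.h>
#include <stdbool.h>

static bool interrupts_enabled = true;      /* stands in for the IE bit */

static bool atomic_begin(void) {
    bool was = interrupts_enabled;
    interrupts_enabled = false;             /* real code: disable IRQs */
    return was;                             /* saved state allows nesting */
}

static void atomic_end(bool was) {
    interrupts_enabled = was;               /* restore prior state */
}

static uint16_t shared;

static void safe_increment(void) {
    bool s = atomic_begin();                /* like: atomic { shared++; } */
    shared++;
    atomic_end(s);
}
```

Because saving and restoring the flag supports nesting, the pattern stays correct even if an atomic section is entered from code that already disabled interrupts — one reason the compiler can forbid calls inside the block and keep sections short.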

  27. Supporting HW Evolution • Distribution broken into • apps: top-level applications • tos: • lib: shared application components • system: hardware independent system components • platform: hardware dependent system components • includes HPLs and hardware.h • interfaces • tools: development support tools • contrib • beta • Component design so HW and SW look the same • example: temp component • may abstract particular channel of ADC on the microcontroller • may be a SW I2C protocol to a sensor board with digital sensor or ADC • HW/SW boundary can move up and down with minimal changes

  28. Example: Radio Byte Operation • Pipelines transmission: transmits byte while encoding next byte • Trades 1 byte of buffering for easy deadline • Encoding task must complete before byte transmission completes • Decode must complete before next byte arrives • Separates high level latencies from low level real-time requirements • [Timeline: encode task for bytes 1–4 overlapping bit transmission of bytes 1–3 onto the RFM]
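The one-byte pipeline can be sketched as follows. The encode step here is a placeholder (a simple bit inversion), not the real SEC/DED encoding, and transmission is modeled by appending to a buffer; the point is only the overlap structure — byte N ships out while byte N+1 is being encoded.

```c
#include <stdint.h>
#include <stddef.h>

static uint8_t encode(uint8_t b) { return (uint8_t)~b; } /* placeholder */

static uint8_t encoded_out[16];     /* stands in for the radio */
static size_t  out_len;

static void transmit(uint8_t enc) { encoded_out[out_len++] = enc; }

static void send_packet(const uint8_t *raw, size_t n) {
    if (n == 0) return;
    uint8_t next = encode(raw[0]);          /* fill the pipeline */
    for (size_t i = 1; i < n; i++) {
        uint8_t cur = next;
        next = encode(raw[i]);              /* "encode task" for byte i,
                                               overlapping transmission */
        transmit(cur);
    }
    transmit(next);                         /* drain the last byte */
}
```

The single `next` variable is the one byte of buffering the slide mentions: the deadline becomes "finish encoding before the previous byte finishes shifting out" rather than a per-bit constraint.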

  29. Dynamics of Events and Threads • bit event => end of byte => end of packet => end of msg send • thread posted to start send of next message • bit event filtered at byte layer • radio takes clock events to detect recv

  30. Sending a Message • Refuses to accept command if buffer is still full or network refuses to accept send command • User component provides structured msg storage • send arguments carry the destination (TOS_BCAST_ADDR) and length (sizeof(IntMsg)) bool pending; struct TOS_Msg data; command result_t IntOutput.output(uint16_t value) { IntMsg *message = (IntMsg *)data.data; if (!pending) { pending = TRUE; message->val = value; message->src = TOS_LOCAL_ADDRESS; if (call Send.send(TOS_BCAST_ADDR, sizeof(IntMsg), &data)) return SUCCESS; pending = FALSE; } return FAIL; }

  31. Send done Event • Send done event fans out to all potential senders • Originator determined by match • free buffer on success, retry or fail on failure • Others use the event to schedule pending communication event result_t IntOutput.sendDone(TOS_MsgPtr msg, result_t success) { if (pending && msg == &data) { pending = FALSE; signal IntOutput.outputComplete(success); } return SUCCESS; }

  32. Receive Event • Active message automatically dispatched to associated handler • knows format, no run-time parsing • performs action on message event • Must return free buffer to the system • typically the incoming buffer if processing complete event TOS_MsgPtr ReceiveIntMsg.receive(TOS_MsgPtr m) { IntMsg *message = (IntMsg *)m->data; call IntOutput.output(message->val); return m; }
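Dispatch-by-type without run-time parsing amounts to indexing a handler table with the active message type field. A hypothetical C sketch, including the buffer-ownership rule that a handler must hand a free buffer back; all names and sizes are illustrative:

```c
#include <stdint.h>
#include <stddef.h>

#define NUM_AM_TYPES 4

typedef struct { uint8_t type; uint8_t data[8]; } tos_msg_t;
typedef tos_msg_t *(*am_handler_t)(tos_msg_t *);

static am_handler_t handlers[NUM_AM_TYPES];

static void am_register(uint8_t type, am_handler_t h) {
    if (type < NUM_AM_TYPES) handlers[type] = h;
}

/* Dispatch returns whatever buffer the handler hands back. */
static tos_msg_t *am_dispatch(tos_msg_t *m) {
    if (m->type < NUM_AM_TYPES && handlers[m->type])
        return handlers[m->type](m);
    return m;                        /* unknown type: buffer returned unused */
}

static uint8_t last_val;
static tos_msg_t *int_msg_handler(tos_msg_t *m) {
    last_val = m->data[0];           /* like IntOutput.output(message->val) */
    return m;                        /* processing done: reuse same buffer */
}
```

Because the handler knows the message format for its type, the payload cast is direct — no run-time parsing, matching the slide's claim.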

  33. Tiny Active Messages • Sending • declare buffer storage in a frame • request transmission • name a handler • handle completion signal • Receiving • declare a handler • firing a handler: automatic • Buffer management • strict ownership exchange • tx: send done event → reuse • rx: must return a buffer

  34. Tasks in Low-level Operation • transmit packet • send command schedules task to calculate CRC • task initiates byte-level data pump • events keep the pump flowing • receive packet • receive event schedules task to check CRC • task signals packet ready if OK • byte-level tx/rx • task scheduled to encode/decode each complete byte • must take less time than the byte data transfer
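The CRC work deferred into a task might look like the following plain C bit-loop over the packet buffer — a standard CRC-16-CCITT computation (polynomial 0x1021, initial value 0xFFFF). The actual TinyOS radio stack differs in detail; this only illustrates the kind of per-packet work kept out of the interrupt path.

```c
#include <stdint.h>
#include <stddef.h>

/* CRC-16-CCITT over a buffer; intended to run inside a posted task,
   not in an interrupt handler. */
static uint16_t crc16(const uint8_t *buf, size_t n) {
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < n; i++) {
        crc ^= (uint16_t)buf[i] << 8;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}
```

On transmit, a task would compute this over the outgoing packet before starting the byte pump; on receive, a task recomputes it and signals packet-ready only when it matches.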

  35. TinyOS tools • TOSSIM: a simulator for TinyOS programs • ListenRaw, SerialForwarder: Java tools to receive raw packets on a PC from a base node • Oscilloscope: Java tool to visualize (sensor) data in real time • Memory usage: breaks down memory usage per component (in contrib) • Peacekeeper: detects RAM corruption due to stack overflows (in lib) • Stopwatch: tool to measure execution time of a code block by timestamping at entry and exit (in osu CVS server) • Makedoc and graphviz: generate and visualize the component hierarchy • Surge, Deluge, SNMS, TinyDB

  36. Scalable Simulation Environment • target platform: TOSSIM • whole application compiled for host native instruction set • event-driven execution mapped into event-driven simulator machinery • storage model mapped to thousands of virtual nodes • radio model and environmental model plugged in • bit-level fidelity • Sockets = basestation • Complete application • including GUI

  37. Simulation Scaling

  38. TinyOS 2.0: basic changes • Scheduler: improve robustness and flexibility • Reserved tasks by default (→ fault tolerance) • Priority tasks • New nesC 1.2 features: • Network types enable link level cross-platform interoperability • Generic (instantiable) components, attributes, etc. • Platform definition: simplify porting • Structure OS to leverage code reuse • Decompose h/w devices into 3 layers: presentation, abstraction, device-independent • Structure common chips for reuse across platforms • so platforms are a collection of chips: msp430 + CC2420 + • Power mgmt architecture for devices controlled by resource reservation • Self-initialisation • App-level notion of instantiable services

  39. TinyOS Limitations • Static allocation allows for compile-time analysis, but can make programming harder • No support for heterogeneity • Support for other platforms (e.g. Stargate) • Support for high data rate apps (e.g. acoustic beamforming) • Interoperability with other software frameworks and languages • Limited visibility • Debugging • Intra-node fault tolerance • Robustness solved in the details of implementation • nesC offers only some types of checking

  40. Em* • Software environment for sensor networks built from Linux-class devices • Claimed features: • Simulation and emulation tools • Modular, but not strictly layered architecture • Robust, autonomous, remote operation • Fault tolerance within node and between nodes • Reactivity to dynamics in environment and task • High visibility into system: interactive access to all services

  41. Contrasting Emstar and TinyOS • Similar design choices • programming framework • Component-based design • “Wiring together” modules into an application • event-driven • reactive to “sudden” sensor events or triggers • robustness • Nodes/system components can fail • Differences • hardware platform-dependent constraints • Emstar: Develop without optimization • TinyOS: Develop under severe resource constraints • operating system and language choices • Emstar: easy to use C language, tightly coupled to Linux (devfs, redhat, …) • TinyOS: an extended C compiler (nesC), not wedded to any OS

  42. Em* Transparently Trades-off Scale vs. Reality Em* code runs transparently at many degrees of “reality”: high visibility debugging before low-visibility deployment

  43. Em* Modularity • Dependency DAG • Each module (service) • Manages a resource & resolves contention • Has a well defined interface • Has a well scoped task • Encapsulates mechanism • Exposes control of policy • Minimizes work done by client library • Application has same structure as “services” • [Diagram: Collaborative Sensor Processing Application atop State Sync, 3d Multi-Lateration, Topology Discovery, Acoustic Ranging, Neighbor Discovery, Leader Election, Reliable Unicast, and Time Sync services, over Radio, Audio, and Sensors hardware]

  44. Em* Robustness • Fault isolation via multiple processes • Active process management (EmRun) • Auto-reconnect built into libraries • “Crashproofing” prevents cascading failure • Soft state design style • Services periodically refresh clients • Avoid “diff protocols” • [Diagram: EmRun supervising scheduling, depth map, path_plan, camera, motor_x, and motor_y processes]
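The soft-state style — validity that decays unless periodically refreshed, so stale entries expire on their own instead of requiring explicit teardown or diff protocols — can be sketched with abstract time ticks. All names and the TTL value here are illustrative, not Em* code:

```c
#include <stdbool.h>
#include <stdint.h>

#define TTL 3   /* entries expire after 3 ticks without a refresh */

typedef struct {
    uint32_t last_refresh;
    bool     known;
} client_state_t;

/* A service periodically refreshes each client's record. */
static void refresh(client_state_t *c, uint32_t now) {
    c->last_refresh = now;
    c->known = true;
}

/* Validity is derived, never explicitly revoked: a record that stops
   being refreshed simply stops being alive. */
static bool alive(const client_state_t *c, uint32_t now) {
    return c->known && (now - c->last_refresh) <= TTL;
}
```

A crashed client needs no cleanup protocol: its record quietly ages out, which is what lets a restarted service rebuild state from the next round of refreshes.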

  45. Em* Reactivity • Event-driven software structure • React to asynchronous notification • e.g. reaction to change in neighbor list • Notification through the layers • Events percolate up • Domain-specific filtering at every level • e.g. • neighbor list membership hysteresis • time synchronization linear fit and outlier rejection • [Diagram: notifications filtered as they percolate from motor_y up through path_plan to scheduling]

  46. EmStar Components • Tools • EmRun • EmProxy/EmView • EmTOS • Standard IPC • FUSD • Device patterns • Common Services • NeighborDiscovery • TimeSync • Routing

  47. EmRun: Manages Services • Designed to start, stop, and monitor services • EmRun config file specifies service dependencies • Starting and stopping the system • Starts up services in correct order • Can detect and restart unresponsive services • Respawns services that die • Notifies services before shutdown, enabling graceful shutdown and persistent state • Error/Debug Logging • Per-process logging to in-memory ring buffers • Configurable log levels, at run time
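Dependency-ordered startup can be sketched as repeated passes that start any service whose prerequisite is already running. The dependency table below is a made-up three-service example (each service names at most one prerequisite), not an EmRun config file:

```c
#include <stdbool.h>

#define NSERV 3

/* dep[s] is the prerequisite of service s, or -1 for none:
   service 1 needs 0, service 2 needs 1. */
static int  dep[NSERV] = { -1, 0, 1 };
static bool running[NSERV];
static int  start_order[NSERV];
static int  started;

static void start(int s) {
    running[s] = true;              /* real code: fork/exec the service */
    start_order[started++] = s;
}

/* Repeatedly start any service whose prerequisite is up.
   Returns false if some service never became startable (a cycle). */
static bool start_all(void) {
    for (int pass = 0; pass < NSERV; pass++)
        for (int s = 0; s < NSERV; s++)
            if (!running[s] && (dep[s] < 0 || running[dep[s]]))
                start(s);
    return started == NSERV;
}
```

The same bookkeeping runs in reverse for graceful shutdown: notify dependents before stopping the service they rely on.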

  48. EmSim/EmCee • Em* supports a variety of types of simulation and emulation, from simulated radio channel and sensors to emulated radio and sensor channels (ceiling array) • In all cases, the code is identical • Multiple emulated nodes run in their own spaces, on the same physical machine

  49. EmView/EmProxy: Visualization • [Diagram: emview visualizer connected via emproxy to each emulator node (nodeN …), each running services such as neighbor, linkstat, and motenic, which talk to the Motes]

  50. Inter-module IPC: FUSD • Creates device file interfaces • Text/Binary on same file • Standard interface • Language independent • No client library required • [Diagram: user-space Client talking to a Server through /dev/servicename, mediated by /dev/fusd and the kfusd.o kernel module]
