
RFID Ecosystem

A presentation by Robert Spies, in collaboration with Magda Balazinska, Gaetano Borriello, Travis Kriplean, Evan Welbourne, Garret Cole, Patricia Lee, Caitlin Lustig, and Jordan Walke. Overview: description of RFID, research project overview, system design, system deployment, conclusion/questions.



Presentation Transcript


  1. RFID Ecosystem Robert Spies In collaboration with Magda Balazinska, Gaetano Borriello, Travis Kriplean, Evan Welbourne, Garret Cole, Patricia Lee, Caitlin Lustig, Jordan Walke

  2. Overview • Description of RFID • Research Project Overview • System Design • System Deployment • Conclusion/Questions

  3. What is RFID? • Radio Frequency Identification • RFID systems comprise tags and readers • Tags are placed on objects • Readers interrogate tags through RF • Does not require line of sight • Tags are small and cheap (~$0.25/tag) • Tag IDs can uniquely identify every object in the world (current tags use 64 to 128 bits) • Can include other information besides the ID • Current state • Location • History

  4. RFID Basics • Tags can be active or passive • Active tags: battery-powered, expanded capability, longer range • Passive tags: receive power from the reader's RF field, limited capability

  5. Why does RFID Matter? "The value of RFID is not within the physics—the real value depends on how you create intelligence from all the data you capture." - Richard Wirt, Intel Senior Fellow

  6. Barcode vs. RFID • RFID tags are expected to replace the UPC barcode • Unlike barcode scanners, RFID readers do not require line of sight • Tags are less susceptible to damage • Many RFID tags can be scanned at once • An RFID tag identifies an individual object, not just a class of objects • RFID readers can both read and write data • Leads to many advantages in supply chain automation • There are privacy concerns… • More on those later

  7. Application Area: the Supply Chain • RFID is expected to replace the UPC barcode in the supply chain at the case level • Focus on distribution channels • Goal is item-level tagging • Ability to track inventory • Automated checkouts • Recalls • Ability to write information directly onto the product • ~100 Wal-Mart suppliers use RFID tags • Best Buy, Target, and the DoD are also issuing RFID-related mandates

  8. Application Area: Passports • US passports are now issued with RFID tags • The chip contains the same information as the printed document (name, photo, etc.) • Goal is to allow easy scanning of the passport and cross-referencing against security databases • Worries from privacy advocates about the amount of information available to identity thieves, terrorists, etc. • Data is encrypted • A security firm was able to crack the encryption on Dutch RFID passports • The passport cover provides an "RF shield" Image source: http://www.msnbc.msn.com/id/11748876/

  9. Areas of Exploration • Question: What are the implications (for technology, business, and society) of having a "number on everything"? • What issues do we have to address to enable RFID-based consumer applications? • Privacy is a major issue • Deployments, utility, ease of use, etc. • Design a system with a centralized database to explore the tradeoffs between user privacy and system utility

  10. What we are Building • We are building a privacy-centric distributed system for RFID-based applications

  11. Populating the Allen Center with Readers: Initial Deployment • 33 readers, 132 antennas • Placed on floors 2-6 • Good start, but still inadequate; key areas are not covered • Floor 1, elevators, etc. • Focused on occupants of the upper floors (slide shows a deployment map; the key marks antenna and reader locations)

  12. System Design Goals • User Privacy Paramount • Ease of Use • API presented to user facilitates application development • Customization • Event updates and definitions • Robustness • Scalability

  13. Privacy Model • Basic working model known as Physical Access Control (PAC) • A user is only allowed access to Tag Read Events (TREs) that they could have observed in person • System determines line-of-sight interactions • A special "person" tag determines the user's location • Restricts users from even seeing TREs for tags they own if they are not near them • Gives the user "perfect memory" • Database contains per-user TRE tables • Each user has their own TRE table • The user can view the TREs from that table and that table alone
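The PAC rule above can be sketched in a few lines. This is a minimal illustration, not the deployed implementation: the tuple fields follow the {tag_id, antenna_id, timestamp} format used later in the deck, while the co-location window and the `visible_tres` helper are assumptions for the example.

```python
from collections import namedtuple

# A Tag Read Event: which tag, at which antenna, at what time (ms).
TRE = namedtuple("TRE", ["tag_id", "ant_id", "timestamp"])

# Hypothetical window (ms) within which two reads at the same antenna
# count as co-located for line-of-sight purposes.
COLOCATION_WINDOW_MS = 2000

def visible_tres(person_tag_id, tres):
    """Return the TREs a user may see under PAC: only reads that occurred
    at an antenna where the user's own person tag was read at roughly the
    same time. Reads of tags the user owns but is not near are excluded."""
    person_reads = [t for t in tres if t.tag_id == person_tag_id]
    out = []
    for tre in tres:
        for p in person_reads:
            if (tre.ant_id == p.ant_id
                    and abs(tre.timestamp - p.timestamp) <= COLOCATION_WINDOW_MS):
                out.append(tre)
                break
    return out

reads = [
    TRE("person_alice", 88, 1000),
    TRE("laptop_42", 88, 1500),   # same antenna, same window: visible
    TRE("laptop_42", 77, 9000),   # alice not nearby: hidden, even if she owns it
]
seen = visible_tres("person_alice", reads)
```

Note how the third read is filtered even though it is the same laptop: under PAC, ownership alone grants nothing; physical presence does.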

  14. Privacy Model Cont. • Tag IDs are hashed before being stored in the system • Makes it more difficult for an adversary to infer meaning from database tables if the database is compromised • Users can label tags as public or private; PAC respects this • Users can purge tag data at any time through a provided API • All TREs from unregistered tags are discarded • CSE Kerberos authentication required to access data • Many small per-user tables instead of one big table: simpler to implement than per-user views over a shared table
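The hashing step might look like the sketch below. The deck does not say which hash function or keying scheme the system uses; salted SHA-256 and the salt value here are assumptions for illustration only.

```python
import hashlib

def hash_tag_id(tag_id, salt="ecosystem-salt"):
    """Hash a raw tag ID before storing it, so that a leaked table does
    not directly reveal which physical tags it describes.
    The salt is a placeholder, not a real deployment secret."""
    return hashlib.sha256((salt + str(tag_id)).encode("utf-8")).hexdigest()

stored = hash_tag_id(5436234543)
```

The same tag always hashes to the same value, so per-user tables still join and query correctly; only the link back to the physical tag ID is obscured.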

  15. PAC example

  16. Ease of Use • Beyond raw tag reads, the RFID Ecosystem will provide higher-level inferences about tag data • For application programmers, we provide an XML-based API accessible over both a socket and a web connection • For users, a tag programming application is provided that allows users to add tags to the system and alter metadata about their tags • Whether a tag is private, its description, etc.

  17. Robustness and Scalability • All servers in the system actively work to re-establish lost connections • If the workload is too large, each process in the system can be replicated on another machine to reduce the work done on a single computer • All input servers are determined at runtime from a database; additional servers can be added on the fly

  18. Interface Servers • Compute higher-level events • Store higher-level event history to a local DB • Support the API • Stream higher-level events to applications • Respond to application queries • Cluster Servers • Implement PAC • Store TREs in the database • Store system metadata in the database • Node Servers • Control reader hardware • Collect TREs and forward them to Cluster Servers • Data cleansing

  19. Reason for 3-tiered Architecture • Node server layer • Used to control reader hardware • Enables low-level data cleansing • Cluster server layer • Needed to combine streams of TREs to determine the collocation essential to PAC • Interface server layer • Isolates computational resources • Isolates API queries and allows resource replication if necessary, for scalability

  20. Node Server • Controls reader hardware • Collects TREs and forwards a tuple of the form {tag_id, antenna_id, timestamp} to the Cluster Servers • Future goals: include low-level stream cleaning • Raw reads are unreliable and voluminous; summarization can help
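The forwarding step is simple enough to sketch. The tuple fields come from the slide; the JSON wire format and the `send` callback are assumptions (the real system may use a different serialization over its socket connections).

```python
import json

def make_tre(tag_id, antenna_id, timestamp_ms):
    """Package one raw read into the tuple a Node Server forwards
    upstream: {tag_id, antenna_id, timestamp}."""
    return {"tag_id": tag_id, "antenna_id": antenna_id, "timestamp": timestamp_ms}

def forward(tre, send):
    """Serialize the TRE and hand it to a transport callback
    (in the real system, a connection to a Cluster Server)."""
    send(json.dumps(tre))

sent = []  # stand-in for the network connection
forward(make_tre(-43254323532, 77, 11745617000), sent.append)
```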

  21. Cluster Server • The database server(s) • Database contains table of TREs for each PAC user • Metadata tables as well • Information about the objects tags are on • Reader and antenna information • etc.

  22. Cluster Server (Cont.) • Receives each TRE from a Node Server and propagates it through the Access Control Switch (ACS) • ACS contains the implementation of PAC • Determines which users can see which TRE, and stores TREs in the appropriate PAC user tables • For each user-TRE pair, the Cluster Server forwards a {user, tre} tuple to the Interface Servers • Example: the incoming TRE {-43254323532, 77, 11745617000} fans out to the Interface Servers as {wilford, {-43254323532, 77, 11745617000}}, {evan, {-43254323532, 77, 11745617000}}, {gbc3, {-43254323532, 77, 11745617000}}, …
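The ACS fan-out described above can be sketched as a one-liner: one incoming TRE becomes one {user, tre} pair per permitted viewer. The `pac_users_for` policy callback is a stand-in for the real PAC implementation.

```python
def fan_out(tre, pac_users_for):
    """Access Control Switch sketch: given one TRE and a policy function
    that names the users allowed to see it, emit one {user, tre} pair per
    user for forwarding to the Interface Servers."""
    return [{"user": u, "tre": tre} for u in pac_users_for(tre)]

tre = (-43254323532, 77, 11745617000)
# Hypothetical policy result: these three users were co-located with the tag.
pairs = fan_out(tre, lambda t: ["wilford", "evan", "gbc3"])
```

This is why downstream event computation is per-user: the same physical read arrives at the Interface Server once per permitted viewer.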

  23. Interface Server • Computes higher-level events based on the raw TRE stream from the Cluster Servers • Maintains connections with applications • Supports a push- and pull-based API • Events are pushed to the applications when computed • Applications can query data from the Ecosystem

  24. Interface Server: Push API • Events computed by the Interface Server are pushed to the user • The lowest-level event computed is TagAtLocation • Per antenna, sends the application an alert when a tag is first seen at an antenna, and then another when it has left the antenna • Essential due to the high volume of TREs generated • Most users do not care about every TRE generated, but do care about entrance and exit events
  Start Event:
  <event_update>
    <tag_at_location>
      <event_type>start</event_type>
      <tag_id>5436234543</tag_id>
      <location>88</location>
      <timestamp>117123879229</timestamp>
    </tag_at_location>
  </event_update>
  End Event:
  <event_update>
    <tag_at_location>
      <event_type>end</event_type>
      <tag_id>5436234543</tag_id>
      <location>88</location>
      <timestamp>1171239700000</timestamp>
    </tag_at_location>
  </event_update>
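One way to derive start/end TagAtLocation events from a raw TRE stream is a timeout: "start" fires on the first read of a tag at an antenna, and "end" fires once no further read arrives within some gap. The deck does not specify the mechanism, so the timeout approach and its 3-second value are assumptions; this offline sketch scans a sorted batch rather than a live stream.

```python
TIMEOUT_MS = 3000  # assumed gap after which a tag counts as departed

def tag_at_location_events(tres, timeout_ms=TIMEOUT_MS):
    """Collapse a stream of (tag_id, ant_id, timestamp) TREs into
    ("start"/"end", tag, ant, timestamp) TagAtLocation events."""
    events = []
    last = {}  # (tag, ant) -> timestamp of most recent read
    for tag, ant, ts in sorted(tres, key=lambda t: t[2]):
        key = (tag, ant)
        if key not in last:
            events.append(("start", tag, ant, ts))
        elif ts - last[key] > timeout_ms:
            # Gap exceeded: the tag left and has now reappeared.
            events.append(("end", tag, ant, last[key]))
            events.append(("start", tag, ant, ts))
        last[key] = ts
    for (tag, ant), ts in last.items():  # close out open intervals
        events.append(("end", tag, ant, ts))
    return events

# Three reads at antenna 88, with a 9-second silence in the middle.
stream = [(5436234543, 88, 0), (5436234543, 88, 1000), (5436234543, 88, 10000)]
evts = tag_at_location_events(stream)
```

The three raw reads collapse to two start/end intervals, which is the compression that makes the push API practical at high read rates.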

  25. Interface Server: Event Hierarchy • From low-level events, we can infer more complex interactions • Provides a hierarchical event structure • Processes (what we anticipate will be) common use cases • Hierarchy: PersonAssociation, PersonContact, PersonAtLocation, TagAtLocation (lowest level)

  26. Interface Server: Borealis • Borealis [MIT, Brandeis, Brown] stream processing engine • For real-time processing of sensor data • Allows users to define their own events over the TREs and then deploy them to Borealis via the Interface Server's API • Allows users to customize events • Borealis "event definitions" are XML-formatted files, specified by Borealis's own public API

  27. Interface Server: Event Streams • Each event is computed per user • The Interface Server receives a {user, tre} tuple for each user that is allowed to see a TRE • Results in a logical event stream for each user • This is because PAC dictates when each user can see a TRE • Duplicate events will be computed for different users

  28. Interface Server: Event Streams • Blue User is holding blue_tag, Yellow User is holding yellow_tag • The users meet at antenna 88 at time t1 • PAC detects this and begins streaming tuples of the form: {blue_user, {blue_tag, 88, t1}}, {blue_user, {yellow_tag, 88, t1}}, {yellow_user, {blue_tag, 88, t1}}, {yellow_user, {yellow_tag, 88, t1}} • This starts TagAtLocation events for each user:
  Blue User's view: blue_tag at 88, yellow_tag at 88
  Yellow User's view: blue_tag at 88, yellow_tag at 88

  29. Interface Server: Event Streams • Some time tΔ later, Yellow User has moved to antenna 87 while Blue User remains at 88 • PAC detects that there is no longer line of sight between Yellow User and Blue User, and stops sending tuples of the form {blue_user, {yellow_tag, 88, tΔ}} and {yellow_user, {blue_tag, 88, tΔ}} • But Blue User's TagAtLocation event for blue_tag still persists, and Yellow User now has a TagAtLocation event for yellow_tag at location 87 • To respect the privacy model, Blue User and Yellow User must not know of each other's TagAtLocation events:
  Blue User's view: blue_tag at 88
  Yellow User's view: yellow_tag at 87

  30. Interface Server API: Pull Based • The API also allows a pull-based model: users can query the Ecosystem for historical data • Access to data such as TREs (per user), antenna and reader metadata, object metadata • Also allows updates on this information: • Delete all tag reads from last Tuesday to today • Change an object's description • Change the object a tag is placed on

  31. Interface Server API: Canned Queries • Predefined queries for ease of use • Specially formatted query string • The Interface Server is responsible for parsing the parameters • The Interface Server converts the query string into SQL and runs the query over the database • Returns an XML-formatted string • Get object metadata (per user) • Ex. query=GET_OBJECT_METADATA • Get raw TREs • Takes parameters that can specify start time, end time, antenna ID, and tag ID • Ex. query=GET_RAW_TAG_DATA&ant_id=88&start=17087676 • Ex. query=GET_RAW_TAG_DATA&distinct&tag_id=11233212332&ant_id&start=17087676 • Get reader and antenna metadata • Ex. query=GET_OBJECT_METADATA
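The query-string-to-SQL step might look like the sketch below. Only GET_RAW_TAG_DATA and the ant_id/start parameters come from the slide; the SQL template, table name, and the `run_canned` helper are illustrative assumptions.

```python
from urllib.parse import parse_qs

# Hypothetical mapping from canned-query names to SQL templates.
CANNED = {
    "GET_RAW_TAG_DATA": "SELECT tag_id, ant_id, timestamp FROM tag_reads",
}

def run_canned(query_string):
    """Parse a string like 'query=GET_RAW_TAG_DATA&ant_id=88&start=...'
    and build SQL with WHERE constraints for the recognized parameters."""
    params = {k: v[0] for k, v in parse_qs(query_string).items()}
    sql = CANNED[params.pop("query")]
    clauses = []
    if "ant_id" in params:
        clauses.append("ant_id = %d" % int(params["ant_id"]))
    if "start" in params:
        clauses.append("timestamp >= %d" % int(params["start"]))
    if clauses:
        sql += " WHERE " + " AND ".join(clauses)
    return sql

sql = run_canned("query=GET_RAW_TAG_DATA&ant_id=88&start=17087676")
```

Parameters are cast to int before interpolation here to keep the sketch safe from injection; a real implementation would use parameterized queries throughout.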

  32. Interface Server: Custom Queries • Canned queries are inadequate to cover every query a user is interested in • The API allows users to write their own SQL queries over the database • The database schema is made public, but names are changed • Allows us to alter the underlying schema without breaking users' queries • Also enables us to protect tables and data that users should not see about the Ecosystem or each other (more on that in a bit)

  33. Interface Server: DB Schema
  Actual Schema:
  object_metadata(object_id int, type_id int, owner varchar(20), personal boolean, description varchar(160))
  pac_wilford(tag_id bigint, ant_id int, timestamp bigint, rssi int)
  API Schema:
  objects(object_id int, type_id int, user varchar(20), personal boolean, description varchar(160))
  tag_reads(tag_id bigint, ant_id int, timestamp bigint, rssi int)

  34. Interface Server: Custom Queries When a custom query is received: • Check whether the query contains any actual names of meaning in the database: if it does, throw out the query • Then map symbolic API names to actual database table names • Can be a simple mapping: objects -> object_metadata • Special cases: tag_reads -> pac_wilford (requires determining the identity of the user) • Parse the query and add the necessary constraints • Ex. a user should only be able to see object metadata about objects they own • object_metadata is a common table • Parse the query and add the necessary constraints wherever object_metadata is accessed • Need to deal with complex cases such as subqueries, inner joins, aliases, etc. • Run the query and return the results as an XML-formatted string
  Query sent to Interface Server:
  select x.description from objects as x where x.id in (select obj.id from objects as obj)
  After transformation:
  select x.description from object_metadata as x where x.id in (select obj.id from object_metadata as obj where obj.owner='wilford') and x.owner='wilford'
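The first two steps (reject internal names, then map API names to real tables) can be sketched with simple string rewriting. This is deliberately much weaker than the real transformation described above: it handles neither constraint injection nor subqueries, joins, and aliases, and the regex-based matching is an assumption for illustration.

```python
import re

def transform(query, user):
    """Simplified sketch of the Interface Server's query rewrite:
    reject queries that name internal tables, then map the public API
    names onto the real schema (tag_reads is a per-user special case)."""
    internal = ("object_metadata", "pac_")
    if any(name in query for name in internal):
        raise ValueError("query references internal table names")
    query = re.sub(r"\btag_reads\b", "pac_%s" % user, query)
    query = re.sub(r"\bobjects\b", "object_metadata", query)
    return query

out = transform("SELECT description FROM objects", "wilford")
```

Rejecting real names before mapping matters: it ensures a user cannot smuggle in a reference to another user's pac_ table that the second step would leave intact.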

  35. Interface Server: Connections • The Interface Server handles both secure socket and HTTP connections • Apache Tomcat is used for the web front end • Host info and port numbers are publicly available • Authenticates users with CSE Kerberos • For event streaming on the web front end, utilizes a relatively new technology to do server push • Client and browser maintain a persistent HTTP connection • Only available on select browsers: • We only support Mozilla 1.5 and greater at this time • Not available for IE yet

  36. Use Case: Visual Object Tracking < LIVE DEMO! >

  37. Use Case: Visual Object Tracking (Introduce the map and what everything means before running the demo)

  38. Use Case: Visual Object Tracking </ Live Demo!> (Sorry, had to do it)

  39. Application: Tag Info Editor

  40. Use Case: Social History • Query personal history • Paths walked, objects seen, people seen with • Where was my bag last seen?

  41. Use Case: Support Application • Support is interested in an application that helps with inventory management • Includes tagging every object of value owned by the CS department • Makes inventory tracking extremely easy • Where is laptop X, and/or where has it been? • Allows a level of security and asset protection • Detect movement events • Is an asset on the move? Ensure that it is with an authorized person • Security alerts: "Why is this computer leaving the building?" Alert the security guard appropriately.

  42. The Future • Explore low-level data aggregation • A tag sitting under a reader will generate ~50 TREs a second, ~3,000 TREs a minute, ~180,000 an hour… • Generates a large amount of relatively uninteresting information • Explore expanding-window solutions: a TRE per second, then per two seconds, etc. • But essential to catch when the tag is no longer under the reader!
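An expanding-window thinning of the kind proposed above can be sketched as follows. The doubling schedule and the offline batch interface are assumptions; the slide only names the general idea.

```python
def expanding_window_sample(timestamps, base_interval_ms=1000, growth=2):
    """Thin a dense stream of reads of one stationary tag: keep the first
    read, then keep the next read only after an interval that doubles
    each time (1 s, 2 s, 4 s, ...). A real implementation would still
    track every raw read's arrival so the tag's departure is detected
    as soon as reads stop."""
    kept = []
    interval = base_interval_ms
    next_keep = None
    for ts in sorted(timestamps):
        if next_keep is None or ts >= next_keep:
            kept.append(ts)
            next_keep = ts + interval
            interval *= growth
    return kept

# ~50 reads/second for 8 seconds -> 400 raw reads, thinned to a handful.
raw = [i * 20 for i in range(400)]
kept = expanding_window_sample(raw)
```

The longer a tag sits still, the less it costs to record, while a fresh arrival is always captured immediately; the open problem the slide flags is the complementary end-detection.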

  43. The Near Future • Continue to populate the Allen Center with readers • Achieve better coverage • Full-scale test of the system over the summer • Include participants not in the research group • Explore the benefits and limitations of PAC • Open the RFID Ecosystem for applications

  44. Questions?

  45. (Speaker's revision notes) • Talk about the number of reads per second • Take out the implementation of PAC • Change the diagram for the PAC demo: animated map demo with people popping up on the map; change the discussion of "why PAC" to cover the extremes of the tradeoff • Shorten the custom queries section • Mention that the Node Server maintains SSL connections and authentication • Bring up the main point of each slide, the outline, and how each relates to the bigger picture • Add a summary slide • Reduce slide crowding; remove a lot of text • Show the base-level architecture first, then how it grows with our implementation
