
Preliminary Detailed Design Review





Presentation Transcript


  1. Preliminary Detailed Design Review Virtual Cane

  2. Agenda • Task List • Completion Plan • Concept Breakdown • Use Cases • Deep Dive • Input • Processing • Output • Potential Concepts

  3. Task List

  4. Completion Plan

  5. Concept Breakdown

  6. Definition Concept Chosen • Set Reference Points: done as part of calibration • Device-required • User-defined • Identify Reference Point: compare to a table of reference points, or compare via machine learning • User Input: buttons and a slider • Slider: volume • Buttons: feedback, identification, calibration • Auditory Feedback: • Chirps: obstacle avoidance, reference-point 3D sound • Speech: verbal reference-point announcements • Haptic Feedback: can be used for obstacle avoidance; most likely out of scope for this iteration

  7. Overview Use Case (normal flow)

  8. Overview Use Case (normal flow)

  9. Use Case 1 of 8 Action 1: Initial Setup (Prerequisite) Assumptions: • Device charged and functional • Device securely placed • Device calibration (360° view of the room and a reference point) • User sets a custom reference point (e.g., a TV)

  10. Use Case 2 of 8 Action 2: User walks to an unknown room • Subsystems: User Input Interface, Obstacle Avoidance, and Housing • Obstacle avoidance is activated

  11. Use Case 3 of 8 Action 3: User presses the Localization button • Subsystems: User Input Interface, Obstacle Avoidance, and Housing • Braille is used to identify each button

  12. Use Case 4 of 8 Action 4: Image comparison • Subsystems: Environmental Input, Data Storage Processing, Data Normalization, and Obstacle Avoidance • The image is compared to the 360° view of the room

  13. Use Case 5 of 8 Action 5: Feedback is given to user • Subsystems: • Data Storage Processing • Kinematic Measures • Feedback • User Triggered Output • Obstacle Avoidance

  14. Use Case 6 of 8 Action 6: User Changes Direction • Subsystems: • Kinematic Measures • User Input Interface • Post Processing • Obstacle avoidance

  15. Use Case 7 of 8 Action 7: User presses the Orientation button • Subsystems: • User Input Interface • Obstacle Avoidance • Housing

  16. Use Case 8 of 8 Action 8: Identify the change of angle • Subsystems: • Data Storage Processing • Kinematic Measures • Feedback • User Triggered Output • Obstacle Avoidance

  17. Component Deep Dive: Input

  18. Definition Input: Kinematic Measures Purpose: • Track user head/body movements • Record the user's direction/facing • Critical, as it defines half of the localization problem. Requirements: • Minimum 3 DOF data acquisition • Must be able to track a user's 360° rotation

  19. Description Input: Kinematic Measures • In order to maintain precise auditory cues even when the reference point is out of view, it will be necessary to acquire: left/right head turn, up/down head turn, and the X, Y, and Z offsets of the user. • With an Inertial Measurement Unit (IMU), a full 9 DOF can be measured. • The gyroscope will handle L/R and U/D head turn. • The accelerometer will track the X, Y, Z offsets made by the user. • The magnetometer can help record the user's facing, i.e., where they are looking.

  20. Selection/Testing Input: Kinematic Measures Adafruit 9-DOF IMU Breakout • Extensively used throughout prototyping • Accelerometer readings in m/s² • Gyroscope readings in rad/s • Magnetometer readings in µT • Orientation readings in degrees • Prebuilt libraries for interfacing and measurement capture
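
As a minimal sketch of how one of these readings becomes a facing estimate (not the project's actual code), a compass heading can be derived from the magnetometer's X/Y values when the sensor is level. The function name and the sample readings are illustrative assumptions:

```python
import math

def heading_degrees(mag_x, mag_y):
    """Estimate a compass heading (0-360 deg) from level magnetometer
    X/Y readings in microtesla; 0 deg corresponds to magnetic north."""
    heading = math.degrees(math.atan2(mag_y, mag_x))
    return heading % 360.0  # wrap negative angles into [0, 360)

# Made-up readings for illustration:
print(heading_degrees(0.0, 25.0))   # 90.0
print(heading_degrees(25.0, 0.0))   # 0.0
```

In practice the sensor must be tilt-compensated with the accelerometer when the head is not level, which is one reason the full 9 DOF matters.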

  21. Risks Input: Kinematic Measures • Interfacing with Raspberry Pi • Both can use SPI, but this is as yet untested • The current IMU has a dead zone • The datasheet assures that the IMU on order has a full 360° reading • The current IMU has been discontinued • The version on order is only a hardware update; all libraries should still work

  22. Definition Input: Environmental Input • Which camera? • How many? • We found two main solutions: • Single-view camera with built-in depth detection (e.g., Kinect depth sensor) • Two-camera setup (triangulation)

  23. Description Input: Environmental Input (Single Camera) • Single-view depth cameras project a pattern of IR dots (top image), where each set of dots has a unique code representing a distance • This generates a depth-map image that is separate from the normal camera image (i.e., one camera creates two unique sets of data)

  24. Description Input: Environmental Input (Triangulation) • A single point in space appears at a different pixel location in each camera • A Camera 1 pixel is correlated to a Camera 2 pixel • Triangulation between the three points, assuming the user is centered between the cameras

  25. Selection Input: Environmental Input Single View: • Locally generates depth data • One camera • Expensive • Two unique sets of data will need to be analyzed • Limited options and modularity. Two Camera (Our Choice): • Depth info must be mathematically generated • Two cameras, using power and I/O • Native communication with the Raspberry Pi and prebuilt libraries • The secondary camera is only needed sparingly

  26. Component Deep Dive: Processing

  27. Background Processing: Data Normalization In general, an image is represented as an M×N matrix. This matrix comes in many different resolutions or sizes: 128×128, 256×256, 512×512, 640×480. An entry in this M×N matrix represents a light intensity: • Binary scale: 0 (dark) and 1 (light) • Gray scale: 0 (black) to 255 (white) • Color: three grayscale-like channels, each from 0 (black) to 255, one each for red, green, and blue
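
The M×N matrix view of an image can be made concrete with a small sketch (illustrative values only, using NumPy as an assumed convenience):

```python
import numpy as np

# A tiny 4x4 gray-scale "image": an M x N matrix of intensities 0-255.
img = np.array([[  0,  64, 128, 255],
                [ 32,  96, 160, 224],
                [  0,   0, 255, 255],
                [128, 128, 128, 128]], dtype=np.uint8)

print(img.shape)              # (4, 4) -> M x N
print(img.min(), img.max())   # 0 255

# A color image adds a third axis: one 0-255 channel each for R, G, B.
color = np.zeros((4, 4, 3), dtype=np.uint8)
print(color.shape)            # (4, 4, 3)
```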

  28. Background Processing: Data Normalization Essentially, our data can be described as a two-dimensional spatial representation of our 3-D world. A camera is used to capture this data. The way the camera forms the picture can be modeled using two approaches: • Perspective projection (pinhole) • Orthographic projection

  29. Background Processing: Data Normalization By modeling image formation with the pinhole approach, relative measurements of distances can be obtained: x = f·X/Z, where f/Z represents a weighted distance (scaling) factor, X represents a coordinate of a point in the real world, and x represents its scaled coordinate on the image plane.
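
The pinhole relation can be sketched directly; the function name and sample values are illustrative, not taken from the design:

```python
def project_pinhole(f, X, Y, Z):
    """Perspective (pinhole) projection: a world point (X, Y, Z) maps to
    image-plane coordinates (x, y) = (f*X/Z, f*Y/Z)."""
    if Z <= 0:
        raise ValueError("point must be in front of the camera")
    return f * X / Z, f * Y / Z

# Doubling the depth Z halves the projected offset x:
print(project_pinhole(1.0, 2.0, 0.0, 4.0))  # (0.5, 0.0)
print(project_pinhole(1.0, 2.0, 0.0, 8.0))  # (0.25, 0.0)
```

This 1/Z scaling is exactly what makes relative distance recoverable from image measurements.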

  30. Background Processing: Data Normalization Many design decisions need to be made regarding the sampling rate of the camera intensity capture and how many bits will be used to represent this intensity.

  31. Definition Processing: Data Normalization Additionally, the camera is going to capture this image with noise.

  32. Background Summary Processing: Data Normalization • In general, an image is represented as an M×N matrix, which comes in many different resolutions: 128×128, 256×256, 512×512, 640×480. • An entry in this M×N matrix represents a light intensity value, gray level 0 (black) to 255 (white). • This information can be encoded in three possible forms: binary, gray scale, and color. • Many formats are used to help transport and compress images: TIFF, PGM, PBM, GIF, and JPEG. • An image has noise, can be formed in two different ways, and can be sampled at infinitely many rates.

  33. Definition Processing: Data Normalization • From the previous slides, it can be seen that our data type needs to be defined. The parameters for image size, scale, formation, sampling, format, noise, etc., all need to be defined to give the algorithm a common basis for comparison. • This is what the data normalization system does: it defines all restrictions and parameters on an obtained image so that images, which in our case are reference points, can be matched.

  34. Description Processing: Data Normalization The following parameters have been defined for this design: • Image noise: image noise is going to be modeled as Gaussian noise, assumed to be superimposed on (added to) the actual value.
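
The additive Gaussian noise model stated above can be sketched as observed = true + N(0, σ²); the image size and σ below are arbitrary illustration values, not the design's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

true_img = np.full((64, 64), 128.0)                    # idealized intensities
noise = rng.normal(loc=0.0, scale=5.0, size=true_img.shape)
observed = np.clip(true_img + noise, 0, 255)           # additive Gaussian model

print(observed.shape)  # (64, 64)
```

Because the noise is zero-mean, the observed image's average intensity stays close to the true value, which is what makes averaging-style filters effective against it.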

  36. Description Processing: Data Normalization The following parameters have been defined for this design: • Method of noise filtering: two types of filters will be applied, the median filter and the Gaussian filter.

  38. Description Processing: Data Normalization Properties of the Gaussian filter: • Most common natural model • Smooth function: it has an infinite number of derivatives • The Fourier transform of a Gaussian is a Gaussian • The convolution of a Gaussian with itself is a Gaussian • There are cells in the eye that perform Gaussian filtering. Properties of the median filter: • Removes salt-and-pepper noise • Preserves edges
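
The median filter's salt-and-pepper claim is easy to demonstrate. Below is a minimal 3×3 median filter written directly in NumPy (the project would more likely use a library routine; this sketch just shows the behavior):

```python
import numpy as np

def median3x3(img):
    """3x3 median filter with edge-replicated borders: each output pixel is
    the median of its 3x3 neighborhood, which discards isolated outliers
    while keeping step edges sharp."""
    padded = np.pad(img, 1, mode='edge')
    # Collect the nine shifted views of the neighborhood and take the median.
    stack = [padded[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

img = np.full((5, 5), 100.0)
img[2, 2] = 255.0                  # a single "salt" pixel
print(median3x3(img)[2, 2])        # 100.0 -- the outlier is removed
```

A mean or Gaussian filter would instead smear that 255 across the neighborhood, which is why the two filters are complementary.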

  39. Description Processing: Data Normalization The next parameter aids the matching algorithm utilized in the data storage processing phase. • Post-processing filter: a high-pass filter is finally applied to the image to help pronounce the edges.
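
A common choice of high-pass filter for pronouncing edges is the Laplacian; the slide does not specify which kernel the design uses, so the following is only an assumed example:

```python
import numpy as np

def laplacian(img):
    """Discrete Laplacian (a high-pass filter): up + down + left + right
    - 4*center. Near zero in flat regions, large in magnitude at edges."""
    p = np.pad(img, 1, mode='edge')
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * p[1:-1, 1:-1])

step = np.zeros((4, 6))
step[:, 3:] = 255.0               # vertical edge between columns 2 and 3
out = laplacian(step)
print(out[1, 2], out[1, 3])       # 255.0 -255.0  (strong response at the edge)
print(out[1, 0])                  # 0.0           (no response in flat region)
```

Because noise is itself high-frequency, this step is applied after the median/Gaussian denoising, not before.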

  40. Description Processing: Data Normalization Other parameters, such as: • Sampling rate - dependent on camera chosen • Size of the image - dependent on camera chosen • Form - dependent on camera chosen • Format - may or may not depend on camera chosen. These will be configured and defined during prototyping.

  41. Testing Processing: Data Normalization

  42. Risks Processing: Data Normalization • Filtering may remove critical features from the image • Filtering may cause loss of information at the boundary • The filter parameters may be good for one environment and not for another • This process may take a significant amount of time and slow down the device; this is an essential function that needs to run in real time.

  43. Definition Processing: Data Storage Processing - Sensor Interface The data storage processing acts as a mini database management system that deals with managing storage space in the device, interfacing with external sources of data, and matching items in the database. The design of this system also includes the schema of the database.

  44. Description Processing: Data Storage Processing - Sensor Interface The data storage processing system focuses on designing a data structure/schema with sufficient attributes to help the post-processing phase make decisions. The current design requires the stored images to represent a 360° view of the location.

  45. Description Processing: Data Storage Processing - Sensor Interface Successful applications have shown that this method works and answers our most fundamental question.

  46. Description Processing: Data Storage Processing - Sensor Interface Another successful application is shown.

  47. Description Processing: Data Storage Processing - Sensor Interface Each image will have other metadata attached to it to help the post-processing stage.
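
As a sketch of what such a "mini database management system" schema might look like, the following uses SQLite; every table and column name here is a hypothetical illustration, not the project's actual schema:

```python
import sqlite3

# Hypothetical reference-point schema: each row is one frame of a room's
# 360-degree sweep plus the metadata the post-processing stage consumes.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE reference_points (
        id          INTEGER PRIMARY KEY,
        room        TEXT NOT NULL,   -- which room the sweep covers
        angle_deg   REAL NOT NULL,   -- heading at which the frame was captured
        image       BLOB NOT NULL,   -- normalized image data
        captured_at TEXT             -- timestamp metadata
    )
""")
conn.execute(
    "INSERT INTO reference_points (room, angle_deg, image, captured_at) "
    "VALUES (?, ?, ?, ?)",
    ("living room", 90.0, b"\x00\x01", "2014-01-01T12:00:00"),
)
row = conn.execute("SELECT room, angle_deg FROM reference_points").fetchone()
print(row)  # ('living room', 90.0)
```

Indexing on (room, angle_deg) would let the matcher pull only the frames near the user's current heading instead of scanning the whole sweep.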
