
Pose Estimation for Non-Cooperative Spacecraft Rendezvous Using CNN

Ryan McKennon-Kelly


Presentation Transcript


  1. Pose Estimation for Non-Cooperative Spacecraft Rendezvous Using CNN. Ryan McKennon-Kelly. Reference: Sharma, Sumant, Connor Beierle, and Simone D'Amico. "Pose Estimation for Non-Cooperative Spacecraft Rendezvous Using Convolutional Neural Networks," September 19, 2018. https://arxiv.org/abs/1809.07238.

  2. Pose Estimation via Monocular Vision
  • Motivation:
    • On-board determination of the pose (i.e., relative position and attitude) of a target spacecraft is highly enabling for future on-orbit servicing missions
    • Monocular systems (i.e., a single camera) are inexpensive, often of low SWaP (size, weight, and power), and are therefore easily integrated into spacecraft
    • Current approaches depend on classical image-processing techniques to identify features, which often must be 'hand engineered'
  • Critical issues:
    • Robustness to adverse illumination conditions
    • Scarcity of the datasets required for training and benchmarking
    • Traditional algorithms may also be infeasible on-board due to the computational complexity of evaluating a large number of pose hypotheses

  3. Methods
  • Problem: determine the attitude and position of the camera reference frame C with respect to the target's body frame B
  • Proposed solution: train a CNN for this problem
  • Challenge: manual collection and labeling of large amounts of space imagery is extremely difficult
  • Approach:
    • Create a pipeline for automated generation and labeling of synthetic space imagery
    • Leverage transfer learning, which pre-trains the CNN on an existing dataset (ImageNet); a minimal transfer-learning sketch follows below
    • Develop and evaluate five separate networks against seven separate datasets
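
Note: as a concrete illustration of the transfer-learning step above, here is a minimal PyTorch sketch that loads an ImageNet-pre-trained backbone, freezes it, and retrains only a new final layer on discretized pose labels. The choice of ResNet-18, the 3000-class head, and the optimizer settings are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative transfer-learning setup; the backbone and head size are
# assumptions, not the paper's exact architecture.
NUM_POSE_CLASSES = 3000  # discretized pose labels (as in the largest dataset)

# Load a CNN pre-trained on ImageNet and freeze its feature extractor.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False

# Replace only the final fully connected layer, so training touches just the
# newly added layer while earlier layers keep their ImageNet features.
model.fc = nn.Linear(model.fc.in_features, NUM_POSE_CLASSES)

# Optimize only the parameters that still require gradients.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()
```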

  4. Synthetic Images Generated w/Known Pose Labels
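
To make "known pose labels" concrete, below is a small sketch (not the paper's actual pipeline) of how a relative pose could be sampled for each rendered image: a translation of the target with respect to the camera frame C and a uniformly random attitude quaternion of the body frame B. The sampling ranges and conventions are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pose():
    """Sample one hypothetical pose label: position (m) and unit quaternion."""
    # Relative position of the target, placed somewhere in front of the camera.
    # The range is an assumed value for illustration only.
    t_cb = rng.uniform(low=[-2.0, -2.0, 5.0], high=[2.0, 2.0, 30.0])
    # Normalizing a 4-D Gaussian sample yields a uniformly random unit
    # quaternion, i.e. a uniformly random attitude of frame B w.r.t. frame C.
    q = rng.normal(size=4)
    q_cb = q / np.linalg.norm(q)
    return t_cb, q_cb

t_cb, q_cb = sample_pose()
print("position label (m):", t_cb)
print("attitude label (quaternion):", q_cb)
```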

  5. Networks Developed and Experimental Results

  6. “Nothing’s Perfect” – Comparison of High vs. Low Confidence Solutions (figure panels: high-confidence solutions; low-confidence solutions)

  7. Conclusions
  • Size of the training set correlates with accuracy
    • Warrants generation of larger synthetic datasets by varying the target location and orientation
  • Classification networks trained with some noise performed better on test images, even under high Gaussian white noise (see the noise-augmentation sketch below)
    • Suggests that as long as sensor noise can be modeled or removed via post-processing, the CNN has good potential for success in real-world applications
  • All networks were trained using transfer learning, which only required training the last few layers
    • Indicates that several low-level features of spaceborne imagery are also present in terrestrial objects
    • Also indicates there is no need to train a network completely from scratch
  • Accuracy of the network trained on the largest dataset (75k images with 3000 pose labels) was higher than that of classical feature-detection algorithms (3x the number of "high confidence" solutions)
    • Shows the promise of the method
  • Several caveats represent potential for future development:
    • Networks need to be tested with on-orbit imagery
    • A larger dataset is required for a comprehensive comparison of CNN-based pose determination vs. conventional techniques
    • Performance with other spacecraft or orbit regimes is untested
    • The tie between pose estimation and navigation performance needs to be explored
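
To illustrate the noise-robustness finding above, here is a minimal sketch of the kind of Gaussian white-noise augmentation one could apply to training images; the noise level (sigma) is an assumed value and not taken from the paper.

```python
import numpy as np

def add_gaussian_white_noise(image, sigma=0.05, rng=None):
    """Add zero-mean Gaussian white noise to a float image scaled to [0, 1].

    sigma is illustrative; the actual noise levels used in the study are not
    reproduced here.
    """
    rng = rng or np.random.default_rng()
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

# Example: augment a synthetic 128x128 grayscale image before training.
clean = np.zeros((128, 128))
noisy = add_gaussian_white_noise(clean, sigma=0.05)
```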
