
Lessons Learned for Science Processing




Presentation Transcript


  1. Lessons Learned for Science Processing
     Phil Callahan, March 13, 2006

  2. Overview
    • Background
    • Measurement System Engineer Role
    • Science Team Participation
    • Processing Testbed
    • Consistency: Terms, Units, Corrections, Data Flagging
    • Figure out what the data mean
    • Have more than adequate computing power
    • Deliver documents early, data soon after sensor turn-on

  3. Altimeter Measurement System: Measurement System Engineer Role
    • Maintain Error Budget
    • Liaison w/ Science Team
    • Product Definition
    • Algorithm Development
    • Calibration / Validation
    • Focal point for questions, complaints from data users
    Photo courtesy of JPL/NASA

  4. Science Team Participation
    • Engaged with Project management, engineering aspects, science team members
    • Participate in:
      • Data product definition
      • Algorithm definition / development
      • Calibration / Validation
    • Continuing interaction with Project throughout the mission
    • Publish
    • Public Outreach

  5. Processing Testbed
    • Build early during algorithm development to define and test algorithms
    • Add software backbone as available
    • Use real products
    • Process test or simulated data (see the sketch after this list):
      • Process instrument test data as far forward as possible
      • Push simulated data backwards as far as possible
    • Use outputs to test the final processing system
      • Will find bugs in both, but overall beneficial
    • Update and use throughout the mission
      • Especially valuable during Cal/Val to try fixes, new constants
      • Data quality monitoring, quick-look processing
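As a hedged illustration of the simulated-data idea, the sketch below pushes simulated measurements forward through a stand-in prototype algorithm and compares against known truth. All names (make_simulated_frames, prototype_process) and numbers are hypothetical, not taken from the presentation or any mission software.

```python
# Minimal testbed sketch: push simulated data forward through a prototype
# algorithm and compare against known truth. Names and numbers are
# hypothetical stand-ins, not actual mission software.
import numpy as np

def make_simulated_frames(n: int, seed: int = 0):
    """Simulated raw range measurements (m): known truth plus noise."""
    rng = np.random.default_rng(seed)
    truth = 1.336e6 + 50.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, n))
    return truth + rng.normal(0.0, 0.02, n), truth

def prototype_process(raw):
    """Stand-in for the algorithm under test: a 5-point running mean."""
    kernel = np.ones(5) / 5.0
    return np.convolve(raw, kernel, mode="same")

raw, truth = make_simulated_frames(1000)
processed = prototype_process(raw)
# Trim the edges, where the running mean is biased by zero padding, then
# report the error. The same simulated frames can also be fed to the final
# processing system so the two chains cross-check each other.
err = processed[2:-2] - truth[2:-2]
print(f"RMS error vs. truth: {np.sqrt(np.mean(err ** 2)):.4f} m")
```

During Cal/Val the same harness lets candidate fixes or new constants be swapped in and rerun quickly, which is where the slide says the testbed earns its keep.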

  6. Consistency
    • Terms
      • Common, logical meanings, but make distinctions where useful
      • Example: Height vs. Range vs. Altitude
    • Units – in products, among algorithms
    • Correction and sign convention
      • Corrections ADD to the value to bring it closer to truth
    • Flag convention and design through the entire processing chain
      • Design early, use in the testbed
      • Document clearly for users – flags at later stages of processing depend on earlier ones and may not be meaningful
      • Separate data and flags (avoid “flag values”); output the calculated value if possible
      • Example: Bad(1) until tested Good(0); clear spares at the end (see the sketch below)
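A minimal sketch of the two conventions above, under assumed bit assignments and thresholds: RANGE_BAD, NAN_BAD, and the correction values are invented for this example, not taken from any real data product.

```python
# Sketch of the sign and flag conventions: corrections ADD toward truth,
# and every flag bit starts Bad(1) until its test clears it to Good(0).
# Bit positions, thresholds, and names are illustrative only.

RANGE_BAD = 0b0000_0001   # bit 0: value-in-window test
NAN_BAD   = 0b0000_0010   # bit 1: not-a-number test
SPARES    = 0b1111_1100   # bits 2-7: spares, cleared at the end
ALL_BAD   = 0b1111_1111

def apply_corrections(measured: float, corrections: list) -> float:
    """Corrections ADD to the measured value to bring it closer to truth."""
    return measured + sum(corrections)

def quality_flags(value: float) -> int:
    """Start with every bit Bad(1); each passing test clears its bit."""
    flags = ALL_BAD
    if -10_000.0 < value < 10_000.0:     # in-window test passed
        flags &= ~RANGE_BAD
    if value == value:                   # False only for NaN
        flags &= ~NAN_BAD
    return flags & ~SPARES               # clear spares last, per the slide

# Data and flags travel as separate fields; no sentinel "flag value" ever
# overwrites the computed measurement itself.
height = apply_corrections(2.437, [-0.012, 0.003])
record = {"height_m": height, "flags": quality_flags(height)}
print(record)   # -> roughly {'height_m': 2.428, 'flags': 0}
```

Keeping the flags in their own field means a downstream user can always see the computed value and decide how much to trust it, instead of losing it to a fill value.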

  7. El Niño – Painting the Pacific
    Photo courtesy of JPL/NASA

  8. Figure out what the data mean / If you don’t understand an answer – Ask
    • Waveform features
    • TOPEX oscillator drift error
    • SWH (significant wave height) drift as the PTR (point target response) changed
    • Tide gauge calibration
    • Trust, but verify

  9. TOPEX Waveforms

  10. Computing Power
    • Computer hardware is cheap compared to people’s time
    • Being able to process, reprocess, and reprocess again is extremely important during Cal/Val
    • Having a substantial amount of data, at least all of the Cal/Val period, online is crucial
    • Separate Development, Integration & Test, and Operational systems
    • Aim for ~10X throughput
    • After ~2 yrs, reprocess all data in <~3 months (a back-of-envelope check follows the list)
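A back-of-envelope check that the two targets on this slide are consistent with each other; the specific numbers below are illustrative assumptions, not from the presentation beyond the slide text.

```python
# Back-of-envelope check on the throughput targets above. Reprocessing
# ~2 years of data in ~3 months, while routine processing continues,
# implies roughly 9x real-time, so the ~10x target leaves some margin.
mission_months = 24     # data accumulated after ~2 years (assumed)
reprocess_months = 3    # window in which to reprocess all of it
ongoing_load = 1.0      # real-time processing continues during reprocessing
required = mission_months / reprocess_months + ongoing_load
print(f"Need >= {required:.0f}x real-time; ~10x leaves margin for reruns.")
# Need >= 9x real-time; ~10x leaves margin for reruns.
```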

  11. TOPEX – Jason-1 – Jason-2: 15+ yr Record
    Photo courtesy of JPL/NASA
