
SPP/FIELDS TDS Flight Software Preliminary Design Review



  1. SPP/FIELDS TDS Flight Software Preliminary Design Review. Jason Hinze, University of Minnesota

  2. Introduction: TDS in Context – FIELDS Block Diagram

  3. Introduction: Sample TDS Events from STEREO

  4. Introduction: Related Documents
  • GI ICD (APL) – 7434-9066_SPP_GI_ICD
  • FIELDS ICD (APL) – 7434-9055_FIELDS_ICD
  • CDI ICD (FIELDS) – SPF_MEP_100_CDI_ICD
  • TDS ICD (FIELDS) – SPF_MEP_101_TDS_ICD
  • MAG ICD (FIELDS) – SPF_MEP_103_MAG_ICD
  • AEB ICD (FIELDS) – SPF_MEP_104_AEB_ICD
  • SWEAP ICD (FIELDS) – SPF_MEP_105_SWEAP_ICD
  • LNPS ICD (FIELDS) – SPF_MEP_106_LNPS_ICD
  • TDS FSW SDP (UMN) – SPF_TDS_002_SDP
  • TDS FSW SRS (UMN) – SPF_TDS_004_SRS

  5. Requirements: Be a Good Citizen of the Spacecraft
  • Don’t wear out the EEPROM.
  • Configure to a known state at power on.
  • Support remote load-to/dump-from RAM and EEPROM.
  • Implement a commandable safe mode.
  • Talk nice with the spacecraft:
    • Implement the communications protocol, including Virtual 1PPS and ITF protocol processing.
    • Receive and process the “Spacecraft Time and Status Message” packet.
    • Receive telecommands.
    • Send the “Instrument Critical Housekeeping Data” packet.
    • Send telemetry.

  6. Requirements: Be a Good Citizen of FIELDS
  • Manage failover with the DCB.
  • Talk nice with the DCB:
    • Implement the communications protocol, including Sample/Spacecraft Clock Message handling and CDI protocol processing.
    • Receive telecommands.
    • Send telemetry.
  • Control and monitor MAGi, SWEAP, AEB2, LNPS2.
  • Tell MAGi and SWEAP the time.
  • Send the Coordinated Burst Signal and a variety of reduced magnetometer data products to interested parties.

  7. Requirements: Produce Good Science Data
  • Time-tag science data accurately and precisely.
  • Collect, reduce, and send magnetometer science data (DC B-field):
    • Time-decimated data to the spacecraft.
    • Raw compressed data to the DCB.
  • Collect, evaluate, rank, and send TDS event data (AC E-field, AC B-field, SWEAP counts):
    • Raw compressed data to the DCB normally.
    • Reduced compressed data to the spacecraft when the DCB is offline.

  8. Design: TDS Hardware Overview

  9. Design: Software Architecture Diagram

  10. Design: Spacecraft Interface Subsystem
  • Implements the protocol for communication with the spacecraft.
    • Hardware – A/B UART RX/TX; Virtual 1PPS detection & timestamping.
    • Software – ITF protocol processing.
  • Processes the Spacecraft Time and Status Message:
    • Sends the V1PPS timestamp and associated MET to the Real-Time Clock subsystem.
    • Makes status information available to other subsystems.
  • Sends Instrument Critical Housekeeping Data as directed by the Instrument Status subsystem.
  • Sends telemetry as directed by the Telemetry Manager subsystem.
    • Routed to the active A/B UART, as determined by the source of the latest Spacecraft Time and Status Message (sketched below).
  • Receives telecommands and forwards them to the Command Manager subsystem.
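The A/B routing rule above can be made concrete with a minimal sketch. The names and types here are illustrative only, not the flight interface:

```c
/* Illustrative sketch of A/B UART routing: spacecraft-bound telemetry
 * goes out whichever side most recently delivered a Spacecraft Time
 * and Status Message. All identifiers here are hypothetical. */
#include <stddef.h>
#include <stdint.h>

typedef enum { UART_A, UART_B } uart_side_t;

static uart_side_t active_uart = UART_A;

/* Stub standing in for the low-level transmit routine. */
static void uart_tx(uart_side_t side, const uint8_t *buf, size_t len)
{
    (void)side; (void)buf; (void)len;
}

/* Called when a Spacecraft Time and Status Message is accepted. */
void sc_status_received(uart_side_t source)
{
    active_uart = source;
}

/* All spacecraft-bound telemetry is routed to the active side. */
void sc_send_telemetry(const uint8_t *pkt, size_t len)
{
    uart_tx(active_uart, pkt, len);
}
```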

  11. Design: DCB Interface Subsystem
  • Implements the slave CDI protocol for communication with the DCB.
    • Hardware – sCDI RX/TX; Sample Clock Message (F0) detection & timestamping.
    • Software – sCDI protocol processing.
  • Receives and processes the Spacecraft Clock Message (F1, F2, F3).
    • Sends the F0 timestamp and associated MET to the Real-Time Clock subsystem.
  • Sends telemetry as directed by the Telemetry Manager subsystem.
  • Receives telecommands and forwards them to the Command Manager subsystem.
  • Detects DCB failures and recoveries (see the sketch below):
    • Failure: TBD (~30 s?) of consecutive missing or bad Spacecraft Clock Messages.
    • Recovery: TBD (~30 s?) of consecutive good Spacecraft Clock Messages.
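A minimal sketch of the failure/recovery detector, assuming one Spacecraft Clock Message per check interval; the ~30-message thresholds mirror the TBD placeholder above, and all identifiers are hypothetical:

```c
/* Hypothetical consecutive-message counter for DCB failure/recovery
 * detection. Thresholds are TBD in the design (~30 s placeholder). */
#include <stdbool.h>
#include <stdint.h>

#define DCB_FAIL_THRESHOLD    30u  /* TBD: consecutive bad/missing msgs */
#define DCB_RECOVER_THRESHOLD 30u  /* TBD: consecutive good msgs */

typedef enum { DCB_ONLINE, DCB_OFFLINE } dcb_state_t;

static dcb_state_t dcb_state = DCB_ONLINE;
static uint32_t consecutive_bad;
static uint32_t consecutive_good;

/* Called once per expected Spacecraft Clock Message interval;
 * 'good' is false when the message was missing or failed checks. */
void dcb_clock_message_tick(bool good)
{
    if (good) {
        consecutive_bad = 0;
        if (dcb_state == DCB_OFFLINE &&
            ++consecutive_good >= DCB_RECOVER_THRESHOLD) {
            dcb_state = DCB_ONLINE;   /* trigger recovery actions */
            consecutive_good = 0;
        }
    } else {
        consecutive_good = 0;
        if (dcb_state == DCB_ONLINE &&
            ++consecutive_bad >= DCB_FAIL_THRESHOLD) {
            dcb_state = DCB_OFFLINE;  /* trigger failover actions */
            consecutive_bad = 0;
        }
    }
}
```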

  12. Design: MAGi Interface Subsystem
  • Implements the master CDI protocol for communication with MAGi.
    • Hardware – mCDI RX/TX; Sample Clock Message (F0) generation.
    • Software – mCDI protocol processing.
  • Sends the Spacecraft Clock Message (F1, F2, F3) to MAGi.
  • Sends mode commands to MAGi.
  • Collects raw magnetometer data from MAGi.
  • Reduces and sends/exposes magnetometer data products as follows (a decimation sketch follows this list):
    • Generates telemetry for the spacecraft from time-decimated data.
    • Generates flash/archive telemetry for the DCB from raw data.
    • Generates data reduced for the Shared Burst information and makes it available to the Instrument Status subsystem.
    • Generates data reduced for SWEAP and makes it available to the SWEAP Interface subsystem.
    • Generates data reduced for the FIELDS Coordinated Burst Signal and makes it available to the DCB Interface subsystem.
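The slide only states that spacecraft telemetry is built from time-decimated data; the decimation factor and averaging scheme below are assumptions chosen for illustration:

```c
/* Illustrative time decimation of raw MAG vectors: boxcar-average N
 * raw samples into one output vector. DECIMATE_N and the vector
 * layout are assumptions, not the flight data format. */
#include <stdint.h>

#define DECIMATE_N 16u  /* assumed raw vectors averaged per output */

typedef struct { int32_t x, y, z; } mag_vec_t;

mag_vec_t mag_decimate(const mag_vec_t raw[DECIMATE_N])
{
    int64_t sx = 0, sy = 0, sz = 0;
    for (unsigned i = 0; i < DECIMATE_N; i++) {
        sx += raw[i].x;
        sy += raw[i].y;
        sz += raw[i].z;
    }
    /* 64-bit accumulation avoids overflow before the divide. */
    return (mag_vec_t){ (int32_t)(sx / DECIMATE_N),
                        (int32_t)(sy / DECIMATE_N),
                        (int32_t)(sz / DECIMATE_N) };
}
```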

  13. Design: SWEAP Interface Subsystem
  • Implements the master CDI protocol for communication with SWEAP.
    • Hardware – mCDI RX/TX; Sample Clock Message (F0) generation.
    • Software – mCDI protocol processing.
  • Sends the Spacecraft Clock Message (F1, F2, F3) to SWEAP.
  • Sends the SWEAP-format magnetometer data vector to SWEAP.
  • Sends the “Shared Burst information” data to SWEAP.

  14. Design: AEB2 Interface Subsystem
  • Sets the values for the DACs on AEB2 (current and voltage biasing).
  • Controls the relays on PA3 and PA4 (through AEB2).
  • Collects the analog housekeeping signals for AEB2.

  15. Design: LNPS2 Interface Subsystem
  • Controls the MAGi power switch on LNPS2.
  • Collects the analog housekeeping signals for LNPS2.

  16. Design: Failover Management
  • On detection of DCB failure:
    • The Sample Clock is allowed to free-run, with a sub-cycle period of counts equivalent to one sample cycle (~0.874 s). Note that the sub-cycle portion of the counter is normally zeroed upon receipt of the Sample Clock Message (F0).
    • The telemetry stream to the DCB is paused.
    • The Real-Time Clock subsystem uses the V1PPS timestamp and associated MET (from the spacecraft) for MET-related calculations.
    • For long (TBD) failures, or by command, the telemetry bitrate to the spacecraft interface is increased.
  • On detection of DCB recovery:
    • The Sample Clock returns to normal functioning, being zeroed upon receipt of the Sample Clock Message (F0).
    • The telemetry stream to the DCB is resumed.
    • The Real-Time Clock subsystem uses the F0 timestamp and associated MET (from the DCB) for MET-related calculations.
  • The transition actions are sketched below.
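A minimal sketch of the two transition handlers; the stub functions stand in for the real subsystem interfaces and are hypothetical names, not the flight API:

```c
/* Hypothetical failover transition actions, mirroring the bullets
 * above. The stubs below represent the real subsystem interfaces. */
#include <stdbool.h>

typedef enum { MODE_SLAVE, MODE_MASTER } failover_mode_t;

static void sample_clock_set_free_run(bool on)      { (void)on; }
static void dcb_telemetry_set_paused(bool paused)   { (void)paused; }
static void rtc_select_reference(failover_mode_t m) { (void)m; }

void on_dcb_failure(void)
{
    sample_clock_set_free_run(true);    /* ~0.874 s free-run sub-cycle */
    dcb_telemetry_set_paused(true);     /* pause DCB telemetry stream */
    rtc_select_reference(MODE_MASTER);  /* V1PPS + spacecraft MET */
}

void on_dcb_recovery(void)
{
    sample_clock_set_free_run(false);   /* sub-cycle re-zeroed on F0 */
    dcb_telemetry_set_paused(false);    /* resume DCB telemetry stream */
    rtc_select_reference(MODE_SLAVE);   /* F0 + DCB MET */
}
```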

  17. Design: Real-Time Clock Subsystem
  • Maintains a monotonically increasing standalone real-time clock.
    • V1PPS and F0 timestamps are in RTC counts.
  • Receives MET reference information from two sources:
    • V1PPS timestamp and Spacecraft Time and Status Message MET from the Spacecraft Interface subsystem.
    • F0 timestamp and Spacecraft Clock Message MET from the DCB Interface subsystem.
  • The current failover mode determines which MET reference is used when METs are calculated:
    • In slave mode, the DCB-sourced MET reference is used.
    • In master mode, the spacecraft-sourced MET reference is used.
  • MET reference information is used to calculate MET from either RTC or Sample Clock values (see the sketch below), e.g. to fill the CCSDS header of a science data packet that was internally timestamped with the sample clock.
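A minimal sketch of deriving MET from an RTC timestamp and the currently selected reference pair. The field names, RTC tick rate, and MET representation are all assumptions made for illustration:

```c
/* Hypothetical MET derivation: a reference pair (RTC count latched at
 * V1PPS or F0, plus the MET at that instant) anchors the conversion;
 * elapsed RTC ticks are then scaled to MET units. */
#include <stdint.h>

#define RTC_TICKS_PER_SEC 32768u  /* assumed RTC rate, for illustration */

typedef struct {
    uint64_t ref_rtc;      /* RTC count latched at V1PPS or F0 */
    uint64_t ref_met_sub;  /* MET at that instant, in 1/65536 s units */
} met_ref_t;

/* Convert an RTC timestamp (e.g. a science-data datation stamp) to MET. */
uint64_t met_from_rtc(const met_ref_t *ref, uint64_t rtc_stamp)
{
    /* 64-bit math keeps the scaling exact for any plausible interval. */
    uint64_t elapsed_ticks = rtc_stamp - ref->ref_rtc;
    uint64_t elapsed_sub   = (elapsed_ticks * 65536u) / RTC_TICKS_PER_SEC;
    return ref->ref_met_sub + elapsed_sub;
}
```

In slave mode the `met_ref_t` would be refreshed from each F0 + Spacecraft Clock Message pair; in master mode, from each V1PPS + Spacecraft Time and Status Message pair.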

  18. Design: TDS Acquisition
  • Controls analog mux settings to select sources for channels 1-5.
  • Divides TDS Event RAM into TDS event buffers (event-sized chunks).
  • Maintains metadata for the TDS event buffers (see the sketch below):
    • Quality.
    • Datation timestamp.
    • Current mux settings.
    • Note that metadata lives in GP ECC RAM.
  • Calculates the quality of newly acquired events.
    • Quality can be adjusted well after the acquisition.
  • Feeds the addresses of low-quality TDS event buffers to the TDS acquisition hardware for recycling.
  • When sufficient downlink allocation has been accumulated, telemeters the highest-quality TDS event buffer.
    • When complete, the buffer is marked ‘empty’, the lowest quality possible.
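An illustrative layout for the buffer metadata and the ranking behavior described above. The struct fields, quality encoding, and buffer count are assumptions; only the rank/recycle/telemeter behavior comes from the design:

```c
/* Hypothetical TDS event buffer metadata (lives in GP ECC RAM) and
 * the best-buffer selection used when telemetering. */
#include <stdint.h>

#define NUM_EVENT_BUFFERS 64u  /* assumed partitioning of event RAM */
#define QUALITY_EMPTY      0u  /* 'empty' = lowest possible quality */

typedef struct {
    uint32_t quality;       /* 0 = empty/recyclable */
    uint64_t datation;      /* sample-clock timestamp */
    uint16_t mux_settings;  /* analog mux state at acquisition */
} tds_event_meta_t;

static tds_event_meta_t meta[NUM_EVENT_BUFFERS];

/* Return the index of the highest-quality buffer, or -1 if all empty. */
int tds_best_buffer(void)
{
    int best = -1;
    uint32_t best_q = QUALITY_EMPTY;
    for (unsigned i = 0; i < NUM_EVENT_BUFFERS; i++) {
        if (meta[i].quality > best_q) {
            best_q = meta[i].quality;
            best = (int)i;
        }
    }
    return best;
}

/* After telemetering completes, the buffer drops to the lowest quality
 * and becomes a prime candidate for recycling by the hardware. */
void tds_mark_empty(unsigned i)
{
    meta[i].quality = QUALITY_EMPTY;
}
```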

  19. Design: Interrupt and Task Management
  • Actions requiring hard real-time response are done in hardware and set up / post-processed in software whenever possible, e.g.:
    • Timestamping incoming V1PPS and F0.
    • Triggering outgoing F0.
  • Prefer to use interrupts as notifications of software work to be done, rather than as replacements for hardware.
    • An ISR’s preferred job is to post a task and possibly configure hardware for the next action (see the sketch below).
    • An ISR should be quick enough not to starve other ISRs or tasks unnecessarily. Maximum recommended duration: ~10 µs or ~100 cycles.
  • Most software work will be done in tasks.
    • “Task” means “a single task”, not “thread of execution”.
    • Tasks are posted on and dispatched from a priority queue.
    • Tasks should be quick enough not to starve other tasks unnecessarily. Maximum recommended duration: ~1 ms or ~10,000 cycles.
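A minimal sketch of the post-and-dispatch pattern, assuming a small bounded queue; the queue bound, function names, and linear-scan dispatch are illustrative, not the flight dispatcher:

```c
/* Hypothetical ISR-posts-a-task pattern: ISRs stay short (post plus
 * hardware reconfiguration) and the real work runs later as a
 * bounded-duration task dispatched by priority. */
#include <stddef.h>

#define MAX_TASKS 32u

typedef void (*task_fn)(void);

typedef struct {
    task_fn  fn;
    unsigned prio;  /* lower value = dispatched sooner */
} task_t;

static task_t queue[MAX_TASKS];
static size_t queue_len;

/* Post a task; called from ISRs, so it must be short and non-blocking.
 * (A flight implementation would guard against concurrent access.) */
void task_post(task_fn fn, unsigned prio)
{
    if (queue_len < MAX_TASKS)
        queue[queue_len++] = (task_t){ fn, prio };
}

/* Dispatch loop: repeatedly run the highest-priority pending task. */
void task_dispatch(void)
{
    while (queue_len > 0) {
        size_t best = 0;
        for (size_t i = 1; i < queue_len; i++)
            if (queue[i].prio < queue[best].prio)
                best = i;
        task_t t = queue[best];
        queue[best] = queue[--queue_len];  /* swap-remove */
        t.fn();                            /* should finish in ~1 ms */
    }
}
```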

  20. Design Maturity: STEREO TDS FSW Heritage
  • Software architecture pattern is highly similar. Differences:
    • On SPP, we have two “bosses”: the spacecraft and the DCB.
    • On SPP, we do not have the 1553 intermediate layer.
  • Many of the subsystems/modules have highly similar semantics:
    • TDS Acquisition
    • Telemetry Manager
    • Command Manager
    • Housekeeping
    • Instrument Status Reporting

  21. Design Maturity: STEREO TDS FSW Architecture Diagram

  22. Metrics: Boot PROM Usage
  • We expect the complexity of our boot / safe mode code on SPP to be similar to STEREO:
    • On SPP, we have two bosses to talk to (more complex).
    • On SPP, we don’t have a 1553 interface (less complex).
  • STEREO/WAVES DPU boot code size: 27.5 kB.
  • Estimated SPP/FIELDS TDS boot code size: 27.5 kB.
  • SPP/FIELDS TDS boot PROM size: 32 kB.
  • Margin: 14%.

  23. Metrics: EEPROM Usage
  • We expect the complexity of our operational code on SPP to be similar to STEREO:
    • On SPP, we have two bosses to talk to (more complex).
    • On SPP, we don’t have a 1553 interface (less complex).
    • On SPP, we have a similar number and type of children and peers.
  • STEREO/WAVES DPU operational code size: 228 kB.
    • *BUT* 130 kB of this was the RTEMS RTOS; 98 kB was our code.
    • On SPP, we will use a lightweight task dispatcher (10-20 kB), so we estimate our final operational code size to be 110-120 kB.
  • Estimated SPP/FIELDS TDS operational code size: ~115 kB.
  • SPP/FIELDS TDS EEPROM size: 512 kB.
  • If we divide the EEPROM into four banks, margin = 10%; into two banks, margin = 55% (worked arithmetic below).
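The banking margin arithmetic, written out using the ~115 kB code estimate above (a minimal worked equation, with B the bank size and C the code size):

```latex
% margin = (bank size - code size) / bank size, with a 512 kB EEPROM:
\[
\text{margin} = \frac{B - C}{B}, \qquad
\underbrace{\frac{128 - 115}{128} \approx 10\%}_{\text{four banks}}, \qquad
\underbrace{\frac{256 - 115}{256} \approx 55\%}_{\text{two banks}}
\]
```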

  24. Metrics: GP RAM Usage
  • We expect our non-executable general-purpose (GP) RAM usage on SPP to be similar to STEREO:
    • On SPP, we have a similar number and complexity of code modules.
    • On SPP, we have a similar amount of non-TDS-event science data to store in general-purpose RAM.
  • STEREO/WAVES DPU non-executable GP RAM usage: 412 kB.
  • Estimated SPP/FIELDS TDS operational code size: 115 kB.
  • Total estimated SPP/FIELDS TDS GP RAM usage: 527 kB.
  • SPP/FIELDS TDS GP RAM size: 1 MB.
  • Margin: 48%.

  25. Metrics: CPU Usage
  • On STEREO/WAVES (SPARCv7 at 9.8304 MHz):
    • Everything but TDS event processing took 7%.
    • TDS event processing required copying event data from FIFOs to TDS event RAM, then calculating a quality metric; the copying dominated CPU usage.
    • The CPU couldn’t keep up with the maximum theoretical TDS hardware event rate, so all spare CPU cycles were devoted to event processing to maximize the effective TDS event rate.
  • On SPP/FIELDS (SPARCv8 at 9.6 MHz):
    • We expect a similar CPU load for administrative work (7%).
    • TDS event processing requires only calculating the quality metric.
    • Using a nominal TDS event size of 64k multichannel samples and a sample rate of 1.92 MHz, the time between TDS hardware events is 34 ms.
    • Experience shows that we can calculate a decent quality metric using simple arithmetic on ~1000 samples (a sketch follows this list).
    • 10 cycles/sample × 1000 samples / 9.6 MHz ≈ 1 ms (~3% of CPU).
  • Therefore, we estimate ~10% CPU usage to meet our requirements.
  • Margin: 90%.
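The STEREO-heritage metric itself is not specified on this slide; the peak/energy scan below is only an illustration of the claimed cost (simple arithmetic, roughly 10 cycles per sample over ~1000 samples):

```c
/* Hypothetical quality metric matching the CPU estimate above: a
 * single pass of simple arithmetic over a decimated slice of the
 * event. The scoring formula is an assumption for illustration. */
#include <stdint.h>
#include <stdlib.h>

#define QUALITY_SAMPLES 1000u

uint32_t tds_quality(const int16_t *samples)
{
    uint32_t peak = 0, energy = 0;
    for (unsigned i = 0; i < QUALITY_SAMPLES; i++) {
        uint32_t a = (uint32_t)abs(samples[i]);
        if (a > peak)
            peak = a;
        energy += a >> 4;  /* coarse accumulated amplitude */
    }
    /* Crude score favoring large, sustained signals. */
    return peak + (energy / QUALITY_SAMPLES);
}
```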

  26. Development Environment: Tools and Techniques
  • Highly orthogonal C code modules.
  • Minimal SPARC assembly language.
  • GCC cross-compiler suite.
  • Subversion (SVN) repository.
  • Heritage code from STEREO/WAVES FSW.
  • Shared code with FIELDS DCB FSW.
  • Mac OS X (Unix) laptops & workstations.

  27. Development Environment: GSE
  • PC laptop running GSEOS.
  • Spacecraft Emulator.
  • DCB Emulator (FIG).
  • MAG Emulator (FIG).
  • SWEAP Emulator (FIG).

  28. Development Environment: Long-Term Preservation
  • After CDR, we will freeze the version of the GCC cross-compiler suite.
  • After launch, we will build a dedicated long-term maintenance development environment consisting of the following:
    • Dedicated Mac OS X workstation with frozen OS and frozen GCC cross-compiler suite.
    • Dedicated PC laptop running GSEOS.
    • Dedicated ETU of TDS hardware.
    • Dedicated Spacecraft Emulator.
    • Dedicated DCB Emulator (FIG).
    • Dedicated MAG Emulator (FIG).
    • Dedicated SWEAP Emulator (FIG).

  29. Testing and Verification: Unit Testing
  • We will develop scripted unit test suites in tandem with FSW module development.
  • Scripted unit test suites for each module will include tests against all Level 5 verification matrix requirements relevant to that module.
  • Modules will be tested in situ: scripted unit tests will quiesce as much of the system as possible and focus on exercising the module under test while limiting interaction effects.
  • Ongoing results of unit tests against the Level 5 verification matrix will be included in monthly FSW metrics reports.

  30. Testing and Verification: Integration Testing
  • When the bulk of the FSW has been roughed out, we will begin development of scripted integration test suites.
  • Scripted integration test suites will eventually include tests against all Level 5 verification matrix requirements.
  • Scripted integration tests will exercise as much of the system as possible, maximizing interaction effects.
  • Once integration testing has begun, ongoing results of integration tests against the Level 5 verification matrix will be included in monthly FSW metrics reports.

  31. Testing and Verification: Acceptance Testing
  • When active development on the FSW begins to slow down, we will perform long-duration testing using the previously developed scripted test suites.
  • We will also perform stress testing, developing additional test scripts as needed.
  • When the FM is ready for delivery to UCB and long-duration testing shows that all requirements on the Level 5 verification matrix are satisfied, we will snapshot the FSW as a “Gold Master Candidate”.
  • Before delivery to UCB, we will run the full scripted test suite on the FM loaded with the Gold Master Candidate.
  • When this configuration satisfies all of the requirements on the Level 5 verification matrix and the TDS Instrument Lead, TDS FSW Lead, and TDS QA sign off on it, we will rename the Gold Master Candidate snapshot the “Gold Master”.

  32. Maintenance: Before CDR
  • Before CDR, we will manage FSW maintenance informally.
  • The TDS FSW Lead will enter relevant problem/change-related notes in their development log.
  • Maintenance work is performed in the standard development environment.

  33. Maintenance: Between CDR and Launch
  • Between CDR and launch, we will formally manage FSW maintenance:
    • FIELDS/TDS team members will generate Software Problem Reports (SPRs) and/or Software Change Requests (SCRs) as necessary.
    • SPRs and SCRs will be evaluated by the TDS Instrument Lead, TDS FSW Lead, and TDS QA.
    • SPRs and SCRs will be tracked in Excel spreadsheets managed by the TDS Instrument Lead.
    • SPRs and SCRs will be brought to formal closure with the concurrence of the FIELDS System Engineer.
  • Maintenance work is performed in the standard development environment.

  34. Maintenance: After Launch
  • After launch, we will formally manage FSW maintenance:
    • FIELDS/TDS team members will generate Software Problem Reports (SPRs) and/or Software Change Requests (SCRs) as necessary.
    • SPRs and SCRs will be evaluated by the TDS Instrument Lead, TDS FSW Lead, and TDS QA.
    • SPRs and SCRs will be tracked in Excel spreadsheets managed by the TDS Instrument Lead.
    • SPRs and SCRs will be brought to formal closure with the concurrence of the FIELDS System Engineer.
  • Maintenance work is performed in the dedicated long-term maintenance development environment.
  • Project-level CCB concurrence will be obtained before flight use.

  35. FSW Peer Review RFAs/Recommendations (25 October 2013)
  • DCB/TDS – shared RFA
  • TDS – recommendations

  36. Peer Review Follow-up: RFA(s)
  • Decide which software modules will be shared between FIELDS1 and FIELDS2. Add to the PDR presentation slides something to explain this code sharing, who is responsible for which part of the code, and how the sharing will occur.
    • We have started a list of potentially sharable modules.
    • We have informally defined a “spectrum of shareability”:
      • Drop-in library functions, e.g. data compression functions.
      • Copy-and-modify modules, e.g. mag interface software.
      • Reference modules, e.g. spacecraft interface software.
    • Instrument Leads and Flight Software Leads will evolve the list as design and implementation efforts continue.
    • At the FIELDS peer review, we agreed to do an early trial of code sharing with the data compression routines.
    • UCB will maintain primary ownership of shared modules, as they have more heritage in the area of the shared modules.
    • UMN will propagate UCB changes into local copies to avoid CM issues related to unexpected changes of underlying code.

  37. Peer Review Follow-up: Suggestion(s)
  • Boot mode: consider remaining in boot mode (rather than waiting a while and automatically starting operational code) if the Startup Mode bit on the spacecraft interface says ‘safe mode’ (then promote by command).
    • The TDS FSW will honor the Startup Mode bit, remaining in boot/safe mode if this bit is 0.
  • Determine criteria for switching clocks (forward and back) and how much software is involved.
    • We have developed a preliminary procedure for switching between the real-time clock references synchronized to the spacecraft and to the DCB at failover state transitions, as described in the Failover Management and Timekeeping slides of this presentation.

  38. Conclusion
  • We have no outstanding issues.
  • The preliminary design of the TDS FSW meets requirements.
  • The TDS FSW is ready to move into ETU development.

  39. Backup Slides

  40. FIELDS Flight Software Level 4 Requirements, Part 1

  41. FIELDS Flight Software Level 4 Requirements, Part 2

  42. TDS Flight Software Level 4 Requirements

  43. System Manager Subsystem Level 5 Requirements

  44. Spacecraft Interface Subsystem Level 5 Requirements

  45. DCB Interface Subsystem Level 5 Requirements

  46. MAGi Interface Subsystem Level 5 Requirements

  47. SWEAP Interface Subsystem Level 5 Requirements

  48. AEB2 Interface Subsystem Level 5 Requirements

  49. LNPS2 Interface Subsystem Level 5 Requirements

  50. TDS Acquisition Subsystem Level 5 Requirements
