
Observing Process: Astronomer's Integrated Desktop & Scheduling Block Based Observing



1. Observing Process: Astronomer's Integrated Desktop & Scheduling Block Based Observing
GBT e2e Software Review, May 3, 2005
Amy Shelton (ashelton@nrao.edu) and Nicole Radziwill (nradziwi@nrao.edu)

2. Ease of Use: Project Description from August 2004
[Diagram: the Astronomer's Integrated Desktop (Astrid) spans the BUILD, EDIT, RUN, and MONITOR stages of observing. Components shown include the GBT OT, SB Executor, Quick Look Data Display, Status Screen, Console Windows, Data Capture, and easy access to documentation and help. The Configuration API uses the Observing API; a Balancing API is also shown. Scheduling Blocks & Execution Log File and an Annotated Index of Observation feed the Observation Management Database (*e2e = Project Database).]
* Note that the APIs are very telescope-specific, yet are used to abstract the observation into terms not specific to the telescope.

3. GBT Observing Process
Why do we want an Integrated Desktop?
• Astronomers don't have to learn multiple applications to accomplish the observing task
• Remote observers launch only one application, which manages on-screen real estate well
• Error reports do not require knowledge of which application the error was reported in
Why do we want to transition to a Scheduling Block based system?
• Observing Motivations
  • Encourage up-front preparation – maximize the throughput of science per observing session
  • Enable dynamic scheduling
  • Facilitate remote observing
  • Balance interactivity needs with the need to optimize for high-frequency weather
  • Simplify routine operations such as regression tests and "all sky pointing"
  • Characterize observations better so that pipeline processing can be enabled in the future
  • Enable observation data mining – we can track what was observed more effectively
  • Facilitate the proposal review process – we can use data from past proposals to review current ones
• Technical Motivations
  • Enable more efficient troubleshooting – we can track what people did, and what errors occurred, much more easily
  • Code is written to accommodate a discrete number of well-defined levels of abstraction
  • Distinct categories of usage (astronomers use apps and application components, experts use HLAPIs, programmers use LLAPIs)
How do we move to a Scheduling Block based system?
• Standard Observing Modes, Scheduling Blocks, Observation archiving

4. Observation Process: Preparation & Execution
* Currently, the Observation Process focuses on single SB execution; the Science Program level is as yet unimplemented.
* Adopted on the GBT as defined by ALMA, but slightly customized for the GBT (e.g. details of ObsProcedure).
* NRAO e2e terminology is used throughout the presentation.

5. Observing Process: Data Model (customized for use with the GBT)
• Ties the observation to the proposal
• Allows data mining on GBT observations
• Schema relatively unchanged from last year
• ObservedIndex unimplemented until the system is ready for regular use by observers
• ObsTarget unimplemented until Source Catalogs are in place
• Security added
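As a rough illustration only, the relationship between these entities might be sketched as the Python dataclasses below. The names Proposal, Observation, ObservedIndex, and ObsTarget come from this slide; every field shown is an assumption, not the actual schema.

```python
# A minimal sketch (not the actual GBT schema) of how the data model ties an
# observation to its proposal. Class names follow the slide's terminology;
# all fields are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Proposal:
    proposal_id: str
    pi_name: str

@dataclass
class ObsTarget:
    """Unimplemented in the real system until Source Catalogs are in place."""
    source_name: str
    ra_j2000: float
    dec_j2000: float

@dataclass
class Observation:
    proposal: Proposal                     # ties the observation to its proposal
    scheduling_block_id: str
    targets: List[ObsTarget] = field(default_factory=list)
    observed_index: Optional[str] = None   # unimplemented until regular use
```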

6. High-Level Architecture
[Diagram: an Application contains 0..n Application Components (GUIs). Applications and Application Components use the HLAPIs; the Expert User also uses the HLAPIs directly; the Programmer uses the LLAPIs, which talk to the Control System. The Control System produces Observed Data (in an EDF) and Real-Time Data (streaming).]
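A rough Python sketch of this layering follows. The class and method names here are hypothetical, intended only to show how GUI components sit on HLAPIs, which in turn sit on telescope-specific LLAPIs.

```python
# Hypothetical sketch of the layered architecture; all names are assumptions.
class LLAPI:
    """Low-level API: direct, telescope-specific control-system access."""
    def set_device_parameter(self, device: str, parameter: str, value) -> None:
        print(f"control system: {device}.{parameter} = {value}")

class HLAPI:
    """High-level API: abstracts the observation away from telescope details."""
    def __init__(self, llapi: LLAPI) -> None:
        self._llapi = llapi
    def configure_continuum(self, frequency_mhz: float) -> None:
        # One HLAPI call may expand into many telescope-specific LLAPI calls.
        self._llapi.set_device_parameter("LO1", "frequency", frequency_mhz)

class ApplicationComponent:
    """GUI layer used by astronomers; expert users call the HLAPI directly."""
    def __init__(self, hlapi: HLAPI) -> None:
        self._hlapi = hlapi
    def on_configure_clicked(self, frequency_mhz: float) -> None:
        self._hlapi.configure_continuum(frequency_mhz)
```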

7. Astronomer's Integrated Desktop (Astrid)
• What is Astrid?
  • Astrid is a container from which you use various GBT applications, e.g. Observation Management and the Data Display.
  • Multiple application instances are supported, e.g. two Data Displays.
• Why is Astrid so important?
  • Simplifies observation startup: users simply type "astrid" at any Linux prompt.
  • Reduces the number of applications that observers have to launch to begin observing to ONE; most useful when observing remotely!
  • Observers do not have to know the difference between those applications to report issues.

8. Astrid Architecture
[Diagram: the same layering as the High-Level Architecture slide, with Astrid's Application Component Tabs in the GUI layer: the tabs use the HLAPIs, the Expert User uses the HLAPIs directly, and the LLAPIs talk to the Control System, which produces Observed Data (in an EDF) and Real-Time Data (streaming).]
• Application Component Tabs allow the user to launch multiple applications within a single container. They provide a GUI interface to the HLAPIs for all users.
• The HLAPIs are always available to the expert user to bridge the gap between non-interactive SB based observing and the desire for interactive observing. Useful for special-purpose tasks (e.g. balancing mid-scan) and commissioning.

9. Astrid Screen Shot
[Screen shot, with labeled elements: Drop-Down Menus, Tool Bar, Application Components, Application Component Log Window, and the Python Command Console – Expert User access to the HLAPIs, e.g. Balancing and Configuration.]
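For example, an expert user might type something like the following into the Python Command Console. Configure, Balance, and GetValue are directive names listed on the Scan Types & Observing Directives slide and are provided by the Astrid environment; the argument forms shown here are assumptions.

```python
# Hypothetical Python Command Console session; argument forms are assumptions.
Configure("my_continuum_config.py")   # load an observing configuration
Balance()                             # balance the IF system mid-session
value = GetValue("IFRack", "power")   # inspect a control-system parameter
print(value)
```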

10. Available Astrid Application Components
• Astrid – Launches a separate Astrid session.
• DEAP – The Data Extraction & Analysis Program; provides general data display and manipulation capabilities.
• Python Editor – A text editor that provides Python syntax highlighting.
• Text Editor – A text editor for general use.
• Data Display – Provides a real-time display of data plus offline data perusal.
• Logview – Used to examine engineering FITS files.
• Observation Management – Edit/Submit/Run Scheduling Blocks on the telescope.
• GBTStatus (in development) – Telescope status information.

11. Astrid User Help

12. Observing with Astrid
• Preparation:
  • Write your Scheduling Block(s) (see the sketch below).
    • Observation Management provides an editor environment with syntax highlighting
    • Or use your favorite editor (Observation Management provides an import utility)
  • Validate your Scheduling Block
    • This is done automatically when importing Scheduling Blocks into the Observation Management editor and when saving Scheduling Blocks to the database
  • Upload your Scheduling Block to the Observation Management database
• Observing:
  • Retrieve your saved Scheduling Block from the Observation Management database
  • Submit the Scheduling Block to the Job Queue
  • Promote your Scheduling Block from the Job Queue to the Run Queue
  • Monitor progress
    • Observation Management Monitor tab
    • Monitor telescope status with the GBTStatus application component
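For reference, a minimal Scheduling Block might look like the sketch below. Configure, Balance, and Track are names from the Scan Types & Observing Directives slide; the configuration file, source name, and keyword argument are assumptions rather than the actual SB syntax.

```python
# A minimal, hypothetical Scheduling Block (not actual GBT SB syntax).
Configure("xband_continuum.py")    # load the observing configuration
Balance()                          # balance the IF system before scanning
Track("3C48", scanDuration=60.0)   # track a source; keyword form is assumed
```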

13. Observing with Astrid
* In the unified environment of Astrid, you can edit an SB, submit an SB, monitor SB progress, check GBT status, and view continuum data with the Data Display.

14. Observing with Astrid (continued)
[Screen shot illustrating the same unified environment.]

15. Scan Types & Observing Directives
• Observing API Scan Types: Slew, Track, OffOn, OffOnSameHA, OnOff, OnOffSameHA, Tip, DecLatMap, DecLatMapWithReference, PointMap, PointMapWithReference, RALongMap, AutoFocus, AutoPeak, AutoPeakFocus, Focus, Nod, Peak
• Observing Directives: Break, execfile, DefineScan, SetSourceVelocity, SetValues, GetValue, Python-style comments, Annotation, Comment, Configure, Balance
* Documentation is available on the GB Wiki at Observing.ScanTypes & Software.ObservingDirectives
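To show how scan types and directives combine, here is another hedged sketch of an SB fragment. The names OffOn, Comment, Annotation, and SetSourceVelocity come from the lists above; the arguments and their units are assumptions.

```python
# Hypothetical SB fragment mixing a scan type with observing directives.
# Names are from the lists above; argument forms are assumptions.
Comment("Position-switched observation")      # note recorded with the run
Annotation("purpose", "spectral line test")   # annotate the observation
SetSourceVelocity(120.5)                      # source velocity (km/s assumed)
OffOn("MySource", scanDuration=120.0)         # Off/On position-switched pair
```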

16. AutoFocus, AutoPeak & AutoPeakFocus
• Syntax (note that all parameters are optional):
  • AutoFocus(source, frequency, flux, radius)
  • AutoPeak(source, frequency, flux, radius)
  • AutoPeakFocus(source, frequency, flux, radius)
• How does it work?
  • Finds the current beam location and receiver
  • Configures using the standard continuum configuration
  • Selects a nearby calibrator (uses the measured sky system temperature for Peaks)
  • Balances
  • Sets the source name (comes from Jim Condon's catalog & uses J2000 syntax)
  • Peaks/Focuses
• Parameter Info:
  • source is a string and specifies the name of a particular source in the pointing catalog to be used for calibration. The default is None.
  • frequency is a float and specifies the observing frequency in MHz. The default is the rest frequency used by the standard continuum configuration cases.
  • flux is a float and specifies the minimum acceptable calibration flux in Jy at the observing frequency. The default is 20 times the continuum point-source sensitivity.
  • radius is a float. The routine selects the closest calibrator within the radius (in degrees) having the minimum acceptable flux. The default is 10 degrees.
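Some illustrative calls follow. The signatures come from this slide (all parameters optional); the specific source name and values are examples only, not recommendations.

```python
# Usage sketches; signatures from this slide, argument values are examples.
AutoPeakFocus()                         # let the system pick a nearby calibrator
AutoPeak(source="0137+331")             # peak on a named pointing-catalog source
AutoFocus(frequency=9000.0, flux=5.0)   # 9000 MHz; accept calibrators >= 5 Jy
AutoPeak(radius=5.0)                    # search within 5 degrees, default flux
```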

17. Lessons Learned: Deploying Software on a Working Telescope
• Technology Transfer
  • Astrid deployment was delayed because technology transfer between the Software Development staff and the Observing Support staff is taking much longer than originally planned
• Staged Deployment
  • The Program layer is not yet implemented
  • Observers' expectations are already influenced by previous observing experiences at the GBT
  • Abuse at the SB layer: long SBs, "for" loops
  • Training & documentation are provided to encourage best practices

18. Lessons Learned: Deploying Software on a Working Telescope
• Up-front preparation
  • A major paradigm shift for GBT observers
  • Observing strategies can be developed minutes before observing, or even on the fly, but planning minimizes user error and lost telescope time
  • Constructing SBs requires forethought
  • Authorized projects must be entered into the database ahead of time
  • Observer names must be entered into the database ahead of time, and associated with their projects
  • SBs need to be validated for observational integrity
  • Ultimately this will make the most efficient use of telescope time
• Dedicated Forum for Software Development Response
  • Captures valuable user feedback
  • A visible indicator to Support Staff of Software Development's response to user-initiated suggestions/issues

19. Lessons Learned: Aligning Development with Organizational Goals
• Remote observing needs require simplified application startup
  • Make the most efficient use of bandwidth
  • Reduce the number of applications needed to observe
  • One of the major issues driving the development of Astrid
  • Early tests using VNC are very promising
• Importance of offline validation a priori
  • To reduce lost telescope time, SBs must be created before the scheduled observation
  • Validation currently catches syntactic errors ahead of time (see the sketch below)
  • Plans to expand the Validator to further reduce time lost to non-syntactic user error, e.g. passing an illegal file to Configure
• Interactive observing is still a requirement
  • Support commissioning activities, e.g. new receivers
  • Support pulsar observations, e.g. balancing outside an SB
  • Astrid's Python console provides access to HLAPIs that supplement the observing intent captured in SBs, providing on-the-fly access to the telescope
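As a sketch of the syntactic check only (assuming SBs are Python source, as the Python-highlighting editor suggests), offline validation could be as simple as parsing the block without executing it. The real Validator's interface and checks are not described in the slides.

```python
# Minimal sketch of offline syntactic validation of a Scheduling Block,
# assuming the SB is Python source. Not the real Validator's interface.
def validate_sb(path: str) -> bool:
    """Return True if the SB parses cleanly; print the first syntax error."""
    try:
        with open(path) as f:
            compile(f.read(), path, "exec")   # parse only, never execute
        return True
    except SyntaxError as err:
        print(f"{path}:{err.lineno}: {err.msg}")
        return False
```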

20. Lessons Learned: Additional Requirements
• Project Data Directory
  • A mechanism for specifying the project data directory is missing
  • One project, many sessions; sessions are kept in separate directories and may have overlapping Subscan numbers
• Need for Online / Online (monitor only) / Offline modes
  • One user should have control of the telescope at a time
  • One or more users would like to see what is going on, e.g. support staff, operators, collaborators
  • Operators have ultimate control
  • Users with upcoming observations need a mechanism for validating SBs and submitting them to the database
• Additional Control Mechanisms
  • E.g. turning a device manager on/off (to cope with legacy/visitor equipment)
  • How much direct interaction with the control system should be enabled in the technology?
• Usability Issues and Bugs
  • Accumulating user feedback, which will be prioritized and implemented as resource allocations permit

21. Enabling "Full Functionality" in 2005
Described at http://wiki.gb.nrao.edu/bin/view/Software/ObservingToolsRequests
• Institute Online/Offline Modes – C3
  • Provides the appropriate level of access to users at each stage of observing: Preparation, Execution & Data Reduction
• Improve Responses to Stops, Aborts & Control System Failures – C4
• Moving Sources & Source Catalogs – C5, C6
  • Eliminate the requirement for users to set source names and velocities
  • Eliminate the need for users to create their own source catalogs unless desired
• Improved SB Submission & FIFO Queueing Model – C6, C7
  • SB observing with Astrid currently requires much mouse clicking
  • SB Job/Run Queue modifications to support batch or interactive SB submission
  • Provide SB management capabilities to users
• Enable Offline Validation of SBs using a full-telescope software simulator bound to the production control system software – C8

22. Recap of Key Issues Remaining
1. Coverage of all GBT Standard Observing Modes (balancing, ephemerides, etc.)
2. Reliability, incl. enabling smooth recovery from control system failures or user-generated aborts, under all circumstances
3. Enabling Science Program Management
  • In high demand! Users are finding creative ways to build SBs now which simulate the functionality they would be provided at the Science Program level
  • Global variables
  • Management of long (many-hour) observing sessions
  • Existing (JAC) solution now / testbed for ALMA implementation; transition to the ALMA OT & Scheduler when mature
4. Implementing a GUI to build Scheduling Blocks
  • Perceived by some as the single most important item to deliver "ease of use"
  • Functionality is critically tied to delivery of Science Program management
  • A Java prototype built last year looks very similar to the ALMA OT, but builds single SBs and does not manage Science Programs
  • Should we allocate time to do a gap analysis and complete this? Should we reverse engineer SBs to the ALMA/JCMT GUI? Should we do this in 2006, or is it possible to get the functionality earlier?
5. Clean differentiation between the Science and Telescope Domains (a technical issue)
  • SBs are in the Science Domain
  • Expert users require more advanced interaction with the control system
  • How do we best abstract these needs?

23. Conclusions
• Because of the cooperative work between SDD and scientists this year, we can expect a complete transition of GBT observing for all standard observing modes to the Scheduling Block based system by the end of 2005.
• The up-front planning required of observers at that point will significantly reduce the burden on the scientific staff.
• Next year, we can turn our focus to usability issues and the Science Program level.
• The project came very close to collapse in December/January (resources were stretched too thin, and the system made available in 9/04 needed reliability improvements). Support staff agreed to adopt the SB vision for operational use in early 2005, and have since participated in improving the tools and ensuring readiness for release to visiting observers. This came with some growing pains but has ultimately been productive.
• Early adoption of Scheduling Blocks without the higher-level infrastructure has already yielded a huge payoff.
