
HLT – Interfaces (ECS, DCS, Offline) (Alice week – HLT workshop 06-03-2007)


Presentation Transcript


  1. HLT – Interfaces (ECS, DCS, Offline) (Alice week – HLT workshop 06-03-2007) S. Bablok (IFT, University of Bergen)

  2. TOC • HLT interfaces • HLT ↔ ECS • HLT ↔ DCS • HLT ↔ OFFLINE • HLT calibration (Use Case) • additional slides

  3. HLT interfaces • ECS: • Controls the HLT via well-defined states (SMI) • Provides general experiment settings (type of collision, run number, …) • DCS: • Provides the HLT with current detector parameters (voltages, temperatures, …) → Pendolino • Provides DCS with data processed by the HLT (TPC drift velocity, …) → FED portal (Front-End-Device portal) • OFFLINE: • Interface to fetch data from the OCDB (OFFLINE → HLT) • Provides OFFLINE with calculated calibration data (HLT → OFFLINE)

  4. Data flow in HLT [diagram: internal vs. external components and detector responsibility — DCS side (Archive DB, PVSS, DCS portal with DIM subscriber, Pendolino with Pendolino-PubSub), ECS (ECS-proxy), OFFLINE (OCDB conditions, Shuttle, FES, MySQL, AliEn, Taxi with local HCDB cache), and HLT framework components (PubSub, TaskManager, datasinks/subscribers, detector algorithms, AliRoot CDB access)]

  5. HLT ECS interface • State transition commands from ECS • INITIALIZE, CONFIGURE(+PARAMS), ENAGE,START,… • Mapping to TaskManager states • CONFIGURE parameters: • HLT_MODE: the mode, in which the HLT shall run (A, B or C) • BEAM_TYPE: (pp (proton-proton) or AA (heavy ion)) • RUN_NUMBER: the run number for the current run • DATA_FORMAT_VERSION: the expected output data format version • HLT_TRIGGER_CODE: ID defining the current HLT Trigger classes • CTP_TRIGGER_CLASS: the trigger classes in the Central Trigger Processor • HLT_IN_DDL_LIST: list of DDL cables on which the HLT can expect event data in the coming run. The structure will look like the following: <CableName>:<DetetctorPart>,<CableName>:<DetetctorPart>,... • HLT_OUT_DDL_LIST: list of DDLs, on which the HLT can send data to DAQ

  6. ECS state diagram (reconstructed from the slide; intermediate states leave via implicit transitions once the cluster reports back):
  • OFF → INITIALIZING <<intermediate state>> on INITIALIZE (start_slaves); implicit transition to INITIALIZED
  • INITIALIZED → DEINITIALIZING <<intermediate state>> on SHUTDOWN (kill_slaves); implicit transition to OFF (slaves_dead / off)
  • INITIALIZED → CONFIGURING <<intermediate state>> on CONFIGURE + params (start + params); implicit transition to CONFIGURED (local_ready)
  • CONFIGURED → INITIALIZED on RESET (stop; processes_dead)
  • CONFIGURED → ENGAGING <<intermediate state>> on ENGAGE (connect); implicit transition to READY (ready)
  • READY → DISENGAGING <<intermediate state>> on DISENGAGE (disconnect); implicit transition back to CONFIGURED
  • READY → RUNNING on START (start_run); RUNNING reports running/busy
  • RUNNING → COMPLETING <<intermediate state>> on STOP (stop_run); implicit transition back to READY
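
  As an illustration of the "mapping to TaskManager states" mentioned on slide 5, a small sketch of a command/state lookup table; the table layout is an assumption, not the actual TaskManager implementation (state and command names follow the diagram above):

    #include <map>
    #include <string>
    #include <utility>

    // States taken from the diagram; intermediate states are left via the
    // "implicit transitions" once the cluster reports back.
    enum class State { OFF, INITIALIZING, INITIALIZED, CONFIGURING, CONFIGURED,
                       ENGAGING, READY, RUNNING, COMPLETING, DISENGAGING,
                       DEINITIALIZING };

    // (current state, ECS command) -> state entered by the transition
    static const std::map<std::pair<State, std::string>, State> kTransitions = {
      {{State::OFF,         "INITIALIZE"}, State::INITIALIZING},   // start_slaves
      {{State::INITIALIZED, "CONFIGURE"},  State::CONFIGURING},    // start + params
      {{State::INITIALIZED, "SHUTDOWN"},   State::DEINITIALIZING}, // kill_slaves
      {{State::CONFIGURED,  "ENGAGE"},     State::ENGAGING},       // connect
      {{State::CONFIGURED,  "RESET"},      State::INITIALIZED},    // stop
      {{State::READY,       "START"},      State::RUNNING},        // start_run
      {{State::READY,       "DISENGAGE"},  State::DISENGAGING},    // disconnect
      {{State::RUNNING,     "STOP"},       State::COMPLETING},     // stop_run
    };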

  7. HLT DCS interface • FED portal: • Dim Channels (Services published on the HLT side) • Implements partially the Fed Api • Subscriber component • PVSS panels on DCS side integrate data in DCS system • Pendolino: • Contacts the DCS Amanda server • Fetches current running conditions • Publisher component • Three Pendolinos, each with a different frequency (fast, medium, slow)

  8. HLT components to interface DCS [diagram: DCS side (Archive DB, PVSS, Pendolino) connected to the HLT via the DCS portal (DIM subscriber; interface PubSub – FED API [DIM]) and the Pendolino-PubSub data processor (interface PubSub – Pendolino [AliRoot]); detector responsibility, framework components and interfaces are marked]

  9. HLT  DCS dataflow • Purpose: • Storing of DCS related data in the DCS Archive DB (HLT-cluster monitoring [HLT]; TPC drift velocity, …. [detector specific]) • HLT side: • One node providing a special PubSub framework component implementing (partly) the FED API (DCS portal node) • DCS side: • Different PVSS panels: • A dedicated panel for HLT cluster monitoring • Integration of detector specific processed data in the PVSS panels of the according detector

  10. HLT  DCS dataflow • Purpose: • Storing of DCS related data in the DCS Archive DB (HLT-cluster monitoring [HLT]; TPC drift velocity, …. [detector specific]) • HLT Cluster has a dedicated DCS portal node • DCS portal acts as DIM server • DIM channels to detectors PVSS panels and HLT PVSS panels (DIM client) • implements a FedServer (DIM server) [partly] • "ConfigureFeeCom" command channel (setting log level) • Service channels (Single and Grouped service channels, Message channel, ACK channel) • all located in one DIM-DNS domain • 2 DCS PCs for HLT PVSS panels • worker node: PVSS panels to receive the monitored cluster data • this node will also connect to Pendolino(vice versa connection, see below) • operator node: PVSS to watch the data (counting room and outer world) • HLT cluster intern data is transported via PubSub system

  11. HLT  DCS dataflow (Hardware + Software components) Common Detector – DCS integration (over PVSS) Ordinary detector DCS nodes, connecting to the HLT portal in addition to their normal tasks. DCS Services of one detector are offered in Single and/or Grouped Service channel and can be requested by the PVSS of the according detector via DIM. TPC . . . TRD HLT HLT OnlineCalib. FEDClient (PVSS) HLT - cluster OnlineCalib. DIM-DNS domain FEDServer (Dim) Cluster monitoring HLT-DCSportal . . . • 2 HLT- DCS Nodes • (located in DCS counting room): • - worker node: PVSS panels to receive the monitored data; • operator node: PVSS panels to watch the data • (remotely: counting room and outer world) Pub-Sub connections The connections inside the cluster are based on the Pub-Sub framework.

  12. DCS  HLT dataflow HLT needs DCS data (temperature, current configuration, …): • Online analysis • Calibration data processing The required data can be acquired from the DCS Archive DB: • retrieval viaAMANDA Pendolino • request data in regular time intervals • about three Pendolinos with different frequencies are foreseen(three different frequencies – requesting different type of data) • HLT intern data is distributed via PubSub framework

  13. DCS → HLT dataflow (hardware + software components) [diagram: the AMANDA Pendolinos on the HLT-DCS portal send data requests to the AMANDA server for the HLT running on a DCS worker node (wn) next to the DCS Archive (PVSS DataManager) and receive the data response; the connections inside the HLT cluster are based on the PubSub framework]

  14. DCS  HLT dataflow Pendolino details: • Three different frequencies: • fast Pendolino: 10 sec - 1 min • normal Pendolino: 1 min - 5 min • slow Pendolino: over 5 min • Response time: • ~13000 values per second • e.g. If Pendolino needs data for N channels, for a period of X seconds and the channels change at a rate of Y Hz (with Y smaller than 1 Hz !), it will take: (N*X*Y) / 13000 seconds to read back the data. (given by Peter Chochula) • Remark: • The requested values can be up to 2 min old. (This is the time, that can pass until the data is shifted from the DCS PVSS to the DCS Archive DB)

  15. DCS data  HLT Remarks: • Amanda Pendolino can only be used to request data included in the DCS Archive DB • Requests of values with higher frequency than ~0.1 Hz need a different connection • requests only data from current run; older data can be requested from the OCDB (OFFLINE interface, see below) • Requests for huge amount of data should be requested via the FES of DCS(additional portal, will be similar to OFFLINE FES)

  16. HLT OFFLINE interface • Taxi portal • Requests OCDB and caches content locally (HCDB) • Provides calibration objects to Detector Algorithms (DA) inside the HLT • HCDB accessible via AliRoot CDB access classes • Shuttle portal • Provides calibration data to OFFLINE (OCDB) • Data exchanged via FileExchangeServer (FES,FXS) • Meta data stored in MySQL DB • Fetched by OFFLINE at end of the run

  17. HLT OFFLINE interface DA DA AliRoot CDB access classes ECS DA DA DA_HCDB AliRoot CDB access classes ECS-proxy • Taxi portal: triggers update current run number HCDB0 HCDB1 portal-taxi1 1. 1. Taxi0 portal-taxi0 Taxi1 OCDB

  18. HLT OFFLINE interface • How to access data from HCDB: string hcdbURL = “<local_path_to_hcdb_file_catalogue>”; string calibObj = “<name_of_calibration_object>”; Int_t runNumber = <current_run_number>; AliCDBManager *man = AliCDBManager::Instance(); AliCDBStorage *hcdb = man->GetStorage(hcdbURL.c_str()); hcdb->QueryCDB(runNumber); Int_t latestVersion = hcdb->GetLatestVersion( calibObj.c_str(), runNumber); AliCDBEntry *calibObject = hcdb->Get(calibObj.c_str(), runNumber, latestVersion); ... // and that’s it!! AliCDBEntry represents calibration objects

  19. HLT OFFLINE interface DA DA DA DA portal-shuttle1 (Subscriber) portal-shuttle0 (Subscriber) • Shuttle portal: FES1 MySQL1 FES MySQL Shuttle OFFLINE OCDB

  20. HLT OFFLINE interface • Shuttle portal:

  21. Information on the web http://wiki.kip.uni-heidelberg.de/ti/HLT/index.php/ECS-interface http://wiki.kip.uni-heidelberg.de/ti/HLT/index.php/Specification-HLT2OFFLINE-interface http://wiki.kip.uni-heidelberg.de/ti/HLT/index.php/UseCase-Calibration-HLT http://wiki.kip.uni-heidelberg.de/ti/HLT/index.php/Data_path_from_DCS_to_the_HLT http://wiki.kip.uni-heidelberg.de/ti/HLT/index.php/Integrating_HLT_data_in_the_DCS_system http://aliceinfo.cern.ch/Offline/Activities/ConditionDB.html

  22. Additional slides

  23. HLT – Interfaces [overview diagram: ECS controls the HLT via the HLT-proxy (run number, …); DCS provides values from the Archive DB/PVSS via the Pendolino and receives processed data through the DCS portal; event data arrives from the FEE over DDLs into the FEP nodes of the HLT cluster, and processed events go to DAQ; OFFLINE provides conditions from the OCDB via the Taxi portal and receives processed calibration data through the OFFLINE Shuttle]

  24. HLT condition dataflow / Use Case • Framework for data exchange • Initial settings (before Start-of-Run (SoR)) • ECS → HLT (over SMI: run number, beam type, mode, etc.) • OFFLINE → HLT (run and experiment conditions from the OCDB; local cache → HCDB) • During the run (after SoR) • DCS → HLT (current environment/condition values via the AMANDA Pendolino) • HLT → DCS (processed data via DIM-PVSS; e.g. drift velocity) • Processed data back to DAQ (also for a certain period after End-of-Run) • After the run (after End-of-Run (EoR)) • HLT → OFFLINE (the OFFLINE Shuttle requests data via the MySQL DB and the File Exchange Server (FES))

  25. Timing diagram [figure: timeline from Init over SoR to EoR, showing when ECS, DAQ, DCS, HLT and OFFLINE (SHUTTLE pre-processing) are active]

  26. HLT dataflow / Remarks • Goal: framework components shall be independent of the data • the data definition can be changed later without changing the model design and framework implementation • usage of already proven technologies (AMANDA, PVSS, DIM, AliRoot, PubSub framework) • Detectors / detector algorithms can define the required data later on • BUT: they have to make sure that their requested data is available in the connected systems (OCDB, DCS Archive, event data stream (from FEE)) • and they should limit their requests to the actually required amount of data (performance)
