This report outlines the operational verification processes at the Hydrometcenter of Russia, detailing the hardware and software used and the challenges faced during the installation of VERSUS and its patches. We address issues related to conditional verification and to errors in our BUFR files. Additionally, we discuss the establishment of confidence intervals (CIs) for model performance assessment, showcasing methodologies such as bootstrapping in R. Our findings highlight the significance of model quality discrimination through geographical mappings, with gratitude expressed to the colleagues who supported us throughout the implementation process.
Operational verification in Russia
Anastasia Bundel and Anatoly Muravyev
Hydrometcenter of Russia
Hardware and Software
• CPU: Intel(R) Xeon(R) 2.33 GHz
• RAM: 16 GB
• Hard disk: 30 GB
• Linux 2.6.18, Fedora Core release 6 (Zod)
• Web server: Apache/2.2.3 (Fedora)
• MySQL 5.0.22 with the InnoDB storage engine
• PHP 5.1.6
Other software taken from VERSUS_INSTALLATION.
Problems in installing VERSUS 2 and patches
• Correctly setting the -fPIC option when compiling the BUFR library on the 64-bit machine
• Problem with stratification for all Russian stations: solved in PATCH_03
• Some new model features (n_fcs_ensamble, id_model_feature) were not installed automatically by PATCH_03; we added them by running MySQL commands
Problems left:
• Conditional verification: a "registered_grid" error message
• Errors in our BUFR files, e.g. 6- and 12-h precipitation totals are assigned wrong times; some other variables still need to be checked
Issues of operational functioning
We plan to implement VERSUS on a computer with faster access. Questions of organization within the Hydrometcenter of Russia:
• operational creation of BUFR files;
• preparation of upper-air observations and feedback files;
• accumulation of BUFR and GRIB files, and scripts to put these files into the VERSUS data directory.
Could verification in VERSUS be run from a script instead of the GUI, so that the whole process can be automated?
Confidence Intervals
• Confidence intervals (CIs) for all scores and skill scores are highly important (MET experience)
• R scripts: bootstrapping code has been written and run on test data (see the sketch below)
• Graphics: several R graphing tools for depicting CIs in quality-assessment plots have been tested
• Expectations: forecast verification and discrimination of models' quality in geographical mappings
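The operational scripts are not reproduced here; the following is only a minimal sketch of such a bootstrap CI computation in R, assuming the boot package (fc and obs are placeholder vectors, not our verification data):

# Minimal sketch: bootstrapped CIs for the MSE of a forecast series
library(boot)

set.seed(1)
fc  <- rnorm(28, mean = 2)        # placeholder forecasts (one per case)
obs <- fc + rnorm(28, sd = 0.8)   # placeholder matching observations

# Statistic: MSE over a resampled set of (forecast, observation) pairs
mse_stat <- function(d, idx) mean((d$fc[idx] - d$obs[idx])^2)

b <- boot(data.frame(fc = fc, obs = obs), statistic = mse_stat, R = 1000)

# 95% CIs: "perc" is the percentile method, "bca" the adjusted
# (bias-corrected and accelerated) percentile method
boot.ci(b, conf = 0.95, type = c("perc", "bca"))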
Example of MSE bootstrapped CIs
• MSE of St. Petersburg precipitation forecasts with the RHM semi-Lagrangian model (PL) and the T169 spectral model (SM)
• CIs at the 95% confidence level: blue for PL, red for SM
[Figure: two panels, percentile method and adjusted percentile method. Initial dates: 2010 Jul 17 to 2010 Aug 13, 12 UTC; lead times: 6, 12, …, 120 h]
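One possible way to draw such CI-vs-lead-time curves in base R graphics is sketched below; all numbers are placeholders rather than the actual St. Petersburg results:

# Placeholder scores and CI bounds for two models across lead times
lead   <- seq(6, 120, by = 6)
mse_pl <- 1.0 + 0.010 * lead
mse_sm <- 1.3 + 0.012 * lead
lo_pl  <- mse_pl - 0.30; hi_pl <- mse_pl + 0.30
lo_sm  <- mse_sm - 0.35; hi_sm <- mse_sm + 0.35

plot(lead, mse_pl, type = "b", col = "blue",
     ylim = range(lo_pl, hi_pl, lo_sm, hi_sm),
     xlab = "Lead time (h)", ylab = "MSE",
     main = "MSE with 95% bootstrap CIs (placeholder data)")
lines(lead, mse_sm, type = "b", col = "red")
# Vertical bars depict the CIs, blue for PL and red for SM
arrows(lead, lo_pl, lead, hi_pl, angle = 90, code = 3, length = 0.03, col = "blue")
arrows(lead, lo_sm, lead, hi_sm, angle = 90, code = 3, length = 0.03, col = "red")
legend("topleft", legend = c("PL", "SM"), col = c("blue", "red"), lty = 1)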
Models' quality discrimination in geographical mappings
• Ratio of the MSEs of two models: an MSE about 2 times greater indicates significantly lower quality (an analog of the Fisher F test), i.e. MSE1/MSE2 ≥ 2, or equivalently RMSE1/RMSE2 ≥ √2 for the RMSEs of the two models.
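A hedged sketch of this F-ratio check in R (err_pl and err_sm are placeholder error series, i.e. forecast minus observation, not our data):

# Comparing two models' MSEs with an F-ratio, assuming roughly
# independent, zero-mean Gaussian errors
set.seed(1)
err_pl <- rnorm(28, sd = 1.0)   # placeholder errors, semi-Lagrangian model
err_sm <- rnorm(28, sd = 1.5)   # placeholder errors, spectral model

mse_ratio <- mean(err_sm^2) / mean(err_pl^2)

# 95% critical value of the F distribution; with n = 28 cases per model
# it is about 1.9, close to the "2 times greater MSE" rule of thumb
n <- length(err_pl)
f_crit <- qf(0.95, df1 = n, df2 = n)
mse_ratio > f_crit              # TRUE flags significantly lower quality

Under these assumptions, the critical ratio for samples of a few dozen cases falls near 2, which is consistent with the rule of thumb stated above.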
We would like to express our gratitude to Adriano Raspanti, Angela Celozzi, Flora Gofa, and Filodea Pastorelli, who patiently helped us at every stage of the VERSUS implementation, and without whom it would not have been possible!