
Test suites for gLite CLI

CERN-INTAS SA3 meeting, July 4, 2008, Dubna, Russia. Test suites for gLite CLI: work report. Dmitry Zaborov (ITEP-Moscow), Protvino group. An efficient way of testing a command line interface; test suite for Data Management CLI (lcg_utils); test suite for gLite WMS CLI.




Presentation Transcript


  1. CERN-INTAS SA3 meeting, July 4, 2008, Dubna, Russia
  Test suites for gLite CLI. Work report.
  Dmitry Zaborov (ITEP-Moscow), Protvino group
  • An efficient way of testing a command line interface
  • Test suite for Data Management CLI (lcg_utils)
  • Test suite for gLite WMS CLI
  • Update on the UI test suite

  2. Overview
  • The UI test suite has been updated
    • Now matches gLite 3.1
  • Two new test suites have been developed
    • Both requested by the [CERN] gLite certification group
    • Both can be viewed as specialized extensions of the UI test suite
    • Both aim to cover as fully as possible the use cases and supplementary options of the CLI
    • Each covers an important and heavily used subset of gLite UI commands
    • The tests are implemented in bash
  • The test suites have common application domains
    • Middleware certification (currently the main application)
    • Validation of new UI installations (can be recommended for the afs UI)
    • Bug testing

  3. An efficient approach to CLI testing
  LCG DM and gLite WMS CLI are sophisticated test targets
  • The number of options (n) and variants (m) of most commands is too large to test every imaginable option combination one by one: this would result in m * 2^n ~ O(1000) test cases, making the test suite practically unusable.
  • Fortunately, not every option combination is used equally often. Moreover, many options act completely independently and therefore do not need to be tested together. Thus one can drop most test cases from consideration and keep only the most used and representative ones, without substantial impact on the test suite's coverage (= the aggregate of detectable bugs and problems).
  Several ideas make the effort feasible
  • "Focus on one thing": look only for UI failures. Service-related and resource-related failures should be tolerated whenever possible, i.e. when the failure is unlikely to be caused by a UI problem.
  • "Disregard command history": the order in which the commands are run is disregarded, and the test cases are grouped as is convenient. This approach is well justified for our client-oriented test suites because client libraries have no memory in which to retain an erroneous state. Note, however, that Grid services do have memory by definition, so command ordering should not be neglected in service-oriented test suites.
  • "Be simple": no complex dependencies are assumed between different options. Only simple types of conflict, such as "two options do not work together", are looked for.
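The "focus on one thing" idea can be sketched as a small bash wrapper. This is a hypothetical illustration, not code from the suite; the error patterns and the `run_ui_test` name are invented. A failed command is reported as a UI failure only when its output does not match known service-side errors.

```shell
#!/bin/bash
# Hypothetical sketch: tolerate service-side failures, flag only likely
# UI-side problems. The patterns below are illustrative examples.
SERVICE_ERRORS='Connection refused|Connection timed out|No space left|SRM_FAILURE'

run_ui_test() {
    local desc="$1"; shift
    local out rc
    out=$("$@" 2>&1); rc=$?
    if [ "$rc" -eq 0 ]; then
        echo "PASS: $desc"
    elif echo "$out" | grep -Eq "$SERVICE_ERRORS"; then
        echo "WARN: $desc (service-side failure, tolerated)"  # not a UI bug
    else
        echo "FAIL: $desc"                                    # likely a UI bug
        return 1
    fi
}
```

For example, `run_ui_test "copy to SE" lcg-cp file:/tmp/f <SURL>` would produce a WARN rather than a FAIL if the SE refuses the connection.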

  4. Data Management test suite
  • Package org.glite.testsuites.ctb/DM
  • Tests are organized in 9 bash scripts. The only mandatory argument is the SE hostname (two SE hostnames for the file replication test)
    • Example: DM-lcg-cr.sh lxb1921.cern.ch
  • The following commands are tested: lcg-cr, lcg-cp, lcg-del, lcg-la, lcg-lr, lcg-lg, lcg-aa, lcg-ra, lcg-rep, lcg-rf, lcg-uf, lcg-gt, lcg-sd, lcg-getturls, and lcg-ls
  • Most lcg- commands accept the name of the source and/or destination file in different formats (SURL, LFN, GUID, gsiftp:, file:). All possible variants of these are covered by the test suite
    • Example: DM-lcg-alias tests the following use cases of lcg-la: lcg-la GUID, lcg-la SURL, lcg-la LFN, lcg-la ALIAS (a second LFN)
  • By default all commands are tested without any special options. The test scripts also accept additional options, which activate the corresponding options of the lcg- commands. Options can be combined, so all important options of the lcg- commands can be tested
    • Example: 'DM-lcg-ls.sh -v -t 20 --vo cms --nobdii --defaultsetype srmv2 lxb1921.cern.ch' tests lcg-ls with all the most important options it accepts
  • For more detail read the documentation: https://twiki.cern.ch/twiki/bin/view/LCG/SAMTests#Data_Management_DM
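The format-variant coverage can be sketched as a generic loop. This is an assumed illustration, not the actual DM scripts; `test_formats` and the commented identifiers are invented placeholders.

```shell
#!/bin/bash
# Hypothetical helper: run one command over every supported identifier
# format, as DM-lcg-alias does for lcg-la (GUID, SURL, LFN, ALIAS).
test_formats() {
    local cmd="$1"; shift
    local id
    for id in "$@"; do
        if "$cmd" "$id" > /dev/null 2>&1; then
            echo "PASS: $cmd $id"
        else
            echo "FAIL: $cmd $id"
        fi
    done
}

# Illustrative call; the identifiers would come from an earlier lcg-cr:
# test_formats lcg-la "$GUID" "$SURL" "$LFN" "$ALIAS"
```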

  5. WMS CLI test suite
  • Package org.glite.testsuites.ctb/WMS/WMS-cli
  • Tests are organized in 16 bash scripts. No arguments or options need to be supplied
  • The following commands are tested: glite-wms-job-delegate-proxy, glite-wms-job-list-match, glite-wms-job-submit, glite-wms-job-cancel, glite-wms-job-output, glite-wms-job-perusal, glite-wms-job-info, glite-wms-job-logging-info, and glite-wms-job-status (the last two are symbolic links to glite-job-logging-info and glite-job-status respectively)
  • All more or less important options are tested
  • Two different approaches are combined in the test suite to achieve an efficient structure:
    • "One option - one test". Used for the "basic cycle" tests, which include glite-wms-job-submit, glite-wms-job-status, and glite-wms-job-output calls. The choice is dictated by the fact that job output can be retrieved only once per job, so one needs a job-submit for each job-output.
    • "One command - one test". Used for specialized and less used commands, such as glite-wms-job-list-match or glite-wms-job-perusal, and for certain options of glite-wms-job-submit.
  • For more detail see https://twiki.cern.ch/twiki/bin/view/LCG/SAMTests#WMS_CLI
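A "one option - one test" basic cycle could be driven by a wrapper like the following. This is a hypothetical, simplified sketch (not the suite's code): the JDL argument and the parsing of the job ID from the submit output are omitted, and the three commands are passed in as parameters so the driver itself carries no gLite dependency.

```shell
#!/bin/bash
# Hypothetical driver for the "one option - one test" basic cycle. Since
# job output can be retrieved only once per job, every tested option gets
# its own submit/status/output sequence. In the real suite the commands
# would be glite-wms-job-submit, -status and -output.
basic_cycle() {
    local submit="$1" status="$2" output="$3" opt="$4"
    local jobid
    jobid=$($submit $opt 2>/dev/null) || { echo "FAIL: submit $opt"; return 1; }
    $status "$jobid" > /dev/null 2>&1  || { echo "FAIL: status $opt"; return 1; }
    $output "$jobid" > /dev/null 2>&1  || { echo "FAIL: output $opt"; return 1; }
    echo "PASS: basic cycle $opt"
}

# One fresh job per tested option, e.g.:
# for opt in "" "--valid 12:00"; do
#     basic_cycle glite-wms-job-submit glite-wms-job-status \
#                 glite-wms-job-output "$opt"
# done
```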

  6. UI test suite updates
  • Updates to match gLite 3.1
    • Obsolete commands (such as glite-job-submit) removed from the test list
    • Accounts for changes in the behavior of some commands (e.g. voms-proxy-init with the -voms/-confile/-userconf options) and some Glue schema changes (certain attributes moved)
    • Various minor improvements and fixes
  • One new test
    • UI-myproxy-change-pass-phrase
  • The Twiki page is kept up to date
    • https://twiki.cern.ch/twiki/bin/view/LCG/SAMTests#User_Interface_UI

  7. Practical results
  • The test suites are used for gLite middleware certification, thus helping to prevent new bugs in gLite
  • Several bugs found:
    • Bug 35097: Segmentation fault in lcg-sd
    • Bug 34950: lcg-sd doesn't work with an SRMv2 request token (already fixed)
    • Bug 38073: Options -g and -l of lcg-rf do not work together
    • Bug 35698: Options --from and --to of glite-job-logging-info have a one-hour offset (already fixed)
    • Bug 35706: Documentation refers to a --config-vo option which is not supported by the glite-wms commands
  • Certain known bugs are detectable by the test suites (so they can be monitored):
    • Bug 24963: glite-job-submit --valid hh:mm fails
    • Bug 34420: WMS Client: glite-wms-job-submit option --valid does not accept any time value
    • Bug 30989: glite-wms-job-status -all systematically fails
    • Bug 36573: voms-proxy-info has bad return codes
    • Bug 33459: voms-proxy-info on a non-VOMS proxy gives a misleading error

  8. Conclusion
  • Two new test suites have been developed, covering two important sets of gLite commands:
    • Data Management CLI (lcg_utils)
    • gLite WMS CLI (glite-wms-job- commands)
  • The UI test suite has been updated
  • The test suites implement the concept of efficient testing: an attempt to maximize the coverage and utility of a test suite while minimizing its size and complexity
  • The test suites are used for certification of gLite patches and updates, and can also serve to verify UI installations (e.g. the afs UI), to search for new bugs, and more
  • TWiki-based documentation exists
  • The author remains available to support these packages

  9. Efficient testing of the Command Line Interface of a Grid UI
  D. Zaborov
  https://twiki.cern.ch/twiki/bin/view/LCG/SAMTests#User_Interface_UI
  https://twiki.cern.ch/twiki/bin/view/LCG/SAMTests#WMS_CLI
  https://twiki.cern.ch/twiki/bin/view/LCG/SAMTests#Data_Management_DM

  10. Five reasons to test a Grid UI
  • The UI is an essential component of the Grid
    • A Grid user may not need every Grid service, but (s)he cannot do without a UI
  • A typical Grid UI is a rather sizeable program complex that may contain more program components than certain Grid servers
  • UI functions, in particular command-line interfaces (CLI), are often used within software packages (e.g. LHC experiment software)
  • Grid users can install and run a UI on their own machines, which may vary in OS version and configuration. A UI is also often installed in a shared disk area and then used from many workstations, which again may differ in configuration. The functionality of the UI distribution should be ensured in all these cases whenever possible
  • The UI probably needs even more elaborate testing than some Grid services. A special effort should be made to detect configuration problems and software conflicts
  [Diagram: UIs of different kinds (tar UI, UI in afs/shared disk, PC UI) exposing CLIs (CLI A … CLI X) and APIs (C++ API A, Java API X, …) that talk to services of types A, B, …, X]

  11. A generalized approach to CLI testing
  • Test target: a command line interface (CLI) with many commands and options
    • It may be impractical or impossible to test every combination of options by hand or even with an automated script
  • Aim: maximize the probability of finding an important bug/problem while minimizing the size and complexity of the test suite
    • It is impossible to find all bugs and problems with a single test suite; however, the most critical problems must be detected as quickly as possible
    • A compact test suite is easier to maintain and work with, and a compact test report is more likely to be examined "by hand"
  • Strategy: choose a minimal subset of possible use cases sufficient to detect the most common types of problems and conflicts
    • The test cases should be organized into a series of test scripts in a reasonable manner (e.g. a "file replication test" would test the lcg-rep command and all its options)
  • Part I: test every command without any options. The most typical use case!
  • Part II: test every option of the most used/important commands separately. Another typical use case. Such problems as "option does not work" or "option does not work without another option" can be detected.
  • Part III: try all the options (or at least most of them) simultaneously. This allows conflicts between options (like "two options do not work together") to be detected. Options which are often used in combination can/should be tested together; options which are very unlikely to interact with each other can also be tested in one go. Complex types of conflict, like "two options work together only when a third one is not given", may not be detected, but these are less likely to occur and to be faced by a user.
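The three parts above can be sketched as one generic driver. This is hypothetical code, not from the suites; `three_part_test` and `check` are invented names.

```shell
#!/bin/bash
# Hypothetical sketch of the three-part strategy for a single command.
check() {
    if "$@" > /dev/null 2>&1; then echo "PASS: $*"; else echo "FAIL: $*"; fi
}

three_part_test() {
    local cmd="$1" target="$2"; shift 2
    local opt
    # Part I: the bare command - the most typical use case
    check "$cmd" "$target"
    # Part II: every option separately - catches "option does not work"
    for opt in "$@"; do check "$cmd" $opt "$target"; done  # unquoted: "-t 20" splits
    # Part III: all options at once - catches "two options do not work together"
    check "$cmd" $* "$target"
}

# e.g. three_part_test lcg-ls "$SURL" -v --nobdii "-t 20"
```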

  12. Specificity of a Grid UI as a test target and ways to proceed
  • A UI is basically a set of client tools and libraries
    • Need to test the various CLIs (the most used) as well as APIs (C++, Java, Python, …). A typical CLI makes use of the corresponding API, so the API is indirectly tested by CLI tests.
    • Unlike for the Grid services, there is no need to run UI tests continuously. However, one may need to repeat the testing after major updates or configuration changes.
  • Every sensible function of a Grid UI involves the use of Grid services
    • It is difficult to decouple client-side testing from server-side testing, but there are at least two reasons to test the client part separately: 1) client tools are usually distributed separately from server packages; 2) administration of UI machines is often done independently from service administration
    • Way to go: be sensitive to errors which could be caused by client software malfunction, but tolerate all kinds of errors that certainly occur "on the other side" (like a job failure on a Worker Node)
  • The great majority of problems with the UI are connected with configuration
    • Special tests are needed to detect configuration problems
    • Does the command/library exist and can it be found? → detects problems with the installation/shell environment
    • Does the command run at all (e.g. with --version or --help)? → detects problems with libraries and version conflicts
  [Diagram: a UI test drives CLI A/CLI B and API A against Service A and its resources x and y; service-side errors are ignored]
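The two configuration checks in the last bullets might look like the following bash sketch (an assumed form, not the suite's actual code; `check_command` is an invented name):

```shell
#!/bin/bash
# Hypothetical sketch of the two basic configuration checks:
# 1) can the command be found, 2) does it run at all?
check_command() {
    local cmd="$1"
    if ! command -v "$cmd" > /dev/null 2>&1; then
        echo "FAIL: $cmd not found (installation or shell environment problem?)"
        return 1
    fi
    if ! "$cmd" --version > /dev/null 2>&1 && ! "$cmd" --help > /dev/null 2>&1; then
        echo "FAIL: $cmd found but does not run (missing library or version conflict?)"
        return 1
    fi
    echo "PASS: $cmd"
}

# e.g. for c in lcg-cr lcg-cp glite-wms-job-submit; do check_command "$c"; done
```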

  13. Testing the gLite UI
  General tests (might be useful on any other, even non-Grid, node!):
  • Check the shell environment for non-existing directories in the PATH, etc. Different types of shell should be addressed (csh, bash)
  • Check that the ntp daemon is running - proper clock synchronization is important for many types of client-service interaction
  gLite-specific client-only tests:
  • Check the gLite shell variables
  • Check that every gLite command mentioned in the User Guide can be found
    • Also check that documentation (a man page) is supplied with every command
  • Check that the gLite libraries are in place
  • Try to run the commands with the simplest options (--version, --help, …)
  gLite-specific tests involving Grid service usage (proved useful!):
  • Security subsystem tests (voms-proxy-init, etc.)
  • Information system tests
  • WMS CLI test suite
  • Data Management CLI test suite
  • …
  [Diagram: test suites probing gLite components, the shell environment and the OS on the UI node, and through them the Grid services]
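The first of the general tests (non-existing directories in the PATH) might be sketched as follows; this is a hypothetical illustration, and `check_path` is an invented name:

```shell
#!/bin/bash
# Hypothetical sketch of the PATH sanity check: report entries of a
# PATH-like string that do not exist on disk.
check_path() {
    local path="${1:-$PATH}" dir bad=0
    local IFS=':'
    for dir in $path; do
        if [ -n "$dir" ] && [ ! -d "$dir" ]; then
            echo "WARN: PATH entry does not exist: $dir"
            bad=$((bad + 1))
        fi
    done
    echo "$bad broken PATH entries"
}
```

Called without an argument it inspects the current $PATH; an analogous loop could cover LD_LIBRARY_PATH.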
