Specification-Driven Development of an Executable Metamodel in Eiffel
Richard Paige, Phil Brooke, and Jonathan Ostroff
paige@cs.york.ac.uk, phil.brooke@plymouth.ac.uk, jonathan@cs.yorku.ca
Department of Computer Science, University of York, UK.
School of Computing, University of Plymouth, UK.
Department of Computer Science, York University, Canada.
Motivation
• Test-driven development (TDD), due to Beck, is increasingly popular for building systems with reliability and maintainability requirements.
• Three steps:
  • Write a test (which will fail).
  • Write enough code to make the test pass.
  • Refactor the code to eliminate redundancies and design flaws.
• TDD uses tests as specifications that drive the development process.
• Two main limitations: it is code-based only, and tests have limited expressiveness.
• Despite these limitations, we claim that TDD can be useful for building metamodels, which are systems with substantial reliability and maintainability requirements.
• But how do we deal with the above limitations?
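The three steps above can be sketched in a toy example (shown here in Python for brevity; `add` and `test_add` are illustrative names, not from the presentation):

```python
# Step 1: write a test. It fails at first, because `add` does not exist yet.
def test_add():
    assert add(2, 3) == 5

# Step 2: write just enough code to make the test pass.
def add(a, b):
    return a + b

# Step 3: refactor -- nothing to clean up in this toy, so we stop.
test_add()
```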
Specification-Driven Design (SDD)
• A model-driven extension of TDD.
• With this approach, models (with contracts) and tests can both be used to drive the design process.
• The rest of the presentation:
  • A very short overview of SDD.
  • An overview of its application in building an executable metamodel in Eiffel.
• The key idea: in SDD for metamodeling, a test is an encoding of a model (in Eiffel).
  • Running the test automatically checks the model against the (partial, incomplete) metamodel.
  • So the development process also gives us a framework for fully automatic conformance checking of models against metamodels.
SDD
• SDD is an integration of TDD and Meyer's Design-by-Contract (DbC).
• Start anywhere: writing tests, contracts, etc.
• The emphasis is always on producing compilable and executable code.
• Some tests (collaborative specifications) are scenarios.
Design-by-Contract
• Annotate classes with invariant properties, and methods of classes with pre- and postconditions.
• These properties are the best form of documentation: they execute with the code, and are guaranteed to be consistent with the code.
• Example of a class in Eiffel:

    class MATH
    feature
        square_root (x: DOUBLE): DOUBLE is
            require
                x >= 0
            do
                -- Your algorithm goes here, e.g., Newton's method.
            ensure
                (Result * Result - x).abs <= epsilon
                epsilon = old epsilon
            end

        epsilon: DOUBLE
            -- Accuracy.

    invariant
        0 < epsilon and epsilon <= 0.001
    end -- MATH
Some Observations about SDD
• Though one can start development by writing contracts, there are reasons to prefer writing tests first.
  • Closure: a unit test gives you a clear stopping point: write enough code to make it pass. Contracts may invite unnecessary design.
  • Collaborative-specification-friendly: tests can formalise instances of collaborations more easily. Consider the LIFO property of stacks: difficult to specify using contracts, easy using tests.
• In summary:
  • Contracts are good for fleshing out the design while making assumptions explicit.
  • Contracts spell out assumptions more completely, and often more concisely, than unit tests.
  • Tests are good for writing collaborative specifications, and as such are more likely to be used early in development.
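To illustrate the LIFO point, here is a minimal sketch in Python rather than Eiffel (the `Stack` class and test are our illustration, not code from the presentation): the property is awkward to phrase as a class invariant, but a single test pins down the collaboration directly.

```python
class Stack:
    """A trivial stack; the contract style can state preconditions,
    but the LIFO ordering itself is hard to express as an invariant."""

    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)

    def pop(self):
        # Precondition, contract-style: the stack must not be empty.
        assert self._items, "precondition: stack must not be empty"
        return self._items.pop()

def test_lifo():
    # The test formalises one instance of the collaboration:
    # the last element pushed is the first popped.
    s = Stack()
    s.push(1)
    s.push(2)
    assert s.pop() == 2
    assert s.pop() == 1

test_lifo()
```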
SDD of a Metamodel in Eiffel
• The metamodel is equivalent to a subset of UML, consisting of class and collaboration diagrams.
• The class diagrams include an OCL-like contract language for pre/post/invariants.
• From this, we automatically generated Eiffel class stubs.
• Methods were extended with simple preconditions to ensure non-NULL arguments.
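A hypothetical sketch of what such a generated stub with non-NULL preconditions might look like, shown here as a Python analogue (the names echo the presentation's `E_CLASS` and `set_source`/`set_target`, but the Python shape is our assumption):

```python
class EClass:
    """Sketch of a generated metamodel class stub."""

    def __init__(self, name):
        # Generated precondition: the argument must not be NULL/Void.
        assert name is not None, "precondition: name must not be None"
        self.name = name

class Inheritance:
    """Sketch of a generated stub for an inheritance relationship."""

    def set_source(self, cls):
        assert cls is not None, "precondition: cls must not be None"
        self.source = cls

    def set_target(self, cls):
        assert cls is not None, "precondition: cls must not be None"
        self.target = cls
```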
Writing Acceptance Tests
• The SDD process continued by writing acceptance tests.
  • One test per metamodel well-formedness constraint.
• Tests took the form of simple Eiffel programs that encoded models that either satisfied or violated constraints.
  • e.g., a test generating a model with an inheritance cycle; a test generating a model with clashes in a namespace.
• Constraints were prioritised based on complexity and on how essential we judged it to capture each one immediately.
  • e.g., the constraint ensuring valid method calls in assertions was postponed for three iterations (existing tools could handle it initially).
  • e.g., the cycle-free property for aggregation was handled in the first iteration, since it involved constraints on graphical syntax.
Example: Model Encoding Test

    class ACCEPTANCE_TEST
    inherit
        UNIT_TEST
    creation
        make
    feature {ANY}
        no_aggregation_cycles: BOOLEAN is
            local
                a, b, c: E_CLASS
                c_to_a, a_to_b, b_to_c: AGGREGATION
                m: MODEL
            do
                create a.make ("A"); create b.make ("B"); create c.make ("C")
                create c_to_a; create a_to_b; create b_to_c
                create m.make
                ...
                c_to_a.set_source (c); c_to_a.set_target (a)
                b_to_c.set_source (b); b_to_c.set_target (c)
                a_to_b.set_source (a); a_to_b.set_target (b)
                ...
                m.prepare
                Result := true
            end

        make is
            do
                make_test
                add_violation_test (agent no_aggregation_cycles)
                to_html ("accept.html")
            end
    end
Encoding Constraints in Eiffel
• We used Eiffel's declarative specification technique, agents, to capture constraints.
• This promotes understandability, readability, and a direct mapping from OCL/OCL-like constraints to code.
• Example:

    no_inheritance_cycles: BOOLEAN is
        do
            Result := closure.for_all (agent (i1: INHERITANCE): BOOLEAN
                do
                    Result := closure.for_all (agent (i2: INHERITANCE): BOOLEAN
                        do
                            -- True iff i1 and i2 do not form a cycle.
                            Result := not (i1.source = i2.target and i1.target = i2.source)
                        end)
                end)
        end

• Clauses like the above are added to class invariants, to be checked when unit tests are run.
• Each constraint is implemented within the SDD process.
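For readers unfamiliar with Eiffel agents, the nested for_all above corresponds to nested `all()` calls in Python. This sketch assumes the inheritance closure is given as a list of (source, target) pairs, which is our simplification:

```python
def no_inheritance_cycles(closure):
    """True iff no two edges in the inheritance closure form a cycle,
    i.e. there is no pair i1, i2 with i1.source = i2.target and
    i1.target = i2.source. `closure` is a list of (source, target) pairs."""
    return all(
        not (s1 == t2 and t1 == s2)
        for (s1, t1) in closure
        for (s2, t2) in closure
    )
```

Note that, like the agent version, the quantifier ranges over all pairs, so a self-loop (an edge from a class to itself) is also reported as a cycle.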
Gaps between Tests and Code
• Occasionally the gap between the unit test (capturing a model) and the agent-based code needed was substantial.
  • e.g., checking that method calls were legitimate according to a class's information-hiding policy.
• The simple case (single dispatch) was straightforward to test and implement.
• The fun case (multi-dispatch) was initially missed, and revealed by a failing unit test.
  • We added additional agent code, which duplicated much of the original agent code for single dispatch.
  • We then refactored this.
Some Observations
• A useful approach.
• We emphasised the TDD parts of SDD.
• Contracts were used predominantly for guarding routine calls (i.e., preconditions) and for capturing invariants.
• Sometimes contracts got very complex, e.g., for checking covariant overriding.
  • In this case we wrote a unit test with a simple example of covariant overriding, and refactored the agent code.
  • This was much simpler because we could use sequencing to capture the well-formedness condition.
• Generally, minimal class-level refactoring was done; we would need more if we added new views.
• Deliverable: a test suite serving as evidence for the correctness of the metamodel.
Conclusions and Future Work
• A fast, reliable way to build executable metamodels.
• Useful to be able to switch between testing and modeling.
• Side effect: we can fully automatically check models for conformance to the metamodel.
• Further work:
  • Improving support for encoding contracts within the metamodel; it is not easy to use right now.
  • Adding a statechart view.
  • Additional metamodels, particularly MOF.
Conclusions
• Contracts and tests both make useful forms of specification.
• They are complementary and can be used synergistically.
• Contracts are good at making design decisions explicit (but be sure to keep them in your code).
• Tests can express collaborations more easily, and appear to be more useful early in the design process.
• Contracts don't reduce the number of tests that need to be written.
  • It's easy to underspecify contracts, or simply get them wrong.
  • Tests are needed to catch these errors.