
CS527 Topics in Software Engineering (Software Testing and Analysis)



Presentation Transcript


  1. CS527 Topics in Software Engineering (Software Testing and Analysis), Darko Marinov, September 20, 2011

  2. Schedule • First few lectures to help you select projects • Shared memory: CHESS, IMUnit, CAPP • Message passing: Setac • Comment analysis: iComment • Regression testing: survey paper • Model-based testing • Today: test assertions • Thursday: mock-based testing • Project proposals due on Sep 27 (in a week!) • Your project can be something we don’t cover

  3. Next Week • Project proposals (due 9/27) • If we didn’t talk (email/Skype/in person), schedule a meeting with me • Create a Wiki page (low overhead but helps in keeping track) • Paper reports (due 9/29) • Choose one paper, ideally related to your project • Write a four-bullet report on that paper

  4. Paper Today • Augmenting Automatically Generated Unit-Test Suites with Regression Oracle Checking, Tao Xie (ECOOP 2006) • What follows: one-slide overview, project ideas, questions for discussion

  5. Paper Overview • Problem • Automatically generated tests lack assertions • Solution • Orstra • Insert assertions based on observers • Evaluation • 11 small programs • Mutation testing for fault seeding • Increased fault-exposure ratio
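
To make the overview concrete, here is a minimal JUnit 4 sketch of what Orstra-style assertion augmentation produces. The class under test (java.util.ArrayDeque as a stand-in) and the specific assertions are illustrative, not taken from the paper: the first test is what an automatic generator typically emits (a method sequence with no checks), and the second is the same sequence augmented with assertions on observer values (size(), isEmpty(), peek()) recorded during a passing run, so a later regression run fails if any of them change.

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertFalse;

    import java.util.ArrayDeque;
    import org.junit.Test;

    public class OrstraStyleSketchTest {

        // An automatically generated test usually just exercises a method
        // sequence; it passes as long as no exception is thrown.
        @Test
        public void generatedTestWithoutAssertions() {
            ArrayDeque<Integer> d = new ArrayDeque<>();
            d.push(5);
            d.push(7);
            d.pop();
        }

        // After Orstra-style augmentation, observer return values captured
        // on a passing run become regression assertions.
        @Test
        public void augmentedTestWithRegressionAssertions() {
            ArrayDeque<Integer> d = new ArrayDeque<>();
            d.push(5);
            assertEquals(1, d.size());
            d.push(7);
            assertEquals(2, d.size());
            assertEquals(Integer.valueOf(7), d.peek());
            int popped = d.pop();
            assertEquals(7, popped);
            assertFalse(d.isEmpty());
        }
    }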

  6. Potential Projects (1) • This idea could be combined with the regression testing techniques presented in the survey paper on 9/13 [KB] • Identify which of the generated assertions are actually useful [FS] • There should be selection among assertions to prevent false alarms in the modified version of the program [SO] • Generate new test inputs instead of considering only the existing ones [FS]

  7. Potential Projects (2) • Find a better way to deal with non-determinism that may be degrading the quality of the generated assertions [AY] • Annotate the methods of your class with information that assists Orstra in determining whether a particular method's value should be asserted, and in what way (see the sketch after these project slides) [DwG] • Perform a more extensive evaluation on some large code base [DeG] • Apply some of the techniques in this paper to manually written tests [JT]

  8. Potential Projects (3) • Integrate this tool into an IDE (like Eclipse) to give indications about when code changes are modifying the state of outside objects [SB] • We can also extend this to implement contracts in a program… add pre-conditions, post-conditions, and invariants [AD] • Evaluate the effectiveness of the various [oracle generation] tools on software systems of varying size [CM] • Create an Orstra-based test augmenter for other test frameworks in other languages [KN]
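
As a rough illustration of the annotation idea in [DwG] and the contract flavor of [AD], the sketch below shows what developer guidance for an Orstra-like tool could look like. The annotations (AssertReturn, NeverAssert) and the Account class are hypothetical; Orstra defines no such API. The point is only to mark which observers return specification-level values worth asserting and which are implementation details that should never become assertions.

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Hypothetical markers: "assert this observer's return value"
    // versus "never generate an assertion for this method".
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface AssertReturn {
        String compare() default "equals"; // e.g. "equals" or "sameInstance"
    }

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface NeverAssert {
    }

    class Account {
        private long balanceCents;

        void deposit(long cents) {
            balanceCents += cents;
        }

        // Stable, specification-level observer: worth asserting.
        @AssertReturn(compare = "equals")
        long getBalanceCents() {
            return balanceCents;
        }

        // Varies across runs; an assertion here would be a false alarm.
        @NeverAssert
        String debugDump() {
            return "Account@" + System.identityHashCode(this);
        }
    }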

  9. Questions for Discussion (1) • Find test inputs that differentiate the behavior of a modified version with respect to the original version [FS] • Consider information other than the simple return value [FS] • Can the Method-Sequence State Representation represent a singleton? [JT] • Are there no general rules for obtaining the expected output for an arbitrary input to a program? [YL] • How can Orstra work with JUnitEE? [AD]

  10. Questions for Discussion (2) • How to generate assertions that test only the specification you want to test, and not the implementation details? [DwG] • Should they have used observer methods when generating assertions for results that are neither primitives nor instances of the class under test? [DeG] • Integration with code contracts / annotations / aspect-oriented programming? [KB] • Couldn't they have used the existing test suite and had the developer choose states to generate assertions for? [SB]

  11. Questions for Discussion (3) • Was random test generation really required? [SB] • How does the technique presented in the paper scale to much larger systems? [CM] • How can we be sure that the execution during the “capture phase” is correct? [SO] • Are there alternate ways to deal with non-determinism other than just running the application multiple times? [AY] • Besides execution time, what other costs does adding Orstra to tests incur? [KN]
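
One of the questions above touches on non-determinism: as [AY]'s question implies, the baseline remedy is to run a test several times and keep assertions only for values that are observed consistently. A minimal sketch of that filtering idea (not Orstra's actual implementation) follows; the class and method names are illustrative.

    import java.util.Objects;
    import java.util.Optional;
    import java.util.function.Supplier;

    public class StableObservationFilter {

        // Keep an observation (and hence generate an assertion) only if
        // repeated runs observe the same value.
        static <T> Optional<T> stableValue(Supplier<T> observer, int runs) {
            T first = observer.get();
            for (int i = 1; i < runs; i++) {
                if (!Objects.equals(first, observer.get())) {
                    return Optional.empty(); // non-deterministic: skip the assertion
                }
            }
            return Optional.ofNullable(first);
        }

        public static void main(String[] args) {
            // A deterministic observer survives the filter...
            System.out.println(stableValue(() -> "fixed", 3));    // Optional[fixed]
            // ...while a non-deterministic one is dropped.
            System.out.println(stableValue(System::nanoTime, 3)); // Optional.empty
        }
    }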
