
Presentation Transcript


  1. Pex (“Program Exploration”): A framework for systematic runtime verification of .NET programs. Nikolai Tillmann, Foundations of Software Engineering, Microsoft Research, Redmond, WA, USA. Dagstuhl Workshop Runtime Verification 2007.

  2. A new name. Pex supersedes: Parameterized Unit Testing [FSE’05], UnitMeister [FSE’05, ASE’06], AxiomMeister [ICFEM’06].

  3. The Pex Family: a stack of components, in order of increasing complexity from the bottom up:
  • Pex/AI: Specification Inference, i.e. inferring likely axiomatic specifications
  • Pex: Test Program and Input Generation for an API
  • Pex: Parameterized Unit Testing, i.e. Test Input Generation for a parameterized test
  • DynaCop: Dynamic Property Checking (call patterns, allowable data flow, …)
  • DynaCover: Coverage Tracking
  • ExtendedReflection: Managed .NET Monitoring API, built on an instrumenting profiler

  4. The Pex Family (overview slide repeated; next: Pex test input generation for parameterized tests).

  5. Test Input Generation: Problem Definition. Given a program P with statements S, compute inputs I such that for all s ∈ S there exists i ∈ I such that P(i) executes s. Remarks:
  • Since some statements are often not reachable, and reachability is not decidable, we aim for a good approximation, e.g. high coverage of the statements.
  • ‘error’ is considered a special statement.
  • The program is single-threaded.
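
  To make the definition concrete (an illustration of ours, not from the talk): covering both return statements below requires two inputs, one on each side of the branch, e.g. I = { 5, 0 }.

  // Minimal coverage example: each return is guarded by the branch on x.
  static string Classify(int x) {
      if (x > 0)
          return "positive";      // executed only for inputs with x > 0, e.g. x = 5
      return "non-positive";      // executed only for inputs with x <= 0, e.g. x = 0
  }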

  6. Test Input Generation for Parameterized Unit Tests [FSE’05]. In practice, the program for which we generate inputs is a “Parameterized Unit Test”:

  [TestMethod]
  void AddParameterizedTest(ArrayList a, object o) {
      Assume.IsTrue(a != null);
      int len = a.Count;
      a.Add(o);
      Assert.IsTrue(a[len] == o);
  }

  Parameterized unit tests separate two concerns:
  • specification of behavior,
  • selection of test inputs.
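
  A generator then instantiates the parameters with concrete values; a hypothetical generated test (the exact code Pex emits may differ) could look like:

  [TestMethod]
  void AddParameterizedTest_Generated01() {
      // Hypothetical concrete inputs, chosen so that the assumption a != null holds.
      ArrayList a = new ArrayList();   // requires: using System.Collections;
      object o = new object();
      AddParameterizedTest(a, o);
  }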

  7. Test Input Generation: Combining concrete execution and symbolic reasoning. Algorithm:
  0. Set S := ∅ (intuitively, S is the set of analyzed program inputs)
  1. Choose a program input i ∉ S (stop if no such i can be found)
  2. Output i
  3. Execute P(i) and record the path condition C (in particular, C(i) holds)
  4. Set S := S ∪ C (viewing C as the set { i | C(i) })
  5. Go to step 1.
  Remarks:
  • The algorithm will not terminate if the number of execution paths is infinite. In practice, we stop the algorithm after certain bounds are reached (time limit, desired coverage, number of iterations, …).
  • The choices in step 1 determine the coverage obtained after a certain number of iterations.
  • This algorithm generalizes previous approaches (“DART”, “CUTE” / “concolic execution”).
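
  A minimal C# sketch of this loop, assuming hypothetical helper types and methods (Input, PathCondition, ChooseInputOutside, Emit, ExecuteAndRecordPathCondition; none of these are Pex APIs):

  // Sketch of the exploration loop from steps 0-5 above.
  var analyzed = new List<PathCondition>();    // S: analyzed inputs, as path conditions
                                               // (requires: using System.Collections.Generic;)
  while (true) {
      // Step 1: ask a constraint solver for an input i not in S;
      // null means no such input was found (or bounds were reached).
      Input i = ChooseInputOutside(analyzed);
      if (i == null) break;

      Emit(i);                                 // Step 2: output i as a test case

      // Step 3: run P(i) concretely while recording the symbolic
      // path condition C over the program's formal inputs.
      PathCondition c = ExecuteAndRecordPathCondition(i);

      analyzed.Add(c);                         // Step 4: S := S ∪ C
  }                                            // Step 5: repeat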

  8. Test Input Generation: Example. Goal: cover all statements of Find.

  class List { int head; List tail; }

  static bool Find(List xs, int x) {
      while (xs != null) {
          if (xs.head == x) return true;
          xs = xs.tail;
      }
      return false;
  }

  Iterations (parameter assignment, and path condition over the formal parameters):
  1. Choose an arbitrary value for x, and null for xs:
     x = -517; xs = null;
     Path condition: xs == null
  2. Avoid ‘xs == null’: choose a new list with an arbitrary head:
     x = -517; xs.head = 1012; xs.tail = null;
     Path condition: xs != null && xs.head != x && xs.tail == null
  3. Avoid the new condition as well: choose an existing list, i.e. a cyclic one:
     x = -517; xs.head = 1012; xs.tail = xs;
     Non-termination on the cyclic list: program error found!

  9. Test Input Generation: Constraint solving. Mixed concrete and symbolic execution produces constraints; a constraint solver turns them into solutions, i.e. new test inputs. The solver implements a theory of CIL (Th(Maps), Th(Integers), Th(Floats), Th(Objects)) covering arrays, structs, Int32/Int64, objects, strings, and object types, layered over a SAT-based Boolean search. Solutions can also draw on user-provided value factories, random values, and mock objects.

  10. Challenge: Analysis of real .NET programs. Pex monitors all safe managed .NET code. However, most .NET programs also use unsafe or unmanaged code, for legacy and performance reasons; Pex can ignore those unsafe and unmanaged parts. The layers involved:
  • Calls to the external world
  • Unmanaged x86 code
  • Unsafe managed .NET code (with pointers)
  • Safe managed .NET code
  Combining concrete execution and symbolic reasoning (e.g. DART/CUTE/Pex) copes with the full stack by executing it concretely, whereas a program model checker (e.g. JPF/XRT) handles the safe managed layer.

  11. The Pex Family (overview slide repeated; next: ExtendedReflection, the managed .NET monitoring API).

  12. Managed .NET Monitoring API: Overview. The .NET runtime (unmanaged) invokes unsafe C++ callbacks on the COR_PROFILER (unmanaged), which rewrites every managed user method about to be JITed, inserting a safe callback after each MSIL instruction. The instrumented user application (managed) then issues these safe callbacks into the Pex analysis (managed).

  13. Managed .NET Monitoring API: Code instrumentation. Source program:

  class Point {
      int x; int y;
      public static int GetX(Point p) {
          if (p != null) return p.X;
          else return -1;
      }
  }

  Instrumented MSIL (the real C# compiler output is actually more complicated):

  // Prologue: record concrete values, to have all information available
  // when this method is called with no proper context
  ldtoken Point::GetX
  call __Monitor::EnterMethod
  brfalse L0
  ldarg.0
  call __Monitor::NextArgument<Point>
  L0: .try { .try {
      // the __Monitor calls build the path condition and
      // perform the symbolic computation
      call __Monitor::LDARG_0
      ldarg.0
      call __Monitor::LDNULL
      ldnull
      call __Monitor::CEQ
      ceq
      call __Monitor::BRTRUE
      brtrue L1
      call __Monitor::BranchFallthrough
      call __Monitor::LDARG_0
      ldarg.0
      ldtoken Point::X
      call __Monitor::LDFLD_REFERENCE
      ldfld Point::X
      call __Monitor::AtDereferenceFallthrough
      br L2
  L1: call __Monitor::AtBranchTarget
      call __Monitor::LDC_I4_M1
      ldc.i4.m1
  L2: call __Monitor::RET
      stloc.0
      leave L4
  } catch NullReferenceException {
      call __Monitor::AtNullReferenceException
      rethrow
  }
  L4: leave L5
  } finally {
      // Epilogue
      call __Monitor::LeaveMethod
      endfinally
  }
  L5: ldloc.0
      ret
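
  For orientation, the __Monitor callbacks named above might have shapes roughly like the following; this is a sketch guessed from the IL alone (the talk does not show Pex's actual monitoring API):

  using System;

  static class __Monitor {
      // Prologue: identifies the method; the instrumented code skips
      // argument recording when this returns false.
      public static bool EnterMethod(RuntimeMethodHandle method) { return true; }

      // Records a concrete argument value, for replay without context.
      public static void NextArgument<T>(T value) { }

      // One callback per MSIL instruction, maintaining a shadow symbolic
      // stack and extending the path condition at branches:
      public static void LDARG_0() { }            // push symbolic argument 0
      public static void LDNULL() { }             // push symbolic null
      public static void CEQ() { }                // pop two terms, push their equality
      public static void BRTRUE() { }             // record the pending branch condition
      public static void AtBranchTarget() { }     // branch was taken
      public static void BranchFallthrough() { }  // branch was not taken
      public static void LeaveMethod() { }        // epilogue: pop the symbolic frame
  }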

  14. Demo: AppendFormat

  15. Example: Testing .NET programs with interfaces [ASE’06].

  AppendFormat(null, “{0} {1}!”, “Hello”, “Dagstuhl”);  →  “Hello Dagstuhl!”

  BCL implementation:

  public StringBuilder AppendFormat(
      IFormatProvider provider, char[] chars, params object[] args) {
      if (chars == null || args == null)
          throw new ArgumentNullException(…);
      int pos = 0;
      int len = chars.Length;
      char ch = '\x0';
      ICustomFormatter cf = null;
      if (provider != null)
          cf = (ICustomFormatter)provider.GetFormat(typeof(ICustomFormatter));
      …
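
  To reach the provider != null branch, a generator must supply an IFormatProvider implementation. A hand-written stand-in for the kind of mock object Pex can synthesize (illustrative code, not Pex output):

  using System;

  class MockFormatProvider : IFormatProvider {
      public object GetFormat(Type formatType) {
          // Returning null keeps cf == null and explores that path;
          // a generated mock could instead return an ICustomFormatter
          // to drive execution into the custom-formatting code.
          return null;
      }
  }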

  16. Demo

  17. When run…

  18. …reproduce the error

  19. The Pex Family (overview slide repeated; next: test program and input generation for an API).

  20. Test Program and Input Generation: Problem Statement.
  • Test Input Generation: Given a program P with statements S, compute inputs I such that for all s ∈ S there exists i ∈ I such that P(i) executes s.
  • Test Program and Input Generation: Given methods M1, …, Mn with statements S, compute test programs P1, …, Pm and inputs I1, …, Im such that for all s ∈ S there exist 1 ≤ k ≤ m and i ∈ Ik such that Pk(i) executes s.

  21. Test Program and Input Generation: What are good test programs? A statement s in a method m might be guarded by conditions over the object state and the argument values:

  List l = new List();
  object o = new object();
  l.Add(o);
  object p = l[l.Count - 1];

  We can drive objects into different states by calling:
  • constructors
  • methods, if they
    • modify ‘this’ or any other formal parameter, or
    • return a new object

  22. Test Program and Input Generation: Plans, the building stones of test programs. A plan is a parameterized unit test with results:
  • Its parameters have primitive types.
  • It may call another plan, a constructor, or a method, or read or write a field.
  • Its result may be a value, a new object, or a mutated object.
  Pex builds plans incrementally as a DAG:
  • Its nodes are values and objects.
  • Its edges are method calls and field reads or writes.
  For the plan “List l = new List(); object o = new object(); l.Add(o); object p = l[l.Count-1];”, the DAG has nodes for the fresh objects l and o (created via ‘new’), an edge l.Add(o) yielding the mutated list l’, and an edge reading p = l’[l’.Count-1].
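
  A minimal data-structure sketch of such a plan DAG, with invented types (PlanNode, PlanEdge; not Pex's internals):

  using System.Collections.Generic;

  class PlanNode {                      // a value or object produced so far
      public string Label;              // e.g. "l", "o", "l’"
      public List<PlanEdge> Outgoing = new List<PlanEdge>();
  }

  class PlanEdge {                      // a method call or field read/write
      public string Action;             // e.g. "new List()", ".Add(o)", "[Count-1]"
      public PlanNode Result;           // the value or object the action yields
  }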

  23. Test Program and Input Generation: Quest for relevant plans. Pex builds a set of plans in a feedback loop (plans flow from the plan manager into symbolic execution, which returns feedback and tests):
  • Some trivial initial plans are chosen.
  • An existing plan is extended by a method call or a field read/write.
  • The extended plan is kept if symbolic execution of the extension mutates state or yields a new object.
  • Stop when no new plans are found or a timeout is reached.
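
  A sketch of that feedback loop in C#, with hypothetical helpers (Plan, ExecutionFeedback, InitialTrivialPlans, EnumerateExtensions, SymbolicallyExecute, TimeoutReached; not Pex's actual code):

  var plans = new List<Plan>(InitialTrivialPlans());
  bool progress = true;
  while (progress && !TimeoutReached()) {
      progress = false;
      foreach (Plan p in plans.ToArray())                    // snapshot; plans grows below
          foreach (Plan extended in EnumerateExtensions(p))  // add one call or field access
          {
              ExecutionFeedback f = SymbolicallyExecute(extended);
              if (f.MutatesState || f.YieldsNewObject) {     // keep only useful extensions
                  plans.Add(extended);
                  progress = true;
              }
          }
  }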

  24. Test Program and Input Generation: Heuristics to guide the plan search. During symbolic execution, we monitor:
  • field reads/writes,
  • method calls,
  • statement coverage.
  During plan search, we prefer plans:
  • where readers appear after writers,
  • which invoke methods with coverage potential (transitively, using the monitored call graph),
  • …

  25. Test Program and Input Generation: Evaluation.
  • Ongoing work.
  • Between 30% and 85% branch coverage on all libraries studied so far.
  • Found many errors: NullReferences, IndexOutOfRange, InvalidCasts, non-termination issues.
  • Multiple bugs found in shipped Microsoft code, including the BCL.
  • Easy to combine with other dynamic checkers: found many resource leaks and instances of incorrect exception handling (by using fault injection); to be continued…

  26. The Pex Family (overview slide repeated; next: Pex/AI specification inference).

  27. Likely Specification Inference: Fixing the abstraction level.
  • Goal: obtain an axiomatic specification of a library in terms of its public API, i.e. the public methods of public classes in C# (or Java).
  • For example, a contract for a bounded set with public methods Add, Contains, IsFull, in Spec# syntax (Add is the “modifier” method; Contains and IsFull are “observer” methods):

  void Add(int x)
      requires x != 0 otherwise ArgumentException;                             // precondition
      requires !Contains(x) && !IsFull() otherwise InvalidOperationException;  // precondition
      ensures Contains(x);                                                     // postcondition
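
  Read operationally (our illustration, using the Set class shown on slide 29): a violated ‘requires … otherwise E’ clause means Add throws E, and the ‘ensures’ clause must hold afterwards.

  var s = new Set(10);
  try { s.Add(0); }                                 // violates: requires x != 0
  catch (ArgumentException) { /* expected */ }

  s.Add(5);                                         // all preconditions hold
  System.Diagnostics.Debug.Assert(s.Contains(5));   // ensures Contains(x)

  try { s.Add(5); }                                 // violates: requires !Contains(x)
  catch (InvalidOperationException) { /* expected */ }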

  28. Likely Specification Inference: Basic idea [ICFEM’06].
  • Obtain a representative set of execution paths of the modifier method: symbolic execution within bounds yields the paths.
  • Each path has a path condition (a predicate over the initial heap and method arguments) and a final state (a function over the initial heap and method arguments).
  • For each path, find observer methods which characterize the path condition and the final state.
  • Generalize and summarize the results.

  29. Likely Specification Inference: Example, a bounded set.

  public class Set {
      int[] repr;

      public Set(int maxSize) {
          repr = new int[maxSize];
      }

      public void Add(int x) {
          if (x == 0) throw new ArgumentException();
          int free = -1;
          for (int i = 0; i < repr.Length; i++)
              if (repr[i] == 0) free = i;                                   // remember index
              else if (repr[i] == x) throw new InvalidOperationException(); // duplicate
          if (free != -1) repr[free] = x;                                   // success
          else throw new InvalidOperationException();   // no free slot means we are full
      }

      public bool Contains(int x) {
          if (x == 0) throw new ArgumentException();
          for (int i = 0; i < repr.Length; i++)
              if (repr[i] == x) return true;
          return false;
      }
      …
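
  The elided part presumably includes the IsFull observer referenced by the contract; a plausible implementation (our assumption, the slide does not show it) follows the same sentinel convention:

  // Hypothetical IsFull: the set is full when no slot still holds 0.
  public bool IsFull() {
      for (int i = 0; i < repr.Length; i++)
          if (repr[i] == 0) return false;
      return true;
  }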

  30. Observer equations. The path conditions leading to S15, S24, and S23 all imply that me.repr contains x. Consequently, symbolic execution finds that Contains(x) == true under these conditions. Thus Contains(x) == true is a precondition for these paths: an observer equation.

  31. Likely Specification Inference: Summarized and simplified result.
  • Path conditions are summarized with observer equations.
  • The disjunction of the path conditions is simplified.
  Our prototype tool infers the previously shown Spec# contract automatically in a few seconds:

  void Add(int x)
      requires x != 0 otherwise ArgumentException;
      requires !Contains(x) && !IsFull() otherwise InvalidOperationException;
      ensures Contains(x);

  32. Likely Specification Inference: Evaluation.
  • Only an early prototype exists.
  • It works well on data types; it can infer pre/postconditions for the .NET Hashtable.
  • As seen in the example, the inferred pre/postconditions are usually as expressive and concise as human-written contracts.

  33. Likely Specification Inference

  34. Status, Future Work, Related Work

  35. Status.
  • Applied test generation to large code bases, including the .NET Base Class Library (BCL).
  • Multiple bugs found in shipped Microsoft code, including the BCL.
  • Currently uses a custom finite-domain solver; supports all .NET types and heap structures within bounds.
  • Pex is a Microsoft-internal test tool today; we intend to make Pex available for academic use.
  Future work:
  • Further improve scalability, e.g. with summaries
  • Better search strategies
  • Continue work on specification inference

  36. Related Work. Recent approaches to test case generation based on symbolic execution:
  • Combining concrete execution and symbolic reasoning: DART [Godefroid, Klarlund, Sen]; CUTE/jCUTE [Sen, Agha]
  • Software model checkers with symbolic state representations: JPF with extensions [Sen, Pasareanu, Visser]; Bogor/Kiasan [Deng, Lee, Robby]; XRT [Grieskamp, Tillmann]

  37. Thank you. http://research.microsoft.com/projects/mutt/ nikolait@microsoft.com
