
An Introduction to Partial Evaluation






Presentation Transcript


  1. An Introduction to Partial Evaluation N. D. Jones Presented by: Changlin Sun

  2. A Partial Evaluator Mixture of execution and code generation

  3. An Example

  4. The Meaning of a Program. If p is a program in language L, then [[p]]_L denotes its meaning; the subscript L indicates how p is to be interpreted. The program meaning function maps several inputs to an output: output = [[p]]_L(in1, in2, …, inn) is the result of running p on the input values in1, in2, …, inn, and the output is undefined if p goes into an infinite loop.

  5. An Equational Definition of Partial Evaluation. Computation in one stage: output = [[p]](in1, in2). Computation in two stages using specializer mix: p_in1 = [[mix]](p, in1), then output = [[p_in1]](in2). An equational definition of mix: [[p]](in1, in2) = [[ [[mix]](p, in1) ]](in2).
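As a concrete, hand-worked illustration of the mix equation, the sketch below uses the classic power function in Python; the specialized version power_3 is written by hand to stand in for the residual program [[mix]](power, 3) that a specializer would produce.

```python
# The general program p is power(n, x); the exponent n plays the role of the
# static input in1 and the base x the dynamic input in2.

def power(n, x):
    """General program: computes x**n by repeated multiplication."""
    result = 1
    for _ in range(n):
        result = result * x
    return result

def power_3(x):
    """Hand-written stand-in for the residual program [[mix]](power, 3):
    the loop over the static n has been unrolled away."""
    return x * x * x

# The mix equation [[p]](in1, in2) = [[ [[mix]](p, in1) ]](in2):
assert power(3, 7) == power_3(7) == 343
```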

  6. Motivation of Partial Evaluation
  • Speedup: p_in1 is often faster than p.
  • Efficiency versus generality and modularity:
    • Small and efficient program: efficient, but much programming is needed and maintenance is difficult.
    • Highly parameterized program: can solve any problem in the class, but inefficient.
    • Highly modular programming: easy for documentation and modification, but inefficient.
  • Solution using partial evaluation:
    • Write a highly parameterized program.
    • Apply partial evaluation to specialize it.
    • Obtain a customized version.

  7. Partial Evaluation Techniques • Symbolic computation • Unfolding function calls • Program point specialization: a single function or label in program p may appear in the specialized program p_in1 in several specialized versions, each corresponding to data determined at partial-evaluation time.
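The second technique, unfolding function calls, can be pictured with a small hypothetical example: at specialization time the body of the called function is substituted at the call site, so no call overhead remains in the residual program.

```python
def square(y):
    return y * y

def hypot2(a, b):
    """Original program: two calls to square."""
    return square(a) + square(b)

def hypot2_unfolded(a, b):
    """After unfolding both calls, the body of square has been substituted
    in place; the residual program contains no function calls."""
    return (a * a) + (b * b)

assert hypot2(3, 4) == hypot2_unfolded(3, 4) == 25
```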

  8. An Example of Program Point Specialization
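The figure for this slide is not reproduced here; a standard illustration from the partial-evaluation literature is Ackermann's function specialized on a static first argument m, sketched by hand below. The single program point ack appears in the residual program in several specialized versions (ack_2, ack_1), one per static value reached.

```python
def ack(m, n):
    """General program: Ackermann's function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

# Hand-written residual program for static m = 2: one specialized version of
# ack for each static value of m that specialization reaches (m = 2 and
# m = 1; the m = 0 case has been unfolded into n + 1).

def ack_2(n):
    return ack_1(1) if n == 0 else ack_1(ack_2(n - 1))

def ack_1(n):
    return 2 if n == 0 else ack_1(n - 1) + 1

assert ack(2, 3) == ack_2(3) == 9
```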

  9. Offline Specialization • Binding-time analysis: place appropriate annotations on the program. • Interpretation of the annotations by the specializer: evaluate all non-underlined expressions; unfold all non-underlined function calls at specialization time; generate residual code for all underlined expressions; generate residual calls for all underlined function calls.
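A minimal sketch of this annotation discipline on power(n, x), assuming n is static (non-underlined) and x is dynamic (underlined): the static loop is executed at specialization time, while the multiplications on x are emitted as residual code text.

```python
def specialize_power(n):
    """Evaluate what depends only on the static n; generate residual code
    for everything that touches the dynamic x."""
    body = "1"
    for _ in range(n):               # static (non-underlined) loop: run now
        body = f"({body} * x)"       # dynamic (underlined) part: residualize
    return f"def power_{n}(x):\n    return {body}\n"

residual_source = specialize_power(3)
print(residual_source)
# def power_3(x):
#     return (((1 * x) * x) * x)

namespace = {}
exec(residual_source, namespace)     # run the residual program
assert namespace["power_3"](7) == 343
```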

  10. Online Specialization • The specializer computes program parts as early as possible and makes decisions “on the fly” using the information available. • Online methods often work better than offline methods, but they introduce new problems concerning termination of the specializer.

  11. Online Specialization (Cont.)
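A minimal sketch of the online strategy over a tiny, hypothetical expression form (integers, variable names, and binary operations encoded as tuples): the specializer decides on the fly to compute an operation as soon as both reduced operands turn out to be constants, and otherwise emits a residual operation.

```python
def online_reduce(expr, env):
    """env holds the variables whose values are available now (static)."""
    if isinstance(expr, int):
        return expr
    if isinstance(expr, str):
        return env.get(expr, expr)                # known variable -> value
    op, e1, e2 = expr
    r1, r2 = online_reduce(e1, env), online_reduce(e2, env)
    if isinstance(r1, int) and isinstance(r2, int):
        return r1 + r2 if op == "+" else r1 * r2  # decide on the fly: compute
    return (op, r1, r2)                           # otherwise: residual code

# Specialize x*x + y*2 with x known (static) and y unknown (dynamic):
expr = ("+", ("*", "x", "x"), ("*", "y", 2))
print(online_reduce(expr, {"x": 5}))              # ('+', 25, ('*', 'y', 2))
```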

  12. An Algorithm of Offline Partial Evaluation • Given: (1) a first-order functional program of the form
      f1(s, d) = expression1
      g(u, v, …) = expression2
      …
      h(r, s, …) = expressionm
  (2) annotations that mark every function parameter, operation, test, and function call as either eliminable (perform, compute, or unfold during specialization) or residual (generate program code to appear in the specialized program).

  13. An Algorithm of Offline Partial Evaluation (Cont.)
  1. Read the program and the static input S.
  2. Pending := {(f1, S)}; Seenbefore := {}.
  3. While Pending ≠ {} do steps 4-6.
  4. Choose and remove a pair (g, statvalues) from Pending, and add it to Seenbefore if not already there.
  5. Find g's definition g(x1, x2, …) = g-expression and let D1, …, Dm be all its dynamic parameters.
  6. Generate and append to target the definition g_statvalues(D1, …, Dm) = Reduce(E), where E is the result of substituting the static value si from statvalues in place of each static g-parameter xi occurring in g-expression.

  14. An Algorithm of Offline Partial Evaluation (Cont.) Reduction of an expression E:
  1. If E is a constant or a dynamic parameter of g, then RE = E.
  2. If E is a static parameter of g, then RE = its value.
  3. If E is of the form operator(E1, …, En) with an eliminable (non-underlined) operator, then compute the values v1, …, vn of Reduce(E1), …, Reduce(En), and set RE = the value of operator applied to v1, …, vn.
  4. If E is operator(E1, …, En) with a residual (underlined) operator, then compute E1' = Reduce(E1), …, En' = Reduce(En), and set RE = the expression “operator(E1', …, En')”.

  15. An Algorithm of Offline Partial Evaluation (Cont.) Reduction of an expression E (continued):
  5. If E is “if E0 then E1 else E2” with an eliminable (non-underlined) test, then compute Reduce(E0). If Reduce(E0) equals true, then RE = Reduce(E1); otherwise RE = Reduce(E2).
  6. If E is “if E0 then E1 else E2” with a residual (underlined) test, then RE = the expression “if E0' then E1' else E2'”, where each Ei' equals Reduce(Ei).
  7. Suppose E is f(E1, E2, …, En) with an eliminable (unfoldable) call and the program contains the definition f(x1, …, xn) = f-expression. Then RE = Reduce(E'), where E' is the result of substituting Reduce(Ei) in place of each f-parameter xi occurring in f-expression.
  8. If E is f(E1, E2, …, En) with a residual call, then …
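The sketch below is a compressed Python rendering of the algorithm of slides 12-15, under simplifying assumptions: programs are dictionaries mapping function names to (parameters, body), expressions are tuples, every call is treated as residual, and operators and tests are reduced whenever their operands reduce to constants (a full offline specializer would instead follow the explicit eliminable/residual annotations). The encoding and helper names are hypothetical.

```python
# Expression forms (hypothetical encoding):
#   ("const", c) | ("var", x) | ("op", o, e1, e2)
#   | ("if", e0, e1, e2) | ("call", f, [args])

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "==": lambda a, b: a == b}

def is_value(r):
    """True if r has been reduced to a constant value."""
    return not isinstance(r, tuple)

def specialize(program, division, main, static_values):
    """program: name -> (params, body); division: name -> set of static params."""
    target = {}                                    # the residual program
    pending, seen = [(main, tuple(static_values))], set()
    while pending:                                 # steps 3-6 of slide 13
        g, svals = pending.pop()
        if (g, svals) in seen:
            continue
        seen.add((g, svals))
        params, body = program[g]
        statics = dict(zip([p for p in params if p in division[g]], svals))
        dynamics = [p for p in params if p not in division[g]]
        rbody = reduce_(body, statics, program, division, pending)
        target[f"{g}_{'_'.join(map(str, svals))}"] = (dynamics, rbody)
    return target

def reduce_(e, statics, program, division, pending):
    tag = e[0]
    if tag == "const":
        return e[1]
    if tag == "var":                               # rules 1-2: parameters
        return statics.get(e[1], e)
    if tag == "op":                                # rules 3-4: operators
        r1 = reduce_(e[2], statics, program, division, pending)
        r2 = reduce_(e[3], statics, program, division, pending)
        if is_value(r1) and is_value(r2):
            return OPS[e[1]](r1, r2)               # compute now
        return ("op", e[1], r1, r2)                # residual operator
    if tag == "if":                                # rules 5-6: conditionals
        r0 = reduce_(e[1], statics, program, division, pending)
        if is_value(r0):
            return reduce_(e[2] if r0 else e[3], statics, program, division, pending)
        return ("if", r0,
                reduce_(e[2], statics, program, division, pending),
                reduce_(e[3], statics, program, division, pending))
    if tag == "call":                              # rule 8: residual call
        f, args = e[1], e[2]
        fparams, _ = program[f]
        rargs = [reduce_(a, statics, program, division, pending) for a in args]
        svals = tuple(r for p, r in zip(fparams, rargs) if p in division[f])
        dargs = [r for p, r in zip(fparams, rargs) if p not in division[f]]
        pending.append((f, svals))                 # new specialization point
        return ("call", f"{f}_{'_'.join(map(str, svals))}", dargs)
    raise ValueError(f"unknown expression: {e}")

# Ackermann's function a(m, n) with m static and n dynamic:
ack = {"a": (["m", "n"],
             ("if", ("op", "==", ("var", "m"), ("const", 0)),
              ("op", "+", ("var", "n"), ("const", 1)),
              ("if", ("op", "==", ("var", "n"), ("const", 0)),
               ("call", "a", [("op", "-", ("var", "m"), ("const", 1)),
                              ("const", 1)]),
               ("call", "a", [("op", "-", ("var", "m"), ("const", 1)),
                              ("call", "a", [("var", "m"),
                                             ("op", "-", ("var", "n"),
                                              ("const", 1))])]))))}

# Prints the residual definitions a_2, a_1, a_0 with their dynamic
# parameter n and their reduced bodies.
for name, (ps, body) in specialize(ack, {"a": {"m"}}, "a", [2]).items():
    print(name, ps, body)
```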

  16. The Requirements of the Specialization Algorithm • Knowledge of which inputs will be known when the program is specialized. • Congruence condition: if any parameter or operation has been marked as eliminable, then one needs a guarantee that it actually will be eliminable when specialization is carried out, for any possible static program inputs. • Termination condition: neither infinitely many residual functions nor any infinitely large residual expression may be generated. • Binding-time analysis ensures that these conditions are satisfied: given an unmarked program together with a division of its inputs into static and dynamic, the binding-time analyzer annotates the whole program.

  17. Computation in One Stage or More

  18. Compilers and Interpreters. Interpreter: smaller and easier to write. Compiler: an order of magnitude faster.

  19. Compiling by Specialization. The program target = [[mix]](int, source) can be expected to be faster than interpreting source, since many interpreter actions depend only on the source program and so can be pre-computed. This shows that mix together with int can be used to compile; it does not show that mix is a compiler.

  20. Partial Evaluation versus Traditional Compiling 1. Given a language definition in the form of an operational semantics, partial evaluation eliminates the first and largest order of magnitude: the interpretation overhead. 2. The method yields target programs which are always correct with respect to the interpreter. The problem of compiler correctness seems to have vanished. 3. The partial evaluation approach is suitable for prototype implementation of new languages which are defined interpretively. 4. The generated target code is in the partial evaluator’s output language, the language in which the interpreter is written. 5. Because partial evaluation is automatic and general, its generated code may not be as good as handwritten target code.

  21. The Cost of Interpretation. The running time of interpreter int on inputs p and d satisfies α_p · t_p(d) ≤ t_int(p, d), where α_p is a constant: independent of d, but possibly depending on p. Optimality of mix: remove all computational overhead caused by interpretation. mix is optimal provided t_p'(d) ≤ t_p(d) for all p and d, where sint is a self-interpreter and p' = [[mix]](sint, p).

  22. Generation of Program Generators. Apply partial evaluation to produce faster generators of specialized programs. The goal is to construct an efficient program generator from a general program by completely automatic methods: fast target programs, fast compilers, and fast compiler generators.

  23. An Example of Generating a Program Generator

  24. Compiling by the First Futamura Projection: target = [[mix]](int, source). This shows that mix can compile. The target program is always a specialized form of the interpreter, and so is in mix's output language, usually the language in which the interpreter is written.
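A hand-worked illustration with a toy interpreter (all names here are hypothetical): interp runs a tiny stack-machine "source" program, and target is the residual program one would expect from [[mix]](interp, source), written by hand.

```python
def interp(source, d):
    """Toy interpreter: source is a list of instructions, d the dynamic input."""
    stack = [d]
    for instr in source:
        if instr == "dup":
            stack.append(stack[-1])
        elif instr == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif isinstance(instr, int):     # push a literal
            stack.append(instr)
    return stack.pop()

source = ["dup", "add", 1, "add"]        # source program: computes 2*d + 1

def target(d):
    """Hand-written stand-in for [[mix]](interp, source): the instruction
    dispatch has been pre-computed; only arithmetic on d remains."""
    return (d + d) + 1

assert interp(source, 10) == target(10) == 21
```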

  25. Compiler Generation by the Second Futamura Projection: compiler = [[mix]](mix, int). This way of doing compiler generation requires that mix be written in its own input language, i.e., S = L.

  26. Compiler Generator Generation by the Third Futamura Projection: cogen = [[mix]](mix, mix). A compiler generator is a program that transforms interpreters into compilers. The compilers it produces are versions of mix itself, specialized to various interpreters.

  27. Speedups by Self-Application. target = [[mix]](int, source) versus target = [[compiler]](source); compiler = [[mix]](mix, int) versus compiler = [[cogen]](int); cogen = [[mix]](mix, mix) versus cogen = [[cogen]](mix). In each case the second way is often about 10 times faster than the first.

  28. Automatic Program Generation: Changing Program Style • Let program sint be a self-interpreter for L. The generated compiler is then a translator from L to L written in L, i.e., a program transformer. • The transformer's output is always a specialized version of the self-interpreter. • Examples: • Compiling L into a proper subset. • Automatic instrumentation. • Translating direct-style programs into continuation-passing style (a small sketch follows below). • Translating lazy programs into equivalent eager programs. • Translating direct-style programs into tail-recursive style suitable for machine-code implementation.
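The direct-style to continuation-passing-style change mentioned in the list can be pictured with a small hand-written example (in the setting above, such a transformation would be produced automatically by specializing an appropriate self-interpreter):

```python
def fact(n):
    """Direct style."""
    return 1 if n == 0 else n * fact(n - 1)

def fact_cps(n, k):
    """Continuation-passing style: every intermediate result is passed to
    the continuation k instead of being returned."""
    if n == 0:
        return k(1)
    return fact_cps(n - 1, lambda r: k(n * r))

assert fact(5) == fact_cps(5, lambda r: r) == 120
```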

  29. Automatic Program Generation: Hierarchies of Meta-languages

  30. Automatic Program Generation: Semantics-Directed Compiler Generation • Given a specification of a programming language, for example a formal semantics or an interpreter, the goal is to transform it, automatically and correctly, into a compiler from the specified source language into another target language. • Parser generators and attribute grammar evaluators are not semantics-directed: it is entirely up to their users to ensure generation of correct target code. • The motivation for automatic compiler generation is the automatic transformation of a semantic specification into a compiler faithful to that semantics. • The three jobs of writing the language specification, writing the compiler, and showing the compiler to be correct are reduced to one: writing the specification in a form suitable for the compiler generator.

  31. Automatic Program Generation:Executable Specifications • Efficient implementation of executable specifications • Compiler generation • Parser generation

  32. Problems Suitable for Program Specialization • It is time-consuming. • It must be solved repeatedly. • It is often solved with similar parameters. • It is solved by well-structured and cleanly written software. • It implements a high-level applications-oriented language.

  33. Some Applications of Program Specialization • Pattern recognition • Computer graphics • Database queries • Neural networks • Spreadsheets • Scientific computing • Parsing

  34. Automation in Binding-Time Separation • Binding-time separation means recognizing which of a program's computations can be done at specialization time and which should be postponed to run time. • Binding-time analysis sufficient to give a congruent separation is now well automated. • Automatic binding-time analyses sufficiently conservative to guarantee termination of specialization are still a topic of ongoing research.
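A minimal sketch of such an automated analysis, assuming the same tiny tuple-encoded expressions as in the earlier online example: binding times S (static) and D (dynamic) are propagated bottom-up from a division of the variables, and anything that touches a dynamic value is postponed to run time.

```python
def binding_time(expr, division):
    """division maps variable names to 'S' (static) or 'D' (dynamic)."""
    if isinstance(expr, int):
        return "S"                        # literals are static
    if isinstance(expr, str):
        return division[expr]
    op, e1, e2 = expr
    # An operation can be performed at specialization time only if all of
    # its operands are static; otherwise it is dynamic (residual).
    bts = {binding_time(e1, division), binding_time(e2, division)}
    return "S" if bts == {"S"} else "D"

division = {"n": "S", "x": "D"}
print(binding_time(("-", "n", 1), division))               # S
print(binding_time(("*", "x", ("-", "n", 1)), division))   # D
```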

  35. Current Problems with Partial Evaluation • Applicable only to a limited range of languages, and using it requires great expertise. • Need for human understanding. • Greater automation and user convenience: users should not need to give advice on unfolding or on generalization. • Ever-changing languages and systems: an ever-changing context makes systematization and automation quite hard.

  36. Conclusions • Partial evaluation and self-application have many promising applications: generating program generators, e.g., compilers, compiler generators, and other program transformers. • There remain problems with termination of the partial evaluator and sometimes with semantic faithfulness of the specialized program to the input program. • It can be hard to predict how much speedup will be achieved by specialization, and hard to see how to modify the program to improve the speedup.
