
Lecture 19: Programming Concepts for Parallel Computers

Explore the challenges and possibilities of programming languages for parallel computers, with a focus on concurrency and parallelism. Includes a discussion on the use of concurrency primitives like fork and join.



Presentation Transcript


  1. Lecture 19: ||ism
“I don’t think we have found the right programming concepts for parallel computers yet. When we do, they will almost certainly be very different from anything we know today.” – Per Brinch Hansen, “Concurrent Pascal” (last sentence), HOPL II, 1993
“My only serious debate with your account is with the very last sentence. I do not believe there is any ‘right’ collection of programming concepts for parallel (or even sequential) computers. The design of a language is always a compromise, in which the designer must take into account the desired level of abstraction, the target machine architecture, and the proposed range of applications.” – C. A. R. Hoare, comment at HOPL II, 1993
David Evans, http://www.cs.virginia.edu/~evans, CS655: Programming Languages, University of Virginia Computer Science

  2. Menu • Readings Policy • Challenge Problem (Lecture 17) • Techniques for Concurrent Programming • Definitions • Understanding concurrency primitives University of Virginia CS 655

  3. Remaining Readings • Always read the abstract • Read the rest if it seems interesting to you • Use the time you save not having required readings to: • Work on your course project • Work on your research • We’ve covered enough to: • Decide if you are interested in research related to programming languages • Give you a solid enough background to not embarrass yourself

  4. Challenge Problem • Prove or disprove:
frag1:  i := 1; while i < n do x := x * i; i := i + 1; end; i := 0;
is observationally equivalent to:
frag2:  i := n; while i > 0 do x := x * i; i := i - 1; end; i := 0;
when n >= 0.
• Possible approaches: • Use fixed point machinery to get the meaning of both as state transformation functions and show they are equivalent • Use induction to get the meaning of both for a given n and show they are the same for all n • Use induction to show frag1 (n = 0) defines the same function as frag2 (n = 0), and if frag1 (n) is equivalent to frag2 (n) then frag1 (n + 1) is equivalent to frag2 (n + 1)
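As a quick sanity check before reaching for the semantic machinery, the two fragments can be transcribed into Python and compared on small inputs. The function names and the choice of observing the final (i, x) store are ours, not part of the challenge:

```python
def frag1(n, x):
    # i := 1; while i < n do x := x * i; i := i + 1; end; i := 0
    i = 1
    while i < n:
        x = x * i
        i = i + 1
    i = 0
    return (i, x)

def frag2(n, x):
    # i := n; while i > 0 do x := x * i; i := i - 1; end; i := 0
    i = n
    while i > 0:
        x = x * i
        i = i - 1
    i = 0
    return (i, x)

# Compare the final stores for small n: frag1 leaves x times (n-1)!
# for n >= 1, frag2 leaves x times n!, so they first diverge at n = 2.
for n in range(4):
    print(n, frag1(n, 1), frag2(n, 1))
```

An empirical check like this can only disprove equivalence; when the fragments do agree, the listed proof techniques are still needed.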

  5. Sequential Programming • So far, most languages we have seen provide a sequential programming model: • Language definition specifies a sequential order of execution • Language implementation may attempt to parallelize programs, but they must behave as though they are sequential • Exceptions: Algol68, Ada, Java include support for concurrency

  6. Definitions • Concurrency – any model of computation supporting partially ordered time. (Semantic notion) • Parallelism – hardware that can execute multiple threads simultaneously • A concurrent program may be executed without parallelism; hardware may provide parallelism without concurrency

  7. Parallelism without Concurrency • Smart compilers can figure out how to implement a sequential program in parallel. • Every parallel computation can be executed sequentially.

  8. Concurrent Programming Languages • Expose parallelism to the programmer • Some problems are clearer to program using explicit parallelism • Modularity • Don’t have to explicitly interleave code for different abstractions • High-level interactions – synchronization, communication • Modelling • Closer map to real world problems • Provide performance benefits of parallelism when the compiler could not find it automatically

  9. Fork & Join • Concurrency primitives: • fork E → ThreadHandle • Creates a new thread that evaluates Expression E; returns a unique handle identifying that thread. • join T • Waits for the thread identified by ThreadHandle T to complete.
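A minimal sketch of these two primitives on top of Python's threading module. The wrapper names fork and join mirror the slide; they are not a standard API:

```python
import threading

def fork(expr, *args):
    """Create a new thread that evaluates expr(*args); return its handle."""
    handle = threading.Thread(target=expr, args=args)
    handle.start()
    return handle

def join(handle):
    """Wait for the thread identified by handle to complete."""
    handle.join()

# Two threads append to a shared list; join ensures both have finished.
results = []
t1 = fork(results.append, 1)
t2 = fork(results.append, 2)
join(t1)
join(t2)
print(sorted(results))  # [1, 2] -- the completion order is not guaranteed
```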

  10. Bjarfk (BARK with Fork & Join)
Program ::= Instruction*
A program is a sequence of instructions. Instructions are numbered from 0; execution begins at instruction 0 and completes when the initial thread halts.
Instruction ::= Loc := Expression (Loc gets the value of Expression)
| Loc := FORK Expression (Loc gets the value of the ThreadHandle returned by FORK; starts a new thread at the instruction numbered Expression)
| JOIN Expression (waits until the thread associated with ThreadHandle Expression completes)
| HALT (stop thread execution)
Expression ::= Literal | Expression + Expression | Expression * Expression

  11. Bjarfk Program
[0] R0 := 1
[1] R1 := FORK 10
[2] R2 := FORK 20
[3] JOIN R1
[4] R0 := R0 * 3
[5] JOIN R2
[6] HALT % result in R0
[10] R0 := R0 + 1
[11] HALT
[20] R0 := R0 * 2
[21] HALT
Atomic instructions: a1: R0 := R0 + 1, a2: R0 := R0 * 2, x3: R0 := R0 * 3. Partial ordering: a1 <= x3. So the possible results are: (a1, a2, x3) = 12, (a2, a1, x3) = 9, (a1, x3, a2) = 12. What if assignment instructions are not atomic?
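The admissible interleavings can be replayed directly. Note that a2 is the R0 := R0 * 2 at instruction [20], and the listed results correspond to starting from R0 = 1:

```python
# Replay each interleaving of the atomic instructions, starting from R0 = 1.
OPS = {
    'a1': lambda v: v + 1,   # [10] R0 := R0 + 1
    'a2': lambda v: v * 2,   # [20] R0 := R0 * 2
    'x3': lambda v: v * 3,   # [4]  R0 := R0 * 3
}

def run(order):
    r0 = 1                   # [0] R0 := 1
    for step in order:
        r0 = OPS[step](r0)
    return r0

print(run(['a1', 'a2', 'x3']))  # 12
print(run(['a2', 'a1', 'x3']))  # 9
print(run(['a1', 'x3', 'a2']))  # 12
```

The JOIN R1 forces a1 before x3, while a2 is unordered with respect to both; that is why exactly these three interleavings, and only the two distinct answers 9 and 12, are possible.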

  12. What formal tool should we use to understand FORK and JOIN?

  13. Operational Semantics Game
Real World: Program → ... → Answer
Abstract Machine: Initial Configuration → Intermediate Configurations → Final Configuration
The Input Function maps a Program to an Initial Configuration; the Transition Rules step between Intermediate Configurations; the Output Function maps a Final Configuration to an Answer.

  14. Structured Operational Semantics
SOS for a language is a five-tuple:
C: set of configurations for an abstract machine
⇒: transition relation (subset of C x C)
I: Program → C (input function)
F: set of final configurations
O: F → Answer (output function)

  15. Sequential Configurations
Configuration defined by:
• Array of Instructions
• Program counter (PC)
• Values in registers (any integer)
C = Instructions x PC x RegisterFile
[Figure: the PC points at one entry of the instruction array; the register file maps every integer index, negative and positive, to a value.]

  16. Concurrent Configurations
Configuration defined by:
• Array of Instructions
• Array of Threads, where Thread = <ThreadHandle, PC>
• Values in registers (any integer)
C = Instructions x Threads x RegisterFile
[Figure: each thread’s PC points into the shared instruction array; all threads share one register file.]
Architecture question: is this a SIMD/MIMD/SISD/MISD model?

  17. Input Function
I: Program → C, where C = Instructions x Threads x RegisterFile. For a Program with n instructions numbered 0 to n - 1:
Instructions[m] = Program[m] for m >= 0 && m < n
Instructions[m] = ERROR otherwise
RegisterFile[n] = 0 for all integers n
Threads = [ <0, 0> ]
The top thread (identified with ThreadHandle = 0) starts at PC = 0.

  18. Final Configurations
F = Instructions x Threads x RegisterFile where <0, PC> ∈ Threads and Instructions[PC] = HALT
A different possibility: F = Instructions x Threads x RegisterFile where for all <t, PCt> ∈ Threads, Instructions[PCt] = HALT

  19. Assignment
<t, PCt> ∈ Threads & Instructions[PCt] = Loc := Value
< Instructions x Threads x RegisterFile > ⇒ < Instructions x Threads’ x RegisterFile’ >
where Threads’ = Threads – {<t, PCt>} + {<t, PCt + 1>}
RegisterFile’[n] = RegisterFile[n] if n ≠ Loc
RegisterFile’[n] = value of Value if n = Loc
Note: we also need a rule to deal with Loc := Expression; we can rewrite until we have a literal on the RHS.

  20. Fork
<t, PCt> ∈ Threads & Instructions[PCt] = Loc := FORK Literal
< Instructions x Threads x RegisterFile > ⇒ < Instructions x Threads’ x RegisterFile’ >
where Threads’ = Threads – {<t, PCt>} + {<t, PCt + 1>} + {<nt, Literal>}, where <nt, x> ∉ Threads for all possible x (nt is a fresh ThreadHandle)
RegisterFile’[n] = RegisterFile[n] if n ≠ Loc
RegisterFile’[n] = value of ThreadHandle nt if n = Loc

  21. Join
<t, PCt> ∈ Threads & Instructions[PCt] = JOIN Value & <v, PCv> ∈ Threads & Instructions[PCv] = HALT & v = value of Value
< Instructions x Threads x RegisterFile > ⇒ < Instructions x Threads’ x RegisterFile >
where Threads’ = Threads – {<t, PCt>} + {<t, PCt + 1>}
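The Assignment, Fork, and Join rules can be animated as a small interpreter. The sketch below is our own Python rendering (with ad hoc instruction encodings, not the lecture's notation); it runs the Bjarfk program from slide 11, explores every interleaving as a nondeterministic choice of which enabled thread fires, and collects the possible final values of R0:

```python
# The Bjarfk program of slide 11, encoded as pc -> instruction.
PROG = {
    0:  ('asgn', 'R0', lambda R: 1),            # [0]  R0 := 1
    1:  ('fork', 'R1', 10),                     # [1]  R1 := FORK 10
    2:  ('fork', 'R2', 20),                     # [2]  R2 := FORK 20
    3:  ('join', 'R1'),                         # [3]  JOIN R1
    4:  ('asgn', 'R0', lambda R: R['R0'] * 3),  # [4]  R0 := R0 * 3
    5:  ('join', 'R2'),                         # [5]  JOIN R2
    6:  ('halt',),                              # [6]  HALT
    10: ('asgn', 'R0', lambda R: R['R0'] + 1),  # [10] R0 := R0 + 1
    11: ('halt',),                              # [11] HALT
    20: ('asgn', 'R0', lambda R: R['R0'] * 2),  # [20] R0 := R0 * 2
    21: ('halt',),                              # [21] HALT
}

def step(threads, regs):
    """Yield every configuration reachable by one transition rule."""
    for (t, pc) in threads:
        kind = PROG[pc][0]
        rest = threads - {(t, pc)}
        if kind == 'asgn':                       # Assignment rule
            _, loc, f = PROG[pc]
            yield rest | {(t, pc + 1)}, {**regs, loc: f(regs)}
        elif kind == 'fork':                     # Fork rule: fresh handle nt
            _, loc, start = PROG[pc]
            nt = max(h for h, _ in threads) + 1
            yield rest | {(t, pc + 1), (nt, start)}, {**regs, loc: nt}
        elif kind == 'join':                     # Join rule: enabled only if
            v = regs.get(PROG[pc][1])            # the joined thread has HALTed
            if any(h == v and PROG[p][0] == 'halt' for h, p in threads):
                yield rest | {(t, pc + 1)}, regs
        # 'halt': no transition for this thread

def finals(threads, regs, seen=None):
    """DFS over all interleavings; collect R0 whenever thread 0 has halted."""
    seen = set() if seen is None else seen
    key = (frozenset(threads), tuple(sorted(regs.items())))
    if key in seen:
        return set()
    seen.add(key)
    results = set()
    if any(t == 0 and PROG[pc][0] == 'halt' for t, pc in threads):
        results.add(regs['R0'])                  # final configuration reached
    for nthreads, nregs in step(threads, regs):
        results |= finals(nthreads, nregs, seen)
    return results

print(sorted(finals(frozenset({(0, 0)}), {})))   # [9, 12]
```

The two answers, 9 and 12, agree with the partial-order argument on slide 11; the nondeterminism of the rules shows up only in which enabled thread is stepped.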

  22. What else is needed? • Can we build all the useful concurrency primitives we need using FORK and JOIN? • Can we implement a semaphore? • No, we need an atomic test-and-acquire operation
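To see what an atomic test-and-acquire buys, here is a sketch using Python's threading.Lock as one possible realization of ACQUIRE/RELEASE (the shared-counter workload is our own illustration): with the lock held around each read-modify-write, the result is deterministic no matter how the threads interleave.

```python
import threading

counter = 0
lock = threading.Lock()          # provides the atomic test-and-acquire

def increment(n):
    global counter
    for _ in range(n):
        with lock:               # ACQUIRE ... RELEASE around the update
            counter += 1

# Fork four threads, then join them; fork/join alone would not
# protect the counter += 1 read-modify-write.
threads = [threading.Thread(target=increment, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000, regardless of scheduling
```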

  23. Locking Statements
Program ::= LockDeclaration* Instruction*
LockDeclaration ::= PROTECT LockHandle Loc (prohibits reading or writing location Loc in a thread that does not hold the lock LockHandle)
Instruction ::= ACQUIRE LockHandle (acquires the lock identified by LockHandle; if another thread has acquired the lock, the thread stalls until the lock is available)
Instruction ::= RELEASE LockHandle (releases the lock identified by LockHandle)

  24. Locking Semantics
C = Instructions x Threads x RegisterFile x Locks, where Locks = { <LockHandle, owner, Loc> } and owner is either a ThreadHandle or free
I: Program → C is the same as before, with Locks = { <LockHandle, free, Loc> | PROTECT LockHandle Loc ∈ LockDeclarations }

  25. Acquire
<t, PCt> ∈ Threads & Instructions[PCt] = ACQUIRE LockHandle & <LockHandle, free, S> ∈ Locks
< Instructions x Threads x RegisterFile x Locks > ⇒ < Instructions x Threads’ x RegisterFile x Locks’ >
where Threads’ = Threads – {<t, PCt>} + {<t, PCt + 1>}
Locks’ = Locks – {<LockHandle, free, S>} + {<LockHandle, t, S>}

  26. Release
<t, PCt> ∈ Threads & Instructions[PCt] = RELEASE LockHandle & <LockHandle, t, S> ∈ Locks
< Instructions x Threads x RegisterFile x Locks > ⇒ < Instructions x Threads’ x RegisterFile x Locks’ >
where Threads’ = Threads – {<t, PCt>} + {<t, PCt + 1>}
Locks’ = Locks – {<LockHandle, t, S>} + {<LockHandle, free, S>}

  27. New Assignment Rule
<t, PCt> ∈ Threads & Instructions[PCt] = Loc := Value & ( <LockHandle, t, Loc> ∈ Locks, or there is no x such that <LockHandle, x, Loc> ∈ Locks )
The rest is the same as the old Assignment rule: the assignment fires only if thread t holds the lock protecting Loc, or no lock protects Loc.

  28. Abstractions • Can we describe all the concurrency abstractions in Finkel’s chapter using our primitives? • Binary semaphore: equivalent to our ACQUIRE/RELEASE • Monitor: abstraction using a lock • But: no way to set thread priorities with our mechanisms (operational semantics gives no guarantees about which rule is used when multiple rules match)

  29. Summary • Hundreds of different concurrent programming languages • [Bal, Steiner, Tanenbaum 1989] lists over 200 papers on 100 different concurrent languages! • Primitives are easy (fork, join, acquire, release); finding the right abstractions is hard

  30. Charge • Linda Papers • Describes an original approach to concurrent programming • Basis for Sun’s JavaSpaces technology (framework for distributed computing using Jini) • Project progress • You should have working implementations this week • Schedule a meeting with me if you are behind schedule
