
Programming Paradigms for Concurrency




Presentation Transcript


  1. Programming Paradigms for Concurrency Pavol Černý, Vasu Singh, Thomas Wies Art of Multiprocessor Programming

  2. Programming Paradigms for Concurrency • Three parts covering three major paradigms • Classical shared memory programming • Pavol Černý • Programming with transactional memories • Vasu Singh • Message-passing programming • Thomas Wies

  3. Administrivia • Course webpage • http://pub.ist.ac.at/courses/ppc10/ • Feel free to contact the instructors • firstname.lastname@ist.ac.at • Current plan is 6 homework assignments • two per course part • Class project (much more on this today) • Grades: 60% course project, 40% homework • Register at gradschool@ist.ac.at

  4. Programming Paradigms for Concurrency, Part I: Shared Memory Programming Pavol Černý Art of Multiprocessor Programming

  5. Mutual Exclusion Accessing a shared resource: a raised flag means “I am going to use a shared resource” (the other party must not access); a lowered flag means “I am not using a shared resource” (the other party may access).

  6. Mutual Exclusion Alice and Bob start with flag[0] ← 0 and flag[1] ← 0 (no access). Each first tests the other’s flag: Alice reads flag[1] = 0, Bob reads flag[0] = 0. Both tests succeed, so both raise their own flag (flag[0] ← 1, flag[1] ← 1) and access the resource at the same time. Boom!

  7. Mutual Exclusion: Attempt 2 Again flag[0] ← 0 and flag[1] ← 0 (no access). This time each raises its own flag first (flag[0] ← 1, flag[1] ← 1), and only then requests access by waiting for the other’s flag to be lowered: Alice waits until flag[1] = 0, Bob until flag[0] = 0. Neither flag is ever lowered. Now what?!

  8. Mutual Exclusion: Attempt 3 Again flag[0] ← 0 and flag[1] ← 0 (no access). Alice raises her flag and defers: flag[0] ← 1, turn ← 1; she then waits until flag[1] = 0 or turn = 0. Symmetrically, Bob performs flag[1] ← 1, turn ← 0 and waits until flag[0] = 0 or turn = 1. The shared turn variable breaks the tie: whichever thread wrote turn last waits. OK, works!
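The working protocol of slide 8 is Peterson's algorithm. The following Java sketch is illustrative, not the course's code: the flag entries and turn are given volatile semantics (AtomicBoolean and a volatile int), anticipating slide 9's question of whether the plain algorithm works in Java.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class Peterson {
    // AtomicBoolean gives each flag entry volatile read/write semantics;
    // a plain boolean[] would not, and the algorithm could then fail
    // under the Java memory model.
    private final AtomicBoolean[] flag = { new AtomicBoolean(), new AtomicBoolean() };
    private volatile int turn;

    public void lock(int me) {
        int other = 1 - me;
        flag[me].set(true);   // raise my flag: "I want access"
        turn = other;         // give the other thread priority
        // spin while the other is interested and still has priority
        while (flag[other].get() && turn == other) { }
    }

    public void unlock(int me) {
        flag[me].set(false);  // lower my flag
    }

    // Demo harness (our own, for illustration): two threads increment a
    // shared non-volatile counter under the lock.
    static int counter = 0;

    public static int demo(int perThread) throws InterruptedException {
        Peterson l = new Peterson();
        counter = 0;
        Thread[] ts = new Thread[2];
        for (int i = 0; i < 2; i++) {
            final int me = i;
            ts[i] = new Thread(() -> {
                for (int k = 0; k < perThread; k++) {
                    l.lock(me);
                    counter++;        // protected by mutual exclusion
                    l.unlock(me);
                }
            });
        }
        for (Thread t : ts) t.start();
        for (Thread t : ts) t.join();
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo(100_000)); // 200000 iff mutual exclusion holds
    }
}
```

If the lock were broken, lost updates would make the final count fall short of 2 × perThread.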

  9. Mutual exclusion • Questions to ponder: • Can we make do with two shared bits (instead of three)? • How can one extend this idea to n processes? • Does the algorithm work in Java? • Run it and see. What is the problem? • Where is the fault in our proof?
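One standard answer to the n-process question on slide 9 is the Filter lock from Herlihy and Shavit's book, which generalizes the two-thread flag/turn idea to n − 1 levels. A hedged sketch (the demo harness is ours; AtomicIntegerArray is used so that array entries have volatile semantics):

```java
import java.util.concurrent.atomic.AtomicIntegerArray;

public class Filter {
    private final int n;
    private final AtomicIntegerArray level;   // level[i]: highest level thread i has reached
    private final AtomicIntegerArray victim;  // victim[L]: last thread to enter level L

    public Filter(int n) {
        this.n = n;
        level = new AtomicIntegerArray(n);
        victim = new AtomicIntegerArray(n);
    }

    public void lock(int me) {
        for (int L = 1; L < n; L++) {         // climb levels 1 .. n-1
            level.set(me, L);
            victim.set(L, me);                // volunteer to wait at this level
            // spin while some other thread is at my level or higher
            // and I am still the victim of this level
            boolean wait = true;
            while (wait) {
                wait = false;
                for (int k = 0; k < n; k++) {
                    if (k != me && level.get(k) >= L && victim.get(L) == me) {
                        wait = true;
                        break;
                    }
                }
            }
        }
    }

    public void unlock(int me) {
        level.set(me, 0);
    }

    // Demo harness (our own): n threads increment a shared counter.
    static int counter = 0;

    public static int demo(int threads, int perThread) throws InterruptedException {
        Filter lock = new Filter(threads);
        counter = 0;
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            final int me = i;
            ts[i] = new Thread(() -> {
                for (int k = 0; k < perThread; k++) {
                    lock.lock(me);
                    counter++;
                    lock.unlock(me);
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo(4, 10_000)); // 40000 iff mutual exclusion holds
    }
}
```

At each level at least one thread is filtered out, so at most one thread survives all n − 1 levels; for n = 2 this is essentially the two-thread protocol of slide 8.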

  10. Schedule

  11. How many of you have seen…? Have you written concurrent programs? In Java? In pthreads? … Bakery algorithm? Queue locks? Linearizability? Sequential consistency? compareAndSet? Concurrent hash tables?

  12. Projects • Topic: • good: choose from among our suggestions • better: define your own project • On your own or in groups of two • Pick a project: before Christmas • Progress report 1: January 15th (2 pages) • Presentation and final report: January 27th and February 3rd (final report: 4 pages)

  13. Project 1: Irregular data parallelism Example: Delaunay mesh refinement, where the cavity around a bad triangle is retriangulated. Effects of updates are local, but not bounded statically (“irregular”). Can we still exploit locality for parallelism?

  14. Project 1: Irregular data parallelism Locality of effects: Mesh retriangulation http://iss.ices.utexas.edu/lonestar/index.html

  15. Project 1: Irregular data parallelism Lonestar benchmark suite: http://iss.ices.utexas.edu/lonestar/index.html Barnes-Hut N-body simulation Delaunay mesh refinement Focused communities Delaunay triangulation … Project: 1. pick one of these applications, 2. find a good (possibly novel) way of parallelizing it, 3. implement it (by modifying the sequential implementation provided in the Lonestar benchmarks), 4. confirm improvement in running time by experimentation.
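The irregular-parallelism pattern behind these applications can be sketched generically: worker threads repeatedly take an item from a shared worklist, lock the small "neighborhood" the update touches, and apply a local change. The code below is our own minimal illustration of that pattern (it is not Lonestar code; the cell array stands in for a mesh), with per-cell locks acquired in a fixed order to avoid deadlock.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class Worklist {
    // Each work item touches a two-cell "neighborhood": cell i and cell i+1.
    // Returns the total number of increments applied, for checking.
    public static int demo(int cellsN, int items, int threads) throws InterruptedException {
        final Object[] locks = new Object[cellsN];   // one lock per cell
        final int[] cells = new int[cellsN];
        final ConcurrentLinkedQueue<Integer> work = new ConcurrentLinkedQueue<>();
        for (int i = 0; i < cellsN; i++) locks[i] = new Object();
        for (int i = 0; i < items; i++) work.add(i % cellsN);

        Thread[] ts = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            ts[t] = new Thread(() -> {
                Integer item;
                while ((item = work.poll()) != null) {
                    int a = item, b = (item + 1) % cellsN;   // neighborhood of this update
                    int lo = Math.min(a, b), hi = Math.max(a, b);
                    synchronized (locks[lo]) {               // lock the neighborhood in a
                        synchronized (locks[hi]) {           // fixed order: no deadlock
                            cells[a]++;                      // the local update
                            cells[b]++;
                        }
                    }
                }
            });
            ts[t].start();
        }
        for (Thread t : ts) t.join();

        int sum = 0;
        for (int c : cells) sum += c;
        return sum;                                  // should equal 2 * items
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo(8, 1_000, 4));       // 2000: every update applied once
    }
}
```

In a real mesh-refinement code the neighborhood is discovered dynamically (the cavity), which is exactly why the locking cannot be bounded statically; speculative approaches roll back when two cavities overlap.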

  16. Project 2: Deductive verification: proving a concurrent data structure correct Pick an implementation of a concurrent data structure: a stack, a queue, a set, … Pick a theorem prover or a verification tool, for example PVS or QED. Prove that the implementation is linearizable. (Diagram: threads P1: remove(7) and P2: remove(5) operating concurrently on the list 7 → 3 → 9 → 5.)

  17. Project 2: Deductive verification: proving a concurrent data structure correct References: R. Colvin, L. Groves, V. Luchangco, M. Moir: Formal Verification of a Lazy Concurrent List-Based Set Algorithm. CAV 2006. T. Elmas, S. Qadeer, A. Sezgin, O. Subasi, S. Tasiran: Simplifying Linearizability Proofs with Reduction and Abstraction. TACAS 2010.

  18. Project 3: Performance measurement / performance model for concurrent programs • Pick a problem with at least three or four different solutions • Lock implementations • Data structures: queues, stacks, sets… • Examine the performance of the solutions in different settings: • small number of threads vs large number of threads • 2 cores, small amount of memory (laptop) vs 8 cores, large memory/cache (server) • different usage models • input that generates little contention vs input that generates a lot of contention • 3a. Find a hybrid solution that works well in a particular setting • or • 3b. Find a performance model that explains the data
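A possible starting skeleton for the measurement step (our own illustrative code and names, not part of the course materials): run the same increment workload against a synchronized counter and a CAS-based counter while varying the thread count, and record elapsed time. Absolute numbers are machine-dependent, so the harness prints them rather than asserting on them.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class Bench {
    public interface Counter { void inc(); int get(); }

    // Coarse-grained locking: every increment takes the object monitor.
    public static class LockedCounter implements Counter {
        private int c;
        public synchronized void inc() { c++; }
        public synchronized int get() { return c; }
    }

    // Lock-free alternative: compare-and-set via AtomicInteger.
    public static class CasCounter implements Counter {
        private final AtomicInteger c = new AtomicInteger();
        public void inc() { c.incrementAndGet(); }
        public int get() { return c.get(); }
    }

    // Run `threads` threads doing `perThread` increments; return elapsed nanos.
    public static long run(Counter counter, int threads, int perThread)
            throws InterruptedException {
        Thread[] ts = new Thread[threads];
        long start = System.nanoTime();
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> { for (int k = 0; k < perThread; k++) counter.inc(); });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        for (int threads : new int[] {1, 2, 4, 8}) {
            Counter a = new LockedCounter(), b = new CasCounter();
            long ta = run(a, threads, 100_000);
            long tb = run(b, threads, 100_000);
            System.out.printf("%d threads: synchronized %d us, CAS %d us%n",
                    threads, ta / 1000, tb / 1000);
        }
    }
}
```

For credible project results this skeleton would still need warm-up runs, repetition, and variance reporting; JVM micro-benchmarks are notoriously sensitive to JIT effects.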

  19. Project 4: Your own!
