  1. Languages and Compilers (SProg og Oversættere): Concurrency and distribution. Bent Thomsen, Department of Computer Science, Aalborg University. With acknowledgement to John Mitchell, whose slides this lecture is based on.

  2. Concurrency, distributed computing, the Internet • Traditional view: • Let the OS deal with this • => It is not a programming language issue! • End of Lecture • Wait-a-minute … • Maybe “the traditional view” is getting out of date?

  3. Languages with concurrency constructs • Maybe the “traditional view” was always out of date? • Simula • Modula-3 • Occam • Concurrent Pascal • Ada • Linda • CML • Facile • JoCaml • Java • C# • …

  4. Categories of Concurrency • Physical concurrency – multiple independent processors (multiple threads of control): • Uni-processor with I/O channels (multi-programming) • Multiple CPUs (parallel programming) • Network of uni- or multi-CPU machines (distributed programming) • Logical concurrency – the appearance of physical concurrency is provided by time-sharing one processor (software can be designed as if there were multiple threads of control) • Concurrency as a programming abstraction • Def: A thread of control in a program is the sequence of program points reached as control flows through the program

  5. Introduction • Reasons to Study Concurrency: 1. It involves a different way of designing software that can be very useful; many real-world situations involve concurrency • Control programs • Simulations • Client/servers • Mobile computing • Games 2. Computers capable of physical concurrency are now widely used • High-end servers • Game consoles • Grid computing

  6. The promise of concurrency • Speed • If a task takes time t on one processor, shouldn’t it take time t/n on n processors? • Availability • If one process is busy, another may be ready to help • Distribution • Processors in different locations can collaborate to solve a problem or work together • Humans do it so why can’t computers? • Vision, cognition appear to be highly parallel activities

  7. Challenges • Concurrent programs are harder to get right • Folklore: Need an order of magnitude speedup (or more) to be worth the effort • Some problems are inherently sequential • Theory – circuit evaluation is P-complete • Practice – many problems need coordination and communication among sub-problems • Specific issues • Communication – send or receive information • Synchronization – wait for another process to act • Atomicity – do not stop in the middle and leave a mess

  8. Why is concurrent programming hard? • Nondeterminism • Deterministic: two executions on the same input always produce the same output • Nondeterministic: two executions on the same input may produce different outputs • Why does this cause difficulty? • There may be many possible executions of one system • Hard to think of all the possibilities • Hard to test the program, since some executions may occur infrequently (see the sketch below)
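To make the nondeterminism concrete, here is a minimal Java sketch (a hypothetical example, not from the slides): two threads increment a shared counter without synchronization, so different runs may print different totals.

    public class RaceDemo {
        static int counter = 0;  // shared, unsynchronized variable

        public static void main(String[] args) throws InterruptedException {
            Runnable work = () -> {
                for (int i = 0; i < 100000; i++) {
                    counter++;  // read-modify-write, not atomic: increments can be lost
                }
            };
            Thread t1 = new Thread(work);
            Thread t2 = new Thread(work);
            t1.start(); t2.start();
            t1.join(); t2.join();
            // Often prints less than 200000, and a different value on each run.
            System.out.println(counter);
        }
    }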

  9. Traditional C library for concurrency • System calls: • fork() • wait() • pipe() • write() • read() • Examples follow

  10. Process creation: fork()
NAME
    fork – create a new process
SYNOPSIS
    #include <sys/types.h>
    #include <unistd.h>
    pid_t fork(void);
RETURN VALUE
    On success, the parent receives the child's pid and the child receives 0; on failure, -1.

  11. fork() – program structure
    #include <sys/types.h>
    #include <unistd.h>
    #include <stdlib.h>
    #include <stdio.h>

    int main(void)
    {
        pid_t pid;
        if ((pid = fork()) > 0) {
            /* parent */
        } else if (pid == 0) {
            /* child */
        } else {
            /* cannot fork */
        }
        exit(0);
    }

  12. wait() system call • wait() – suspend the calling process until one of its child processes finishes executing
SYNOPSIS
    #include <sys/types.h>
    #include <sys/wait.h>
    pid_t wait(int *stat_loc);
PARAMETER
    stat_loc points to an integer in which the child's termination status is stored (it may be NULL if the status is not wanted)
RETURN VALUE
    success: the pid of the terminated child; failure: -1, and errno is set

  13. wait() – program structure
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <stdlib.h>
    #include <stdio.h>

    int main(int argc, char* argv[])
    {
        pid_t childPID;
        if ((childPID = fork()) == 0) {
            /* child */
        } else {
            /* parent */
            wait(0);
        }
        exit(0);
    }

  14. pipe() system call • pipe() – create a read-write pipe that may later be used to communicate with a process we’ll fork off
SYNOPSIS
    int pipe(int pfd[2]);
PARAMETER
    pfd is an array of 2 integers that will be used to save the two file descriptors used to access the pipe
RETURN VALUE
    0 – success; -1 – error

  15. pipe() – structure
    /* first, define an array to store the two file descriptors */
    int pipes[2];

    /* now, create the pipe */
    int rc = pipe(pipes);
    if (rc == -1) {
        /* pipe() failed */
        perror("pipe");
        exit(1);
    }
If the call to pipe() succeeded, a pipe will be created, pipes[0] will contain the number of its read file descriptor, and pipes[1] will contain the number of its write file descriptor.

  16. write() system call • write() – used to write data to a file or other object identified by a file descriptor
SYNOPSIS
    #include <unistd.h>
    ssize_t write(int fildes, const void *buf, size_t nbyte);
PARAMETERS
    fildes is the file descriptor; buf is the base address of the area of memory that data is copied from; nbyte is the amount of data to copy
RETURN VALUE
    The actual amount of data written; if this differs from nbyte, something has gone wrong (on error, -1 is returned)

  17. read() system call • read() – read data from a file or other object identified by a file descriptor
SYNOPSIS
    #include <unistd.h>
    ssize_t read(int fildes, void *buf, size_t nbyte);
ARGUMENTS
    fildes is the file descriptor; buf is the base address of the memory area into which the data is read; nbyte is the maximum amount of data to read
RETURN VALUE
    The actual amount of data read from the file (0 at end of file, -1 on error). The file offset is advanced by the amount of data read.

  18. Solaris 2 Synchronization • Implements a variety of locks to support multitasking, multithreading (including real-time threads), and multiprocessing • Uses adaptive mutexes for efficiency when protecting data accessed by short code segments • Uses condition variables and readers-writers locks when longer sections of code need access to data • Uses turnstiles to order the list of threads waiting to acquire either an adaptive mutex or a readers-writers lock

  19. Windows 2000 Synchronization • Uses interrupt masks to protect access to global resources on uniprocessor systems • Uses spinlocks on multiprocessor systems • Also provides dispatcher objects, which may act as either mutexes or semaphores • Dispatcher objects may also provide events; an event acts much like a condition variable

  20. Basic question • Maybe the library approach is not such a good idea? • How can programming languages make concurrent and distributed programming easier?

  21. Language support for concurrency • Helps promote good software engineering • Allows the programmer to express solutions closer to the problem domain • No need to juggle several programming models (hardware, OS, library, …) • Makes invariants and intentions more apparent (part of the interface and/or type system) • Allows the compiler much more freedom to choose different implementations • Basing the programming language constructs on a well-understood formal model => formal reasoning becomes less hard and the use of tools may be possible

  22. What could languages provide? • Abstract model of system • abstract machine => abstract system • Example high-level constructs • Communication abstractions • Synchronous communication • Buffered asynchronous channels that preserve message order • Mutual exclusion, atomicity primitives • Most concurrent languages provide some form of locking • Atomicity is more complicated, less commonly provided • Process as the value of an expression (see the sketch below) • Pass processes to functions • Create processes as the result of a function call
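As a hedged illustration of “process as the value of an expression” in Java terms (class and method names here are hypothetical): thread objects are ordinary values, so they can be returned from functions and passed to them.

    import java.util.List;

    public class ProcessesAsValues {
        // A function whose result is a process (a not-yet-started thread).
        static Thread printer(String msg) {
            return new Thread(() -> System.out.println(msg));
        }

        // A function that takes processes as arguments and starts them.
        static void startAll(List<Thread> threads) {
            threads.forEach(Thread::start);
        }

        public static void main(String[] args) {
            startAll(List.of(printer("hello"), printer("world")));
        }
    }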

  23. Basic issue: conflict between processes • Critical section • Two processes may access a shared resource • Inconsistent behavior if two actions are interleaved • Allow only one process in the critical section at a time • Deadlock • A process may hold some locks while awaiting others • Deadlock occurs when no process can proceed (see the sketch below)
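A minimal Java sketch of the deadlock pattern (a hypothetical example, not from the slides): each thread holds one lock while waiting for the other's, so neither can proceed. Acquiring locks in a fixed global order avoids this.

    public class DeadlockDemo {
        static final Object lockA = new Object();
        static final Object lockB = new Object();

        public static void main(String[] args) {
            new Thread(() -> {
                synchronized (lockA) {
                    pause();                  // make the bad interleaving likely
                    synchronized (lockB) { }  // blocks forever if the other thread holds lockB
                }
            }).start();
            new Thread(() -> {
                synchronized (lockB) {
                    pause();
                    synchronized (lockA) { }  // blocks forever if the other thread holds lockA
                }
            }).start();
        }

        static void pause() {
            try { Thread.sleep(50); } catch (InterruptedException e) { }
        }
    }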

  24. Concurrency • Def: A task is disjoint if it does not communicate with or affect the execution of any other task in the program in any way • Task communication is necessary for synchronization • Task communication can be through: 1. Shared nonlocal variables 2. Parameters 3. Message passing

  25. Synchronization • Kinds of synchronization: 1. Cooperation • Task A must wait for task B to complete some specific activity before task A can continue its execution, e.g., the producer-consumer problem 2. Competition • When two or more tasks must use some resource that cannot be used simultaneously, e.g., a shared counter • Competition is usually handled by mutually exclusive access (approaches are discussed later)

  26. Design Issues for Concurrency • How is cooperation synchronization provided? • How is competition synchronization provided? • How and when do tasks begin and end execution? • Are tasks statically or dynamically created? • Are there any syntactic constructs in the language? • Are concurrency constructs reflected in the type system?

  27. Concurrent Pascal: cobegin/coend • Limited concurrency primitive • Example:
    x := 0;
    cobegin
      begin x := 1; x := x+1 end;
      begin x := 2; x := x+1 end;
    coend;
    print(x);
cobegin/coend executes the sequential blocks in parallel [diagram: after x := 0, the blocks x := 1; x := x+1 and x := 2; x := x+1 run concurrently, then print(x)]. Atomicity is at the level of the assignment statement.

  28. Mutual exclusion • Sample action:
    procedure sign_up(person)
    begin
      number := number + 1;
      list[number] := person;
    end;
• Problem with parallel execution:
    cobegin
      sign_up(fred);
      sign_up(bill);
    end;
[diagram: two resulting list states, showing that an interleaved execution can lose one of the sign-ups]

  29. Locks and Waiting
    <initialize concurrency control>
    cobegin
      begin
        <wait>
        sign_up(fred);  // critical section
        <signal>
      end;
      begin
        <wait>
        sign_up(bill);  // critical section
        <signal>
      end;
    coend;
Need atomic operations to implement wait.
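In Java, the role of the <wait>/<signal> brackets can be played by a binary semaphore; a hedged sketch (names like signUp are hypothetical adaptations of the slide's sign_up):

    import java.util.concurrent.Semaphore;

    public class SignUpDemo {
        static final Semaphore mutex = new Semaphore(1);  // <initialize concurrency control>
        static final String[] list = new String[10];
        static int number = 0;

        static void signUp(String person) {
            number = number + 1;
            list[number] = person;
        }

        static void guardedSignUp(String person) {
            mutex.acquireUninterruptibly();  // <wait>
            signUp(person);                  // critical section
            mutex.release();                 // <signal>
        }

        public static void main(String[] args) {
            new Thread(() -> guardedSignUp("fred")).start();
            new Thread(() -> guardedSignUp("bill")).start();
        }
    }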

  30. Mutual exclusion primitives • Atomic test-and-set • Instruction atomically reads and writes some location • Common hardware instruction • Combine with busy-waiting loop to implement mutex • Semaphore • Avoid busy-waiting loop • Keep queue of waiting processes • Scheduler has access to semaphore; process sleeps • Disable interrupts during semaphore operations • OK since operations are short
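A hedged Java sketch of the test-and-set idea: AtomicBoolean.compareAndSet plays the role of the atomic hardware instruction, and a busy-waiting loop turns it into a spinlock (the SpinLock class is a hypothetical illustration, not a library type).

    import java.util.concurrent.atomic.AtomicBoolean;

    class SpinLock {
        private final AtomicBoolean locked = new AtomicBoolean(false);

        void lock() {
            // Atomic test-and-set: if the flag was false (unlocked),
            // set it to true (locked) in one indivisible step.
            while (!locked.compareAndSet(false, true)) {
                // busy-wait until the lock is released
            }
        }

        void unlock() {
            locked.set(false);
        }
    }

A semaphore avoids the busy-wait by queueing sleeping threads; in Java, java.util.concurrent.Semaphore provides this behaviour.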

  31. Monitor Brinch-Hansen, Dahl, Dijkstra, Hoare • Synchronized access to private data. Combines: • private data • set of procedures (methods) • synchronization policy • At most one process may execute a monitor procedure at a time; this process is said to be in the monitor. • If one process is in the monitor, any other process that calls a monitor procedure will be delayed. • Modern terminology: synchronized object
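In the slide's “modern terminology”, a minimal Java sketch of a monitor: private data plus synchronized methods, so at most one thread is in the monitor at a time (CounterMonitor is a hypothetical example).

    class CounterMonitor {
        private int count = 0;  // private data, reachable only through the methods

        public synchronized void increment() { count++; }  // one thread at a time

        public synchronized int get() { return count; }
    }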

  32. Java Concurrency • Threads • Create a process by creating a thread object • Communication • shared variables • method calls • Mutual exclusion and synchronization • Every object has a lock (inherited from class Object) • synchronized methods and blocks • Synchronization operations (inherited from class Object) • wait : pause current thread until another thread calls notify • notify : wake up one waiting thread (notifyAll wakes them all)
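A hedged sketch of wait and notify: a one-slot buffer where get pauses until put supplies a value, and vice versa (compare the Polyphonic C# Buffer later in the deck; OneSlotBuffer is a hypothetical example).

    class OneSlotBuffer {
        private String slot = null;

        public synchronized void put(String s) throws InterruptedException {
            while (slot != null) {
                wait();        // pause until the slot has been emptied
            }
            slot = s;
            notifyAll();       // wake threads waiting in get()
        }

        public synchronized String get() throws InterruptedException {
            while (slot == null) {
                wait();        // pause until a value arrives
            }
            String s = slot;
            slot = null;
            notifyAll();       // wake threads waiting in put()
            return s;
        }
    }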

  33. Java Threads • Thread • Set of instructions to be executed one at a time, in a specified order • Java thread objects • Object of class Thread • Methods inherited from Thread: • start : method called to spawn a new thread of control; causes VM to call run method • suspend : freeze execution • interrupt : freeze execution and throw exception to thread • stop : forcibly cause thread to halt

  34. Example subclass of Thread
    class PrintMany extends Thread {
      private String msg;
      public PrintMany(String m) { msg = m; }
      public void run() {
        try {
          for (;;) {
            System.out.print(msg + " ");
            sleep(10);
          }
        } catch (InterruptedException e) {
          return;
        }
      }
    }
(inherits start from Thread)
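Usage, as a brief sketch: construct the object and call the inherited start, which spawns a new thread and causes the VM to call run.

    public class PrintManyDemo {
        public static void main(String[] args) {
            new PrintMany("ping").start();  // each start() spawns a thread
            new PrintMany("pong").start();  // the two outputs interleave
        }
    }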

  35. Interaction between threads • Shared variables • Two threads may assign/read the same variable • Programmer responsibility • Avoid race conditions by explicit synchronization!! • Method calls • Two threads may call methods on the same object • Synchronization primitives • Each object has internal lock, inherited from Object • Synchronization primitives based on object locking

  36. Synchronization example • Objects may have synchronized methods • Can be used for mutual exclusion • Two threads may share an object • If one calls a synchronized method, this locks the object • If the other calls a synchronized method on the same object, this thread blocks until the object is unlocked

  37. Synchronized methods • Marked by keyword: public synchronized void commitTransaction(…) {…} • Provides mutual exclusion • At most one synchronized method can be active on a given object • Unsynchronized methods can still be called • Programmer must be careful • Not part of the method signature • a synchronized method is equivalent to an unsynchronized method whose body consists of a synchronized block (see the sketch below) • a subclass may replace a synchronized method with an unsynchronized method
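The equivalence mentioned above, sketched in Java (Account and deposit are hypothetical names): a synchronized method behaves like an unsynchronized method whose body is a single synchronized block on this.

    class Account {
        private int balance = 0;

        // Synchronized method ...
        public synchronized void deposit(int amount) {
            balance += amount;
        }

        // ... equivalent to an unsynchronized method whose body
        // is one synchronized block locking this.
        public void depositEquivalent(int amount) {
            synchronized (this) {
                balance += amount;
            }
        }
    }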

  38. Join, another form of synchronization • Wait for a thread to terminate:
    class Future extends Thread {
      private int result;
      public void run() { result = f(…); }
      public int getResult() { return result; }
    }
    …
    Future t = new Future();
    t.start();         // start new thread
    …
    t.join();
    x = t.getResult(); // wait and get result

  39. Aspects of Java Threads • Portable, since part of the language • Easier to use in basic libraries than C system calls • Example: garbage collector is a separate thread • General difficulty combining serial and concurrent code • Serial to concurrent • Code written for serial execution may not work in a concurrent system • Concurrent to serial • Code with synchronization may be inefficient in serial programs (10-20% unnecessary overhead) • Abstract memory model • Shared variables can be problematic on some implementations

  40. C# Threads • Basic thread operations • Any method can run in its own thread • A thread is created by creating a Thread object • Creating a thread does not start its concurrent execution; it must be requested through the Start method • A thread can be made to wait for another thread to finish with Join • A thread can be suspended with Sleep • A thread can be terminated with Abort

  41. C# Threads • Synchronizing threads • The Interlocked class • The lock statement • The Monitor class • Evaluation • An advance over Java threads, e.g., any method can run in its own thread • Thread termination cleaner than in Java • Synchronization is more sophisticated

  42. Polyphonic C# • An extension of the C# language with new concurrency constructs • Based on the join calculus • A foundational process calculus like the π-calculus but better suited to asynchronous, distributed systems • A single model which works both for • local concurrency (multiple threads on a single machine) • distributed concurrency (asynchronous messaging over LAN or WAN) • It is different • But it’s also simple – if Mort can do any kind of concurrency, he can do this

  43. In one slide: • Objects have both synchronous and asynchronous methods. • Values are passed by ordinary method calls: • If the method is synchronous, the caller blocks until the method returns some result (as usual). • If the method is async, the call completes at once and returns void. • A class defines a collection of chords (synchronization patterns), which define what happens once a particular set of methods have been invoked. One method may appear in several chords. • When pending method calls match a pattern, its body runs. • If there is no match, the invocations are queued up. • If there are several matches, an unspecified pattern is selected. • If a pattern containing only async methods fires, the body runs in a new thread.

  44. Extending C# with chords • Interesting well-formedness conditions: • At most one header can have a return type (i.e. be synchronous). • Inheritance restriction. • “ref” and “out” parameters cannot appear in async headers. • Classes can declare methods using generalized chord-declarations instead of method-declarations:
    chord-declaration ::= method-header [ & method-header ]* body
    method-header     ::= attributes modifiers [return-type | async] name (parms)

  45. A Simple Buffer
    class Buffer {
      String get() & async put(String s) {
        return s;
      }
    }
• Calls to put() return immediately (but are internally queued if there’s no waiting get()). • Calls to get() block until/unless there’s a matching put(). • When there’s a match the body runs, returning the argument of the put() to the caller of get(). • Exactly which pairs of calls are matched up is unspecified.

  46. OCCAM • Program consists of processes and channels • Process is code containing channel operations • Channel is a data object • All synchronization is via channels • Formal foundation based on CSP

  47. Channel Operations in OCCAM • Read data item D from channel C • C ? D • Write data item Q to channel C • C ! Q • If the reader accesses the channel first, it waits for the writer, and then both proceed after the transfer. • If the writer accesses the channel first, it waits for the reader, and both proceed after the transfer. (A Java analogy follows below.)
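Java has no occam-style channels built in, but as a hedged analogy, java.util.concurrent.SynchronousQueue exhibits the same rendezvous behaviour: whichever side arrives first waits for the other.

    import java.util.concurrent.SynchronousQueue;

    public class RendezvousDemo {
        public static void main(String[] args) throws InterruptedException {
            SynchronousQueue<Integer> channel = new SynchronousQueue<>();

            new Thread(() -> {
                try {
                    channel.put(42);   // writer waits here until a reader arrives
                } catch (InterruptedException e) { }
            }).start();

            int d = channel.take();    // reader waits here until a writer arrives
            System.out.println(d);     // prints 42
        }
    }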

  48. Concurrent ML • Threads • New type of entity • Communication • Synchronous channels • Synchronization • Channels • Events • Atomicity • No specific language support

  49. Threads • Thread creation • spawn : (unit -> unit) -> thread_id • Example code:
    CIO.print "begin parent\n";
    spawn (fn () => (CIO.print "child 1\n"));
    spawn (fn () => (CIO.print "child 2\n"));
    CIO.print "end parent\n"
• Result (one possible interleaving): child 1 begin parent child 2 end parent

  50. Channels • Channel creation • channel : unit -> 'a chan • Communication • recv : 'a chan -> 'a • send : ('a chan * 'a) -> unit • Example:
    val ch = channel();
    spawn (fn () => … <A> … send(ch, 0); … <B> …);
    spawn (fn () => … <C> … recv ch; … <D> …);
• Result: <A> and <C> run first in their respective threads; send and recv then synchronize (rendezvous); afterwards <B> and <D> proceed.