Languages and Compilers (SProg og Oversættere)

  1. Languages and Compilers (SProg og Oversættere) Bent Thomsen, Department of Computer Science, Aalborg University. With acknowledgement to John Mitchell, whose slides this lecture is based on.

  2. Concurrency, distributed computing, the Internet • Traditional view: • Let the OS deal with this • => It is not a programming language issue! • End of Lecture • Wait-a-minute … • Maybe “the traditional view” is getting out of date?

  3. Languages with concurrency constructs • Maybe the “traditional view” was always out of date? • Simula • Modula-3 • Occam • Concurrent Pascal • Ada • Linda • CML • Facile • JoCaml • Java • C# • …

  4. Possibilities for Concurrency • Architecture determines the style of programming: • Uniprocessor with I/O channel, I/O processor, or DMA => multiprogramming (multiple-process systems) • Multiple CPUs => parallel programming • Network of uniprocessors => distributed programming

  5. The promise of concurrency • Speed • If a task takes time t on one processor, shouldn’t it take time t/n on n processors? • Availability • If one process is busy, another may be ready to help • Distribution • Processors in different locations can collaborate to solve a problem • Humans do it, so why can’t computers? • Vision and cognition appear to be highly parallel activities

  6. Challenges • Concurrent programs are harder to get right • Folklore: Need an order of magnitude speedup (or more) to be worth the effort • Some problems are inherently sequential • Theory – circuit evaluation is P-complete • Practice – many problems need coordination and communication among sub-problems • Specific issues • Communication – send or receive information • Synchronization – wait for another process to act • Atomicity – do not stop in the middle and leave a mess

  7. Why is concurrent programming hard? • Nondeterminism • Deterministic: two executions on the same input always produce the same output • Nondeterministic: two executions on the same input may produce different outputs • Why does this cause difficulty? • There may be many possible executions of one system • Hard to think of all the possibilities • Hard to test the program, since some executions may occur infrequently (see the sketch below)
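
A minimal sketch (not from the slides) of this nondeterminism, using POSIX threads: two threads increment a shared counter without synchronization, so different runs of the same program may print different totals.

/* Two unsynchronized threads race on a shared counter; the final
 * value varies from run to run. Compile with: cc race.c -lpthread */
#include <pthread.h>
#include <stdio.h>

static int counter = 0;                /* shared, unprotected */

static void *bump(void *arg)
{
    for (int i = 0; i < 100000; i++)
        counter++;                     /* read-modify-write, not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, bump, NULL);
    pthread_create(&t2, NULL, bump, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter); /* often less than 200000 */
    return 0;
}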

  8. Concurrency and Programming Languages • Library view • All (or almost all) concurrency constructs are packaged as library functions/methods • FORTRAN, C, Lisp, CML, Java (except for a few keywords like synchronized) • Language view • Syntactic manifestation in the language • Concurrent Pascal • Occam • Ada • Facile • Erlang • …

  9. Library view in C • System calls: • fork() • wait() • pipe() • write() • read() • Examples follow on the next slides.

  10. Process creation: fork()
NAME
  fork – create a new process
SYNOPSIS
  #include <sys/types.h>
  #include <unistd.h>
  pid_t fork(void);
RETURN VALUE
  success: the parent receives the child's pid; the child receives 0
  failure: -1

  11. fork() system call – example
#include <sys/types.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    printf("[%ld] parent process id: %ld\n",
           (long) getpid(), (long) getppid());
    fork();
    printf("[%ld] parent process id: %ld\n",
           (long) getpid(), (long) getppid());
    return 0;
}

  12. fork() system call – example output
[17619] parent process id: 12729
[17619] parent process id: 12729
[2372] parent process id: 17619

  13. fork() – program structure
#include <sys/types.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

int main(void)
{
    pid_t pid;
    if ((pid = fork()) > 0) {
        /* parent */
    } else if (pid == 0) {
        /* child */
    } else {
        /* fork failed */
    }
    exit(0);
}

  14. wait() system call
NAME
  wait – wait for a child process to terminate
SYNOPSIS
  #include <sys/types.h>
  #include <sys/wait.h>
  pid_t wait(int *stat_loc);
  stat_loc points to an integer that receives the child's exit status (may be NULL)
RETURN VALUE
  success: the pid of the terminated child
  failure: -1, and errno is set

  15. wait() – program structure
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    pid_t childPID;
    if ((childPID = fork()) == 0) {
        /* child */
    } else {
        /* parent */
        wait(NULL);
    }
    exit(0);
}

  16. pipe() system call
NAME
  pipe – create a read-write pipe that may later be used to communicate with a process we’ll fork off
SYNOPSIS
  #include <unistd.h>
  int pipe(int pfd[2]);
PARAMETER
  pfd is an array of 2 integers that will receive the two file descriptors used to access the pipe
RETURN VALUE
  0 – success; -1 – error

  17. pipe() – structure
/* first, define an array to store the two file descriptors */
int pipes[2];

/* now, create the pipe */
int rc = pipe(pipes);
if (rc == -1) {
    /* pipe() failed */
    perror("pipe");
    exit(1);
}

If the call to pipe() succeeded, a pipe is created; pipes[0] will contain the number of its read file descriptor, and pipes[1] the number of its write file descriptor.

  18. write() system call
NAME
  write – write data to a file or other object identified by a file descriptor
SYNOPSIS
  #include <unistd.h>
  ssize_t write(int fildes, const void *buf, size_t nbyte);
PARAMETERS
  fildes is the file descriptor; buf is the base address of the memory area the data is copied from; nbyte is the amount of data to copy
RETURN VALUE
  the actual amount of data written; if this differs from nbyte, something has gone wrong (an error or a partial write)

  19. read() system call
NAME
  read – read data from a file or other object identified by a file descriptor
SYNOPSIS
  #include <unistd.h>
  ssize_t read(int fildes, void *buf, size_t nbyte);
ARGUMENTS
  fildes is the file descriptor; buf is the base address of the memory area into which the data is read; nbyte is the maximum amount of data to read
RETURN VALUE
  the actual amount of data read; the file offset is advanced by the amount read. A sketch combining the four calls follows below.
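
To tie fork(), pipe(), write() and read() together, here is a hedged sketch (not from the slides): the child writes a message into the pipe, and the parent reads it, prints it, and reaps the child.

/* Parent/child communication over a pipe: the child writes a
 * message; the parent reads it and then waits for the child. */
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    int pfd[2];
    char buf[64];

    if (pipe(pfd) == -1) { perror("pipe"); exit(1); }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); exit(1); }

    if (pid == 0) {                        /* child: writing end */
        close(pfd[0]);                     /* not reading */
        const char *msg = "hello from child";
        write(pfd[1], msg, strlen(msg) + 1);
        close(pfd[1]);
        exit(0);
    } else {                               /* parent: reading end */
        close(pfd[1]);                     /* not writing */
        if (read(pfd[0], buf, sizeof buf) > 0)
            printf("parent read: %s\n", buf);
        close(pfd[0]);
        wait(NULL);                        /* reap the child */
    }
    return 0;
}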

  20. Solaris 2 Synchronization • Implements a variety of locks to support multitasking, multithreading (including real-time threads), and multiprocessing. • Uses adaptive mutexes for efficiency when protecting data from short code segments. • Uses condition variables and readers-writers locks when longer sections of code need access to data. • Uses turnstiles to order the list of threads waiting to acquire either an adaptive mutex or reader-writer lock.

  21. Windows 2000 Synchronization • Uses interrupt masks to protect access to global resources on uniprocessor systems. • Uses spinlocks on multiprocessor systems. • Also provides dispatcher objects, which may act as either mutexes or semaphores. • Dispatcher objects may also provide events. An event acts much like a condition variable.

  22. Basic question • Maybe the library approach is not such a good idea? • How can programming languages make concurrent and distributed programming easier?

  23. Language support for concurrency • Make invariants and intentions more apparent (part of the interface) • Good software engineering • Allows the compiler much more freedom to choose different implementations • Also helps other tools

  24. What could languages provide? • Abstract model of system • abstract machine => abstract system • Example high-level constructs • Process as the value of an expression • Pass processes to functions • Create processes as the result of a function call • Communication abstractions • Synchronous communication • Buffered asynchronous channels that preserve message order • Mutual exclusion, atomicity primitives • Most concurrent languages provide some form of locking • Atomicity is more complicated, less commonly provided

  25. Basic issue: conflict between processes • Critical section • Two processes may access a shared resource • Inconsistent behavior if their actions are interleaved • Allow only one process in the critical section at a time • Deadlock • A process may hold some locks while awaiting others • Deadlock occurs when no process can proceed (see the sketch below)
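
The deadlock case can be made concrete with a hedged sketch (hypothetical, not from the slides): each thread takes one lock and then waits forever for the lock the other thread already holds.

/* Classic lock-ordering deadlock: t1 holds a and wants b,
 * t2 holds b and wants a, so neither can proceed. */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

static void *t1(void *arg)
{
    pthread_mutex_lock(&a);
    sleep(1);                     /* give t2 time to take b */
    pthread_mutex_lock(&b);       /* blocks forever */
    pthread_mutex_unlock(&b);
    pthread_mutex_unlock(&a);
    return NULL;
}

static void *t2(void *arg)
{
    pthread_mutex_lock(&b);
    sleep(1);                     /* give t1 time to take a */
    pthread_mutex_lock(&a);       /* blocks forever */
    pthread_mutex_unlock(&a);
    pthread_mutex_unlock(&b);
    return NULL;
}

int main(void)
{
    pthread_t x, y;
    pthread_create(&x, NULL, t1, NULL);
    pthread_create(&y, NULL, t2, NULL);
    pthread_join(x, NULL);        /* never returns: deadlock */
    return 0;
}

The standard cure is to make every thread acquire locks in the same global order.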

  26. Cobegin/coend • Limited concurrency primitive: execute sequential blocks in parallel • Example:
x := 0;
cobegin
  begin x := 1; x := x+1 end;
  begin x := 2; x := x+1 end;
coend;
print(x);
• Atomicity is at the level of an assignment statement, so depending on the interleaving, print(x) may output 2, 3, or 4. A thread-based simulation follows below.
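
C has no cobegin/coend, but the construct can be simulated with thread creation and join; a hedged sketch of the example above (the block names are made up):

/* cobegin/coend simulated with POSIX threads: each begin...end
 * block becomes a thread; coend becomes a pair of joins. */
#include <pthread.h>
#include <stdio.h>

static int x = 0;

static void *block1(void *arg) { x = 1; x = x + 1; return NULL; }
static void *block2(void *arg) { x = 2; x = x + 1; return NULL; }

int main(void)
{
    pthread_t t1, t2;
    x = 0;
    pthread_create(&t1, NULL, block1, NULL);   /* cobegin */
    pthread_create(&t2, NULL, block2, NULL);
    pthread_join(t1, NULL);                    /* coend */
    pthread_join(t2, NULL);
    printf("%d\n", x);   /* 2, 3 or 4, depending on the interleaving */
    return 0;
}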

  27. Mutual exclusion • Sample action:
procedure sign_up(person)
begin
  number := number + 1;
  list[number] := person;
end;
• Problem with parallel execution:
cobegin
  sign_up(fred);
  sign_up(bill);
end;
• [Figure: the shared list; if the two calls interleave between the increment and the store, fred and bill can end up in the same slot, so one entry is lost.]

  28. Locks and Waiting
<initialize concurrency control>
cobegin
  begin
    <wait>
    sign_up(fred);  // critical section
    <signal>
  end;
  begin
    <wait>
    sign_up(bill);  // critical section
    <signal>
  end;
coend;
• Need atomic operations to implement <wait>. A mutex-based sketch follows below.
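
A hedged sketch of the pattern above using a POSIX mutex (the list layout and helper names are hypothetical): the lock plays the role of <wait>/<signal> around the critical section.

/* sign_up made safe with a mutex: the two-step update is now
 * atomic with respect to the other thread. */
#include <pthread.h>
#include <stdio.h>

#define MAX 100

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static const char *list[MAX];
static int number = 0;

static void sign_up(const char *person)
{
    pthread_mutex_lock(&lock);      /* <wait> */
    number = number + 1;            /* critical section */
    list[number] = person;
    pthread_mutex_unlock(&lock);    /* <signal> */
}

static void *task(void *name)
{
    sign_up(name);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task, "fred");   /* cobegin */
    pthread_create(&t2, NULL, task, "bill");
    pthread_join(t1, NULL);                    /* coend */
    pthread_join(t2, NULL);
    printf("%d signed up: %s, %s\n", number, list[1], list[2]);
    return 0;
}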

  29. Mutual exclusion primitives • Atomic test-and-set • Instruction atomically reads and writes some location • Common hardware instruction • Combine with a busy-waiting loop to implement a mutex (see the sketch below) • Semaphore • Avoids the busy-waiting loop • Keeps a queue of waiting processes • Scheduler has access to the semaphore; the process sleeps • Disable interrupts during semaphore operations • OK since the operations are short
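
A hedged sketch of the busy-waiting mutex, using C11's atomic_flag as the test-and-set location (the function names are made up):

/* Spin lock from atomic test-and-set: test_and_set atomically
 * writes 'set' and returns the previous value. */
#include <stdatomic.h>
#include <stdio.h>

static atomic_flag flag = ATOMIC_FLAG_INIT;

static void spin_lock(void)
{
    while (atomic_flag_test_and_set(&flag))
        ;                 /* busy wait until the old value was clear */
}

static void spin_unlock(void)
{
    atomic_flag_clear(&flag);
}

int main(void)
{
    spin_lock();
    puts("in critical section");
    spin_unlock();
    return 0;
}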

  30. Monitor (Brinch-Hansen, Dahl, Dijkstra, Hoare) • Synchronized access to private data. Combines: • private data • set of procedures (methods) • synchronization policy • At most one process may execute a monitor procedure at a time; this process is said to be in the monitor. • If one process is in the monitor, any other process that calls a monitor procedure will be delayed. • Modern terminology: synchronized object. A sketch follows below.
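
A hedged sketch of a monitor in C (a hypothetical one-slot buffer, not from the slides): one mutex enforces "at most one process in the monitor", and condition variables delay callers until the state lets them proceed.

/* One-slot buffer as a monitor: the mutex guards the private data;
 * condition variables delay put/get until the slot is free/full. */
#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;
static int slot;          /* private data */
static int full = 0;

void put(int v)                            /* monitor procedure */
{
    pthread_mutex_lock(&m);                /* enter the monitor */
    while (full)
        pthread_cond_wait(&not_full, &m);  /* releases m while waiting */
    slot = v;
    full = 1;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&m);              /* leave the monitor */
}

int get(void)                              /* monitor procedure */
{
    pthread_mutex_lock(&m);
    while (!full)
        pthread_cond_wait(&not_empty, &m);
    int v = slot;
    full = 0;
    pthread_cond_signal(&not_full);
    pthread_mutex_unlock(&m);
    return v;
}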

  31. OCCAM • Program consists of processes and channels • Process is code containing channel operations • Channel is a data object • All synchronization is via channels • Formal foundation based on CSP

  32. Channel Operations in OCCAM • Read data item D from channel C • C ? D • Write data item Q to channel C • C ! Q • If the reader accesses the channel first, it waits for the writer, and then both proceed after the transfer. • If the writer accesses the channel first, it waits for the reader, and both proceed after the transfer. A C approximation follows below.
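
occam's synchronous channels can be approximated in C; a hedged sketch (the type and function names are made up) in which whichever side arrives first blocks until the other arrives, and both continue after the transfer:

/* Synchronous (rendezvous) channel: chan_send blocks until the
 * value has been taken; chan_recv blocks until a value is offered.
 * Initialize with:
 *   chan_t c = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 }; */
#include <pthread.h>

typedef struct {
    pthread_mutex_t m;
    pthread_cond_t  cv;
    int data;
    int offered;          /* nonzero while a value awaits a reader */
} chan_t;

void chan_send(chan_t *c, int v)      /* occam: C ! v */
{
    pthread_mutex_lock(&c->m);
    while (c->offered)                /* one transfer at a time */
        pthread_cond_wait(&c->cv, &c->m);
    c->data = v;
    c->offered = 1;
    pthread_cond_broadcast(&c->cv);   /* wake a waiting reader */
    while (c->offered)                /* wait until the value is taken */
        pthread_cond_wait(&c->cv, &c->m);
    pthread_mutex_unlock(&c->m);
}

int chan_recv(chan_t *c)              /* occam: C ? v */
{
    pthread_mutex_lock(&c->m);
    while (!c->offered)               /* wait for a writer */
        pthread_cond_wait(&c->cv, &c->m);
    int v = c->data;
    c->offered = 0;
    pthread_cond_broadcast(&c->cv);   /* release the writer */
    pthread_mutex_unlock(&c->m);
    return v;
}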

  33. Tasking in Ada • Declare a task type • The specification gives the entries:
task type T is
   entry Put (data : in Integer);
   entry Get (result : out Integer);
end T;
• The entries are used to access the task.

  34. Declaring Task Body • Task body gives the actual code of the task:
task body T is
   x : Integer;  -- local per-task declaration
begin
   …
   accept Put (data : in Integer) do
      …
   end Put;
   …
end T;

  35. Creating an Instance of a Task • Declare a single task • X : T; • or an array of tasks • P : array (1 .. 50) of T; • or a dynamically allocated task:
type T_Ref is access T;
P : T_Ref;
…
P := new T;

  36. Task execution • Each task executes independently, until • an accept call • wait for someone to call entry, then proceed with rendezvous code, then both tasks go on their way • an entry call • wait for addressed task to reach corresponding accept statement, then proceed with rendezvous, then both tasks go on their way.

  37. More on the Rendezvous • During the Rendezvous, only the called task executes, and data can be safely exchanged via the entry parameters • If accept does a simple assignment, we have the equivalent of a simple CSP channel operation, but there is no restriction on what can be done within a rendezvous

  38. Termination of Tasks • A task terminates when it reaches the end of the begin-end code of its body. • Tasks may either be very static (created at the start of execution, never terminating) • Or very dynamic, e.g. creating a new task for each new radar trace in a radar system.

  39. The Delay Statement • Delay statements temporarily pause a task • delay xyz; • where xyz is an expression of type Duration, causes execution of the task to be delayed for (at least) the given amount of time • delay until tim; • where tim is an expression of type Time, causes execution of the task to be delayed until (at the earliest) the given time

  40. Selective Accept • The select statement allows a choice of actions:
select
   accept entry1 (…) do … end;
or
   when bla => accept entry2 (…);
or
   delay …;
   …ddd…
end select;
• Take whichever open entry is called first, or, if none arrives by the end of the delay, execute the …ddd… statements.

  41. Timed Entry Call • A timed entry call allows a timeout to be set:
select
   entry-call-statement
or
   delay xxx;
   …
end select;
• We try to make the entry call, but if the task won't accept it within xxx time, the statements after the delay are executed instead.

  42. Conditional Entry Call • Make a call only if it will be accepted:
select
   entry-call
   …
else
   statements
end select;
• If the entry call is accepted immediately, fine; otherwise execute the else statements.

  43. Task Abort • Unconditionally terminate a task • abort taskname; • task is immediately terminated • (it is allowed to do finalization actions) • but whatever it was doing remains incomplete • code that can be aborted must be careful to leave things in a coherent state if that is important!

  44. Java Concurrency • Threads • Create a process by creating a thread object • Communication • shared variables • method calls • Mutual exclusion and synchronization • Every object has a lock (inherited from class Object) • synchronized methods and blocks • Synchronization operations (inherited from class Object) • wait : pause current thread until another thread calls notify • notify : wake up one waiting thread (notifyAll wakes all of them)

  45. Java Threads • Thread • Set of instructions to be executed one at a time, in a specified order • Java thread objects • Object of class Thread • Methods inherited from Thread: • start : method called to spawn a new thread of control; causes VM to call run method • suspend : freeze execution • interrupt : freeze execution and throw exception to thread • stop : forcibly cause thread to halt

  46. Example subclass of Thread
class PrintMany extends Thread {
    private String msg;
    public PrintMany(String m) { msg = m; }
    public void run() {
        try {
            for (;;) {
                System.out.print(msg + " ");
                sleep(10);
            }
        } catch (InterruptedException e) {
            return;
        }
    }
}
// inherits start() from Thread

  47. Interaction between threads • Shared variables • Two threads may assign/read the same variable • Programmer responsibility • Avoid race conditions by explicit synchronization!! • Method calls • Two threads may call methods on the same object • Synchronization primitives • Each object has internal lock, inherited from Object • Synchronization primitives based on object locking

  48. Synchronization example • Objects may have synchronized methods • Can be used for mutual exclusion • Two threads may share an object. • If one calls a synchronized method, this locks object. • If the other calls a synchronized method on same object, this thread blocks until object is unlocked.

  49. Synchronized methods • Marked by keyword: public synchronized void commitTransaction(…) {…} • Provides mutual exclusion: at most one synchronized method can be active on a given object • Unsynchronized methods can still be called • Programmer must be careful • Not part of the method signature • a synchronized method is equivalent to an unsynchronized method whose body is one synchronized block • a subclass may replace a synchronized method with an unsynchronized one

  50. Join, another form of synchronization • Wait for a thread to terminate:
class Future extends Thread {
    private int result;
    public void run() { result = f(…); }
    public int getResult() { return result; }
}
…
Future t = new Future();
t.start();             // start new thread
…
t.join();              // wait for it to terminate
x = t.getResult();     // get the result
