
Concurrency



  1. Concurrency What is it?

  2. What is concurrency? • Concurrency is not a new idea… • Concurrency was first developed by the ancient Babylonians. • They thought about stuff, walked, and chewed gum … all at the same time. • Sometimes they even updated their Facebook status, posted a cat video on YouTube, and played Space Invaders all while singing along to the song being played on iTunes. • The above are actually good examples of how an operating system is concurrent.

  3. No, Seriously … What is concurrency? More generally … • Concurrency is defined as a simultaneous occurrence. • In Computer Science terms, a program is concurrent if it may have more than one thread of control. • Put another way, a system is said to be concurrent if two or more tasks may be underway, at an unpredictable point in their execution, at the same time. • Much of the theoretical groundwork for concurrency in computer science was laid in the 1960s. (Algol 68, for example, includes concurrent programming features.)

  4. Motivations There are 3 important motivations for concurrency… • To capture the logical structure of a program. Many programs need to keep track of more than one largely independent task. (Servers, graphical applications) • To exploit extra processors, for speed. Multiple processors used to be found primarily in servers and supercomputers. Now multicore processors are ubiquitous, and code needs to be written or rewritten to take advantage of them. • To cope with separate physical devices. Applications that run across the Internet or a LAN are inherently concurrent. Likewise, many embedded applications have a separate processor for each of several devices (e.g., automobile control systems).

  5. What else? • A concurrent system is said to be parallel if more than one task can be physically active at once; this requires more than one processor. • A parallel system is distributed if its processors are associated with people or devices that are physically separated from one another in the real world. • Given these definitions, “concurrent” applies to all 3 of the motivations on the previous slide, “parallel” applies to the 2nd and 3rd, and “distributed” applies only to the 3rd.

  6. So aren’t concurrent and parallel really the same thing? • Semantically, yes: there is no difference between true parallelism and the “quasi-parallelism” of a system that switches between tasks at unpredictable times. • The difference comes with implementation and performance. • In terms of performance the difference is obvious: one processor cannot complete the same amount of work per unit of time as two processors working in parallel. • In terms of implementation, the complexity rises as you go up the layers of software design. Parallelism is comparatively easy to exploit at the level of circuits and gates, where signals can propagate down thousands of connections at once. It’s harder to determine what work should be done by which task, and how tasks should coordinate, as you approach the higher levels of implementation.

  7. The introduction of the multicore processor • Significance: • For a long time the focus of multithreaded programming was to find more and better ways to exploit instruction-level parallelism. A limit to this was reached shortly after the turn of the century. • At the next level, vector parallelism is available in programs that perform operations repeatedly on every element of a very large data set. • Given the rise of the multicore processor, a coarser-grain thread-level parallelism is required. Rather than being a hidden implementation detail, parallelism must now be explicitly written into the high-level program structure.

  8. Using concurrency Why do we need it?

  9. It’s hip to be a square! • The way to start concurrent tasks in Java is by starting a new Thread. • Consider a graphics program that, upon the click of a button, creates a square at a random Y location on the screen and tells it to move forward. • If you wanted new squares to be created and move forward each time you clicked the button, what considerations might you need to make in order to achieve this without using concurrency? • How would this be simplified if threads were introduced? • The hipToBeSquare example shows how only a few lines of code are needed to perform this task using threads.
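
  A minimal, runnable sketch of the idea (the actual hipToBeSquare source isn’t included in this transcript, so the Square class here and its console “motion” are stand-ins): each simulated button click starts a new thread, and each square then moves itself forward independently while the rest of the program continues.

      public class HipToBeSquare {
          static class Square implements Runnable {
              private final int id;
              private int x = 0;
              Square(int id) { this.id = id; }
              public void run() {
                  while (x < 5) {                         // move forward five steps
                      x++;
                      System.out.println("square " + id + " at x=" + x);
                      try { Thread.sleep(100); }          // pace the animation
                      catch (InterruptedException e) { return; }
                  }
              }
          }

          public static void main(String[] args) {
              for (int click = 1; click <= 3; click++) {  // simulate three button clicks
                  new Thread(new Square(click)).start();  // each square animates itself
              }
              // main() returns right away; the squares keep moving on their own threads.
          }
      }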

  10. What’s the difference? • Consider a graphics program that creates a ball at a random X location at the top of the screen and then tells it to start dropping. At a random time interval the program should similarly create a new ball at a random X location. Every time a subsequent ball is added, it should join the rest of the balls in dropping. • How would this be implemented without concurrency? • How would it be implemented with concurrency? • Are there any advantages/disadvantages to using one method over the other?

  11. Work and play • The workPen example shows how, in our implementation, we need to account for the fact that more and more items will be added to the canvas. • The playPen example trivializes this by using threads. A ball simply needs to be created and told to run. After this point, if interaction between objects can be neglected, the ball can be forgotten about and the program can continue. • Do you notice anything strange about the way my balls drop in the playPen example as compared to workPen?

  12. So what’s the benefit? • Using concurrency can trivialize many tasks that would otherwise be an enormous headache, or even impossible. • The hipToBeSquare example shows how few lines of code are needed for a simple task that would otherwise require further thought and planning. • The playPen and workPen examples show how making use of threads can dramatically speed up the execution of certain tasks. • There must be a downside to threads if they make things so easy …

  13. Considerations Race Conditions, Deadlocks, Synchronization and More!

  14. NO! I WAS HERE FIRST! • There are special considerations when threads share data. • One such consideration is a race condition, in which the outcome of a program depends on which thread finishes, or reaches a certain part of its code, first. • Consider the following real-world example, in which a refrigerator only has enough room for one of each item: • Check to see if there’s milk in the fridge. • If no milk in fridge, go to the store. • Buy milk. • Return home. • Place milk in the fridge.

  15. Ever heard of a phone call?! What happens when two roommates run their milk code at the same time?

      Person A:
          getThirsty();
          if (!milkInFridge) {
              goToStore();
              buy(milk);
              returnHome();
              putMilkInFridge();
          }

      Person B:
          …
          singInTheShower();
          getThirsty();
          if (!milkInFridge) {
              goToStore();
              buy(milk);
              returnHome();
              putMilkInFridge();
          }
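
  The same pseudocode turned into a runnable Java sketch (the helper names are made up for illustration). Both threads can pass the check before either one restocks, so the fridge can end up with two milks:

      public class MilkRace {
          static int milkInFridge = 0;          // shared state, deliberately unsynchronized

          static void buyMilkIfNeeded(String who) {
              if (milkInFridge == 0) {          // check...
                  pause(100);                   // ...trip to the store (a context switch here is the bug)
                  milkInFridge++;               // ...act on a now-stale check
                  System.out.println(who + " bought milk; fridge now has " + milkInFridge);
              }
          }

          public static void main(String[] args) {
              new Thread(() -> buyMilkIfNeeded("Person A")).start();
              new Thread(() -> buyMilkIfNeeded("Person B")).start();
          }

          static void pause(long ms) {
              try { Thread.sleep(ms); } catch (InterruptedException e) { }
          }
      }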

  16. Don’t cry over spilled milk • If Person A runs their code and then Person B runs theirs, there’s no problem. When Person B checks to see if there is milk in the fridge, he/she will find that Person A has already stocked the milk. • Even when executing concurrently, things might work out OK. There is no guarantee, however: the operating system might decide that the Person A thread needs a break while at the store, and allow Person B to check the fridge before Person A gets back. • We noticed this in the playPen example when the balls didn’t drop at the same rate, even though the code said they should have. • Synchronization is one solution for this.

  17. Everyone is a sports car • podRacing shows how a few minor differences in the way we handle threads can lead to very different results. • A synchronized method allows only one thread to execute it at a time. • The lock is tied to the particular object on which the method is called: if multiple objects have synchronized methods, one thread can be inside the synchronized method of each of those objects at the same time. • You can also create a synchronized block, which locks a chosen object so that only one thread at a time can execute the code inside the block.
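
  A sketch of both forms (the podRacing source isn’t in the transcript, so this generic counter stands in for it):

      public class Counter {
          private int count = 0;
          private final Object lock = new Object();

          // Synchronized method: locks this Counter instance, so only one
          // thread at a time can run it on the same object.
          public synchronized void increment() {
              count++;
          }

          // Synchronized block: locks only the named object, and only for
          // the critical section rather than the whole method.
          public void incrementViaBlock() {
              synchronized (lock) {
                  count++;
              }
          }

          public synchronized int get() { return count; }
      }

  Without the synchronization, two threads running count++ at the same time can interleave its read-modify-write steps and lose an update, just like the roommates and the milk.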

  18. Deadlocks • A deadlock is a situation in which two or more competing actions are each waiting for the other to finish. • “In computer science terms, a deadly embrace is a deadlock involving exactly two competing actions.” • There are four ways of handling deadlocks, as discussed on the subsequent slides…

  19. Ignore deadlocks In this approach, the program simply ignores deadlocks altogether on the assumption that they will never happen, or will happen only rarely. This is an application of the Ostrich algorithm. This approach is used when the amount of time between occurrences of deadlock is large and the repercussions of a deadlock are within tolerable limits.

  20. Deadlock detection • With deadlock detection, deadlocks are allowed to occur. • When a deadlock is detected, one of the following can be applied: • Process termination: one or more (or all) of the processes involved in the deadlock may be terminated. When all of the processes involved in the deadlock are terminated, the cost in lost data and computation is high; however, deadlock elimination is guaranteed. Alternatively, processes can be terminated one at a time until the deadlock is resolved. In this approach, the length of time to resolve the deadlock can increase dramatically, as the system will need to check for deadlock after every killed process. • Resource preemption: resources that are allocated to some processes may be preempted and allocated to other processes until the deadlock is broken. Factors that need to be taken into consideration when using this method include which resources to take, and which processes to take them from.
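
  The slides don’t name a specific detector, but as one concrete option, the JDK’s management API can find threads deadlocked on monitors. A minimal watchdog sketch:

      import java.lang.management.ManagementFactory;
      import java.lang.management.ThreadInfo;
      import java.lang.management.ThreadMXBean;

      public class DeadlockWatchdog {
          public static void main(String[] args) throws InterruptedException {
              ThreadMXBean mx = ManagementFactory.getThreadMXBean();
              while (true) {
                  long[] ids = mx.findDeadlockedThreads();  // null when no deadlock exists
                  if (ids != null) {
                      for (ThreadInfo info : mx.getThreadInfo(ids)) {
                          System.err.println("Deadlocked: " + info.getThreadName()
                                  + " waiting on " + info.getLockName());
                      }
                      // A recovery policy would kick in here: terminate a victim
                      // thread/process or preempt one of the contested resources.
                  }
                  Thread.sleep(1000);                       // poll once per second
              }
          }
      }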

  21. The Coffman conditions • The Coffman conditions are a set of four conditions that must all hold for a deadlock to occur. http://people.cs.umass.edu/~mcorner/courses/691J/papers/TS/coffman_deadlocks/coffman_deadlocks.pdf • Mutual exclusion: tasks claim exclusive control of the resources they require. • Hold and wait: tasks hold resources already allocated to them while waiting for additional resources. • No preemption: resources cannot be forcibly removed from the tasks holding them until the resources are used to completion. • Circular wait: a circular chain exists, such that each task holds one or more resources that are being requested by the next task in the chain. • Deadlock prevention works by ensuring that at least one of these conditions cannot hold.

  22. Dining Philosophers The Dining Philosophers problem involves 5 philosophers, each with a plate of spaghetti, and a fork placed between each pair of adjacent philosophers. A philosopher alternates between thinking and eating; however, a philosopher may not eat unless he holds both of the forks beside him. After a philosopher has acquired both forks, he eats for a period of time, sets down the right fork, sets down the left fork, and then continues thinking. After this he repeats the process. The problem … comes when every philosopher picks up one fork at the same time, and each is then waiting on the philosopher next to him to release the 2nd fork. At this point each philosopher will wait in a state of deadlock indefinitely.
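
  A sketch of the deadlock-prone version in Java (the class and helper names here are made up): every philosopher takes the left fork first, so if all five grab at once, each holds one fork and waits forever for the other. Note how it exhibits all four Coffman conditions.

      public class DiningPhilosophers {
          public static void main(String[] args) {
              int n = 5;
              Object[] forks = new Object[n];
              for (int i = 0; i < n; i++) forks[i] = new Object();

              for (int i = 0; i < n; i++) {
                  final Object left = forks[i];
                  final Object right = forks[(i + 1) % n];
                  new Thread(() -> {
                      while (true) {
                          synchronized (left) {       // pick up the left fork
                              synchronized (right) {  // then wait for the right fork
                                  eat();
                              }
                          }                           // both forks released; think
                      }
                  }, "philosopher-" + i).start();
              }
          }
          static void eat() { /* slurp spaghetti for a while */ }
      }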

  23. Deadlock prevention • Mutual exclusion: preventing the mutual exclusion condition means that no process can have exclusive access to a resource. One of the ways this is achieved is by having spooled (Simultaneous Peripheral Operations On-Line) resources. A common example of this is the print spooler. Printers are usually only capable of printing one thing at a time, and a job usually takes a few seconds or longer. Spooling allows a process to drop off its print job and then continue processing. • Hold and wait: preventing the hold-and-wait condition can be achieved by forcing a process to request all of the resources it will need at once, rather than sequentially. This is often difficult or impossible to achieve, and most often it is simply too inefficient. • No preemption: it can be difficult to prevent this condition, as resources must be allocated to a process for at least some amount of time. Also, whenever a resource is preempted, this usually requires a rollback of the process, meaning increased overhead. Algorithms that prevent this condition (i.e., allow preemption) are said to be non-blocking (lock-free and wait-free) algorithms or optimistic concurrency control algorithms. • Circular wait: this condition can be prevented by disabling interrupts as a process enters a critical section, or by developing a resource-hierarchy solution in which all tasks request resources in a predetermined order (see the sketch below).
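
  Applied to the philosopher sketch above, the resource-hierarchy fix replaces left-then-right locking with lowest-numbered-fork-first locking, which makes a cycle of waiters impossible:

      // inside the for loop of the DiningPhilosophers sketch
      int a = Math.min(i, (i + 1) % n);            // lower-numbered fork
      int b = Math.max(i, (i + 1) % n);            // higher-numbered fork
      final Object first = forks[a], second = forks[b];
      new Thread(() -> {
          while (true) {
              synchronized (first) {               // always lock the lower number first...
                  synchronized (second) {          // ...so a circular wait cannot form
                      eat();
                  }
              }
          }
      }, "philosopher-" + i).start();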

  24. Deadlock avoidance • Deadlock can be avoided if information, such as what resources a process will request while it’s active, is available prior to the allocation of those resources. • With this information, the system can determine whether granting a request would put it into an unsafe state. (An unsafe state is simply a state in which deadlock can occur.) • One algorithm used for deadlock avoidance is the Banker’s algorithm, in which each process’s resource usage limit is known ahead of time. This information is often unavailable, in which case deadlock avoidance cannot be achieved. • Two other options are wait/die and wound/wait. The action taken in each is determined by process age. In wait/die, if an older process requires a resource held by a newer process, the older process waits; if it’s reversed, the newer process simply dies. In wound/wait, if an older process requires a resource held by a newer process, the newer process dies; if it’s reversed, the newer process waits.

  25. Livelock • Livelock is a special situation, similar to deadlock, in which processes constantly change state with respect to one another but never get any actual work done. • An example of this could come from the dining philosophers. If there were only two philosophers, both would immediately pick up the fork to their left. Seeing that his partner requires the fork that he holds, each simultaneously gives his fork to the other so that the other can eat. This results in each philosopher constantly giving away his fork and receiving the fork that the other had, without either ever eating.

  26. Locks, mutexes and semaphores • In general, a lock is something that must be acquired before a thread can gain access to a locked resource. • Most locks are advisory, although some locks are mandatory and will throw an exception if there is an attempt to access the locked resource without first acquiring the lock. • A mutex (mutual exclusion) lock is a singular lock for a shared resource: only one access to the resource is allowed at any given time. It is usually used when a thread must complete a critical part of its execution. • The idea is that a process may want to complete some critical portion of its work without fear of interruption (such as when writing to a file). This increases the reliability of the code and the stability of the resource.
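
  A minimal mutex sketch (the slides don’t name a class; java.util.concurrent.locks.ReentrantLock is Java’s standard explicit lock, and the LogFile class here is hypothetical):

      import java.util.concurrent.locks.ReentrantLock;

      public class LogFile {
          private final ReentrantLock lock = new ReentrantLock();

          public void write(String line) {
              lock.lock();               // only one thread gets past this point at a time
              try {
                  // critical section: write the line without interleaving
                  System.out.println(line);
              } finally {
                  lock.unlock();         // always release, even if the write throws
              }
          }
      }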

  27. Semaphores • A semaphore is essentially a limited number of accesses to an available resource. • Think of it like a bouncer at a club. The bouncer can only let so many people in at a time, and if there are more people, a line forms. • As people leave the club, those waiting in line are allowed in. • The Semaphore class takes as a parameter the total number of entries allowed, plus an optional parameter for fairness. • Semaphores have two methods, acquire and release, which decrement and increment the counter of available entries respectively. If acquire is called and there are no more entries available, the caller must wait until a person leaves the club, or go home. The fairness setting determines whether or not the first person to call acquire is guaranteed to be the first one let in when a person leaves.
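
  The bouncer analogy in code, using java.util.concurrent.Semaphore (the Club class and the numbers are made up for illustration):

      import java.util.concurrent.Semaphore;

      public class Club {
          private final Semaphore capacity = new Semaphore(3, true); // 3 spots, fair (FIFO) line

          public void visit(String name) throws InterruptedException {
              capacity.acquire();          // wait in line until a spot opens up
              try {
                  System.out.println(name + " is inside");
                  Thread.sleep(500);       // party for a while
              } finally {
                  capacity.release();      // leave; the next guest in line gets in
              }
          }

          public static void main(String[] args) {
              Club club = new Club();
              for (int i = 1; i <= 10; i++) {
                  final String name = "guest-" + i;
                  new Thread(() -> {
                      try { club.visit(name); }
                      catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                  }).start();
              }
          }
      }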

  28. Barriers • A barrier is a stopping point for a group of threads. • Once a thread in the group hits the barrier, it must wait until all of the other threads have reached that point before it can continue execution. • The modified podRacing example shows this by causing all of the racers to pause at the halfway point, allowing the rest of the racers to catch up. Once all of the racers have gotten there, the race can continue.
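
  A sketch of that halfway-point pause using java.util.concurrent.CyclicBarrier (the modified podRacing source isn’t in the transcript, so this PodRace stand-in simulates it):

      import java.util.concurrent.BrokenBarrierException;
      import java.util.concurrent.CyclicBarrier;

      public class PodRace {
          public static void main(String[] args) {
              int racers = 4;
              CyclicBarrier halfway = new CyclicBarrier(racers,
                      () -> System.out.println("all racers at halfway; go!"));

              for (int i = 1; i <= racers; i++) {
                  final int lane = i;
                  new Thread(() -> {
                      try {
                          race(lane);          // first half, at each racer's own pace
                          halfway.await();     // wait here until every racer arrives
                          race(lane);          // second half
                      } catch (InterruptedException | BrokenBarrierException e) {
                          Thread.currentThread().interrupt();
                      }
                  }).start();
              }
          }

          static void race(int lane) throws InterruptedException {
              Thread.sleep((long) (Math.random() * 500));  // simulate uneven racing speed
              System.out.println("racer " + lane + " finished a half");
          }
      }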

  29. And by special request! I wonder what it could be ….

  30. Asynchronous Methods!! Because everyone needs some a-synchronicity in their life.

  31. Asynchronous Methods • Something something something dark side. • Something something something asynchronous complete. • The end.

  32. Asynchronous methods • An asynchronous method improves application performance by removing the bottlenecks of synchronous methods. • When a synchronous method is called, the caller must wait for it to start, execute and finish before continuing with its own work. • When an asynchronous method is called, the calling application need not wait for the final result and can continue other work while the asynchronous method does its thing. • Functionally, this is similar to creating and launching another thread, only with all the benefits of using methods. • http://msdn.microsoft.com/en-us/library/vstudio/hh191443.aspx
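
  The linked page describes C#’s async/await; the closest standard analog in Java (this presentation’s language) is java.util.concurrent.CompletableFuture, sketched here with a made-up slowDownload stand-in for the real work:

      import java.util.concurrent.CompletableFuture;

      public class AsyncDemo {
          public static void main(String[] args) {
              CompletableFuture<String> page = CompletableFuture
                      .supplyAsync(AsyncDemo::slowDownload)        // runs on a pool thread
                      .thenApply(body -> "got " + body.length() + " chars");

              System.out.println("caller keeps working...");       // not blocked by the download
              System.out.println(page.join());                     // collect the result when needed
          }

          static String slowDownload() {
              try { Thread.sleep(1000); } catch (InterruptedException e) { }
              return "<html>...</html>";
          }
      }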

  33. Conclusions Got Deadlock? • Concurrency is an essential part of modern software engineering. • Great care must be taken to ensure that: • Deadlocks do not occur, or are handled appropriately. • Resources are well managed and used efficiently. • Thread use is appropriate, and threads are not conjured up wildly. • As always, the most important part of software engineering is to goof off and have fun. I mean … be efficient and stay on task!

  34. Disclaimer • I do not own any of the pictures and stuff in these slides. • LucasFilm and Star Wars are, unfortunately, owned by Disney. • I really hope they don’t give Mickey Mouse a cameo in Episode 7. • Seriously, that would be wrong. • I also don’t own the other pictures I got off of google.com. They belong to their respective owners.
