
CSC369 – tutorial 6: Midterm review


Presentation Transcript


  1. CSC369 – tutorial 6: Midterm review TA: Trevor Brown Slides: http://www.cs.utoronto.ca/~tabrown/csc369/week6.ppt

  2. Paging memory

  3. Address Translation
     • Bits for the page number = 22-bit virtual address - 16 offset bits (for a 64 KB page size) = 6, so the number of virtual pages is 2^6 = 64.
     • vaddr 0x03BEEF is vpn 0x03 with offset 0xBEEF.
     • The page table maps virtual page number (vpn) 3 to frame 0xF0, so the physical address is 0xF0BEEF.
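To make the arithmetic above concrete, here is a minimal C sketch of the same forward translation, assuming the slide's 22-bit virtual addresses, 64 KB pages (16-bit offset), and a page-table entry mapping vpn 3 to frame 0xF0; the constant and variable names are illustrative only.

    #include <stdio.h>

    #define OFFSET_BITS 16                         /* 64 KB pages */
    #define OFFSET_MASK ((1u << OFFSET_BITS) - 1)  /* 0xFFFF */

    int main(void) {
        unsigned vaddr  = 0x03BEEF;                 /* 22-bit virtual address */
        unsigned vpn    = vaddr >> OFFSET_BITS;     /* 0x03   */
        unsigned offset = vaddr & OFFSET_MASK;      /* 0xBEEF */
        unsigned frame  = 0xF0;                     /* page-table entry for vpn 3 */
        unsigned paddr  = (frame << OFFSET_BITS) | offset;

        printf("vpn=0x%02X offset=0x%04X paddr=0x%06X\n", vpn, offset, paddr);
        return 0;                                   /* prints paddr=0xF0BEEF */
    }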

  4. Address Translation (continued)
     • Physical address 0x2CF070 is physical frame 0x2C with offset 0xF070. Searching the page table, we find frame 0x2C is allocated to virtual page number 1, so the virtual address is 0x01F070.
     • If physical addresses are 32 bits and we still need 16 bits for the offset, then there are 16 bits for the physical frame number, giving 2^16 = 64K physical page frames.
     • The cost of translation: extra time to look up entries in page tables (extra memory accesses), or extra memory space to store the page tables.
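The reverse direction can be sketched the same way: split the physical address into a frame number and an offset, then scan the page table for the vpn that maps to that frame. The 16-bit offset and the frame 0x2C -> vpn 1 mapping come from the slide; the tiny in-memory page table below is a made-up stand-in.

    #include <stdio.h>

    #define OFFSET_BITS 16
    #define OFFSET_MASK ((1u << OFFSET_BITS) - 1)
    #define NUM_PAGES   64   /* 2^6 virtual pages, as computed above */

    int main(void) {
        /* Hypothetical page table: page_table[vpn] = frame, or -1 if unmapped. */
        int page_table[NUM_PAGES];
        for (int i = 0; i < NUM_PAGES; i++)
            page_table[i] = -1;
        page_table[1] = 0x2C;                       /* the mapping from the slide */

        unsigned paddr  = 0x2CF070;
        unsigned frame  = paddr >> OFFSET_BITS;     /* 0x2C   */
        unsigned offset = paddr & OFFSET_MASK;      /* 0xF070 */

        for (unsigned vpn = 0; vpn < NUM_PAGES; vpn++) {
            if (page_table[vpn] == (int)frame) {
                unsigned vaddr = (vpn << OFFSET_BITS) | offset;
                printf("frame 0x%X -> vpn %u, vaddr=0x%06X\n", frame, vpn, vaddr);
                /* prints vaddr=0x01F070 */
            }
        }
        return 0;
    }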

  5. Allocation and fragmentation • Answer:

  6. Synchronization

  7. Unprotected counter increments
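The code from this slide is not in the transcript, but the bug it refers to typically looks like the sketch below: several threads increment a shared counter, and because counter++ compiles to a load, an add, and a store, concurrent increments can be lost unless the update is protected (here with a pthread mutex). The thread and iteration counts are illustrative only.

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define NITERS   1000000

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < NITERS; i++) {
            /* Unprotected version (a data race): counter++;
             * Protected version: hold the mutex across the update. */
            pthread_mutex_lock(&lock);
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t tids[NTHREADS];
        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&tids[i], NULL, worker, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tids[i], NULL);
        printf("counter = %ld (expected %ld)\n", counter, (long)NTHREADS * NITERS);
        return 0;
    }

Compile with -pthread; if the mutex calls are removed, the final count usually falls short of the expected value.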

  8. Data races
     • Are there any data races in this code?
     • If so: how can they occur, and how can we fix them?

  9. Data races (continued)
     • Answer:
     • Suppose throwing 128 CPUs at this code doesn’t improve performance much. Why would that be the case?

  10. Data races (continued)
     • Answer: the running time of show_money is tiny compared to the thread creation time, and we can only create one thread at a time (show_money is only a few instructions).
     • Lesson: concurrent jobs have to be substantial enough to warrant thread creation.
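show_money itself is not shown in the transcript, but the pattern being described looks roughly like this sketch: main creates the threads one at a time, and each thread's body is only a few instructions, so the serial pthread_create loop dominates the runtime and adding CPUs barely helps. The body of show_money below is a made-up stand-in.

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 128

    /* Stand-in for the slide's show_money: only a few instructions of work. */
    static void *show_money(void *arg) {
        printf("thread %ld\n", (long)arg);
        return NULL;
    }

    int main(void) {
        pthread_t tids[NTHREADS];
        /* This loop is inherently serial: one pthread_create call at a time,
         * and each call costs far more than the work its thread will do. */
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&tids[i], NULL, show_money, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tids[i], NULL);
        return 0;
    }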

  11. CPU/Thread Scheduling

  12. Multi-level Feedback Queue (MLFQ)
     • Consider a 2-level MLFQ:
        • Level 0 (L0): round-robin with quantum = 2
        • Level 1 (L1): first-come-first-served
        • New processes go to the back of L0.
        • Processes that finish an I/O burst go to the back of L0.
     • The workload:
        • Three processes (P0, P1, P6) spawn at times 0, 1, 6.
        • Each process does: a CPU burst of 5, then an I/O burst of 3, then a CPU burst of 1.
     • The problem: for the first 20 time units, for each process, write one of the following: [blank], new, [has] CPU, [is] preempt[ed], [is on queue] L0 / L1 / IO, exit.
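The 20-unit trace is left as the exercise, but the queueing rules can be made concrete in C. The sketch below illustrates only the L0 quantum and the demotion step, assuming (as is standard for MLFQ, though the slide does not spell it out) that a process which uses up its full L0 quantum is demoted to L1; the struct and function names are illustrative.

    #include <stdio.h>

    #define QUANTUM_L0 2   /* Level 0: round-robin with quantum = 2 */

    typedef struct {
        const char *name;
        int level;          /* 0 = round-robin queue, 1 = FCFS queue */
        int ticks_in_slice; /* CPU time used in the current L0 quantum */
    } proc_t;

    /* Account for one time unit of CPU given to process p. */
    static void run_one_tick(proc_t *p) {
        if (p->level == 0 && ++p->ticks_in_slice == QUANTUM_L0) {
            /* Quantum exhausted on L0: assume demotion to L1 (FCFS). */
            p->level = 1;
            p->ticks_in_slice = 0;
            printf("%s: L0 quantum expired, demoted to L1\n", p->name);
        }
    }

    int main(void) {
        proc_t p0 = { "P0", 0, 0 };
        run_one_tick(&p0);   /* one unit used, still on L0 */
        run_one_tick(&p0);   /* quantum of 2 exhausted -> moves to L1 */
        printf("%s is now on level %d\n", p0.name, p0.level);
        return 0;
    }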

  13. Multi-level Feedback Queue (MLFQ)
