
Memory Protection




1. Memory Protection
• In order to prevent one process from reading/writing another process’ memory, we must ensure that a process cannot change its virtual-to-physical translations
• Typically, this is done by:
  • Having two processor modes: user & kernel
    • Only the OS runs in kernel mode
  • Only allowing kernel mode to write to the virtual memory state:
    • The page table
    • The page table base pointer
    • The TLB
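As a minimal C sketch of the idea on this slide (not from the original deck), a page table entry carries permission bits that only kernel-mode code may set, and the hardware checks them on every access. The field names and layout below are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical page-table entry: the kernel owns these fields; user-mode
 * code can never write them directly. */
typedef struct {
    uint64_t frame_number;  /* physical frame this page maps to      */
    bool     valid;         /* is the translation present in memory? */
    bool     writable;      /* may the page be written?              */
    bool     user;          /* is it accessible from user mode?      */
} pte_t;

typedef enum { MODE_USER, MODE_KERNEL } cpu_mode_t;

/* Sketch of the hardware's permission check on every memory access. */
bool access_allowed(pte_t pte, cpu_mode_t mode, bool is_write) {
    if (!pte.valid)                     return false;  /* page fault       */
    if (mode == MODE_USER && !pte.user) return false;  /* protection fault */
    if (is_write && !pte.writable)      return false;  /* protection fault */
    return true;
}
```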

2. Sharing Memory
• Paged virtual memory enables sharing at the granularity of a page, by allowing two page tables to point to the same physical addresses
• For example, if you run two copies of a program, the OS will share the code pages between the programs
[Figure: the virtual address spaces of Program A and Program B mapping into shared physical memory, with some pages backed by disk]
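To make the sharing concrete, here is a toy C sketch (assumed, not from the slides) in which two processes' page tables map the same code frames but different data frames; all frame numbers are invented.

```c
#include <stdio.h>

#define NUM_PAGES 4

int main(void) {
    /* Toy page tables for two copies of the same program: entry i maps
     * virtual page i to a physical frame number. */
    unsigned pt_A[NUM_PAGES] = { 10, 11, 20, 21 }; /* code in frames 10-11, data in 20-21 */
    unsigned pt_B[NUM_PAGES] = { 10, 11, 30, 31 }; /* same code frames, private data frames */

    for (int vpage = 0; vpage < NUM_PAGES; vpage++) {
        printf("virtual page %d -> A: frame %u, B: frame %u%s\n",
               vpage, pt_A[vpage], pt_B[vpage],
               pt_A[vpage] == pt_B[vpage] ? "  (shared)" : "");
    }
    return 0;
}
```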

3. Virtual Memory and the MIPS pipelined datapath
• In the datapath, two addresses need to be translated (virtual → physical):
  • instruction address (PC value)
    • IF stage: fetch PC instruction, compute PC+4 (next instruction)
    • ID stage: branch-target computation
  • data address (for loads/stores)
    • EX or MEM stage
• Two possible solutions (discussed in section):
  • PC register stores only PC_virtual
  • PC register stores both PC_virtual and PC_physical
• The first solution is simpler but slower, since the TLB is needed before the I-cache
• The second solution is more complex, and requires a TLB in the ID stage as well
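A rough C sketch of the first solution's cost (sizes and names assumed): with a physically-addressed I-cache, the virtual PC must go through the TLB before the cache can be indexed.

```c
#include <stdbool.h>
#include <stdint.h>

#define TLB_ENTRIES 16
#define PAGE_SHIFT  12   /* 4 KiB pages (assumed) */

/* Hypothetical TLB entry: caches one virtual-to-physical page translation. */
typedef struct {
    bool     valid;
    uint32_t vpn;   /* virtual page number   */
    uint32_t pfn;   /* physical frame number */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* The virtual PC is translated before the I-cache is indexed. Returns true
 * on a TLB hit; a miss would stall the IF stage while the page table is
 * walked. */
bool translate_pc(uint32_t pc_virtual, uint32_t *pc_physical) {
    uint32_t vpn    = pc_virtual >> PAGE_SHIFT;
    uint32_t offset = pc_virtual & ((1u << PAGE_SHIFT) - 1);

    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].vpn == vpn) {
            *pc_physical = (tlb[i].pfn << PAGE_SHIFT) | offset;
            return true;   /* hit: I-cache access proceeds this cycle */
        }
    }
    return false;          /* miss: stall and walk the page table */
}
```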

4. A third solution: caches use virtual addresses
[Figure: CPU connected to a little static RAM (the cache) and lots of dynamic RAM]
• If caches can be accessed using virtual addresses, the datapath can be greatly simplified
• What are some of the pitfalls of this approach, and how can they be handled?
  • Since the cache is shared, two programs using the same virtual addresses use the same cache space
    • the cache tag must store a process ID
  • If a process finishes and its process ID is later reused by a new process, the new process may get a false cache hit
    • when a process finishes, the appropriate cache entries must be invalidated
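A hedged C sketch of the fixes described on this slide (structure and sizes are hypothetical): the virtually-addressed cache stores a process ID in each tag, and all of a process's lines are invalidated when it exits.

```c
#include <stdbool.h>
#include <stdint.h>

#define CACHE_LINES 256
#define LINE_SHIFT  6    /* 64-byte lines (assumed) */

/* Hypothetical direct-mapped, virtually-addressed cache line: the tag
 * carries the owning process ID so two processes using the same virtual
 * address do not see each other's data. */
typedef struct {
    bool     valid;
    uint16_t pid;   /* process ID stored alongside the tag */
    uint32_t tag;   /* upper bits of the virtual address   */
} vcache_line_t;

static vcache_line_t cache[CACHE_LINES];

/* A lookup hits only if both the tag and the process ID match. */
bool cache_hit(uint32_t vaddr, uint16_t pid) {
    uint32_t index = (vaddr >> LINE_SHIFT) % CACHE_LINES;
    uint32_t tag   = vaddr >> LINE_SHIFT;
    const vcache_line_t *line = &cache[index];
    return line->valid && line->tag == tag && line->pid == pid;
}

/* Called when a process exits, so a later process that reuses the same
 * process ID cannot get a false hit on the dead process's lines. */
void invalidate_pid(uint16_t pid) {
    for (int i = 0; i < CACHE_LINES; i++) {
        if (cache[i].valid && cache[i].pid == pid) {
            cache[i].valid = false;
        }
    }
}
```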

5. Summary
• Virtual memory is great:
  • It means that we don’t have to manage our own memory
  • It allows different programs to use the same memory
  • It provides protection between different processes
  • It allows controlled sharing between processes (albeit somewhat inflexibly)
• The key technique is indirection:
  • Yet another classic CS trick you’ve seen in this class
  • Many problems can be solved with indirection
• Caching made a few cameo appearances, too:
  • Virtual memory enables using physical memory as a cache for disk
  • We used caching (in the form of the Translation Lookaside Buffer) to make virtual memory’s indirection fast

6. Embedded Processors
• MIPS processors for mobile and embedded consumer applications
  • e.g., multimedia-based devices, home entertainment systems, etc.
  • cannot afford too much silicon (space, cost)
• For most processors, instructions are not issued every cycle: many cycles are wasted executing nothing because the CPU is servicing a cache miss
• Traditional approaches (general CPUs):
  • better branch prediction, out-of-order execution, …
  • bigger, more associative caches
• Neither of these is feasible for embedded processors

7. Transistor usage over time (general CPU)
[Figure: transistor usage over time in general-purpose CPUs. Source: UPCRC Distinguished Lecture Series, Yale Patt]

8. MIPS Virtual Processor
• Maintains multiple contexts in hardware
  • when there is a missed cycle, the processor switches to another context
  • two virtual processing elements (VPEs) corresponding to the OS-visible state, each containing five thread contexts (TCs) corresponding to the user state
• To the OS/application, each VPE/TC looks like a fully featured CPU
• ISA extensions allow programmers access to these capabilities
  • Example: fork $rd, $rs, $rt
    • Start a new TC with PC = $rs; the new TC’s $rd = the forking TC’s $rt
• “If the MIPS VPE/TC can capture a good portion of those wasted cycles you have literally doubled the performance of the processor with no additional cores, pipelines or higher clock rates, and at considerably lower power consumption.”
• “A key unanswered question is: how easy will it be to use MIPS VPE architecture? It is likely to be harder than MIPS would like us to believe.”
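Below is a toy C model (not MIPS hardware or the MT extension itself) of the fork semantics as stated on the slide: the new thread context gets its PC from the parent's $rs and receives the parent's $rt value in its $rd. The register indices and TC pool are purely illustrative.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_TCS   5   /* thread contexts per VPE, per the slide */
#define NUM_REGS 32

/* Toy model of one hardware thread context (TC): a PC plus a register file. */
typedef struct {
    int      active;
    uint32_t pc;
    uint32_t regs[NUM_REGS];
} tc_t;

static tc_t tcs[NUM_TCS];

/* Software model of "fork $rd, $rs, $rt": allocate a free TC, set its PC
 * from the parent's $rs, and copy the parent's $rt into the child's $rd.
 * Returns the child's TC index, or -1 if no TC is free. */
int fork_tc(tc_t *parent, int rd, int rs, int rt) {
    for (int i = 0; i < NUM_TCS; i++) {
        if (!tcs[i].active) {
            tcs[i].active   = 1;
            tcs[i].pc       = parent->regs[rs];
            tcs[i].regs[rd] = parent->regs[rt];
            return i;
        }
    }
    return -1;
}

int main(void) {
    tcs[0].active  = 1;
    tcs[0].regs[4] = 0x00400100u; /* $a0: entry point for the new thread  */
    tcs[0].regs[5] = 42;          /* $a1: value to hand to the new thread */

    /* fork $a0, $a0, $a1 -> child starts at parent's $a0, child's $a0 = parent's $a1 */
    int child = fork_tc(&tcs[0], 4, 4, 5);
    if (child < 0) return 1;

    printf("child TC %d starts at 0x%08x with argument %u\n",
           child, (unsigned)tcs[child].pc, (unsigned)tcs[child].regs[4]);
    return 0;
}
```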
