
Remus: High Availability via Asynchronous Virtual Machine Replication



  1. Remus: High Availability via Asynchronous Virtual Machine Replication Nour Stefan, SCPD

  2. Introduction • Related work • Design • Implementation • Evaluation • Future work • Conclusions

  3. Introduction • Highly available systems • require redundant components that can maintain replicated state and switch to backups in case of failure • Commercial HA systems are expensive • specialized hardware • customized software

  4. Introduction • Remus • high availability on commodity hardware • migrates running VMs between physical hosts • replicates snapshots of an entire running OS instance at very high frequency (as often as every 25 ms) • external output is not released until the system state that produced it has been replicated • Virtualization • makes it possible to create a copy of a running machine, but does not guarantee that the process will be efficient

  5. Introduction • Running two hosts in lock-step (synchronously) • reduces the throughput of memory to that of the network device • Solution: • one host executes speculatively, then checkpoints and replicates its state asynchronously • system state is not made externally visible until the checkpoint is committed • the system effectively runs tens of milliseconds in the past

  6. Introduction • Goals: • Generality • Transparency • The reality in many environments is that OS and application source may not even be available to modify • Seamless failure recovery • No externally visible state should ever be lost in the case of single-host failure

  7. Approach

  8. Approach • Speculative execution • buffer output until a more convenient time, performing computation speculatively ahead of synchronization points • Asynchronous replication • buffering output at the primary server allows replication to be performed asynchronously • the primary host can resume execution the moment its machine state has been captured, without waiting for acknowledgment from the remote end
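The output-buffering idea on this slide can be sketched in a few lines of Python (illustrative only; `OutputBuffer`, `emit`, and `release` are hypothetical names, not part of Remus or Xen):

```python
from collections import deque

class OutputBuffer:
    """Holds externally visible output produced during speculative execution.

    Nothing is released until the synchronization point (the checkpoint
    acknowledgment from the backup) is reached.
    """
    def __init__(self):
        self._pending = deque()

    def emit(self, packet):
        # Speculative execution: output is produced but held back.
        self._pending.append(packet)

    def release(self, send):
        # Called once the backup acknowledges the checkpoint:
        # all buffered output becomes externally visible, in order.
        while self._pending:
            send(self._pending.popleft())

buf = OutputBuffer()
sent = []
buf.emit("reply-1")
buf.emit("reply-2")
assert sent == []          # nothing visible before the acknowledgment
buf.release(sent.append)   # backup acknowledged the epoch
assert sent == ["reply-1", "reply-2"]
```

Because the buffer only delays (never reorders or drops) output, clients observe a slightly laggy but otherwise normal machine.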

  9. Related work • Xen support for live migration • extended significantly to support frequent, remote checkpointing • Virtual machine logging and replay • ZAP • a virtualization layer within the Linux kernel • SpecNFS • uses speculative execution to isolate I/O processing from computation

  10. Design • Remus achieves high availability by • propagating frequent checkpoints of an active VM to a backup physical host • on the backup, the VM image is resident in memory and may begin execution immediately if failure of the active system is detected • the backup is only periodically consistent with the primary => all network output must be buffered until state is synchronized on the backup • the virtual machine does not actually execute on the backup until a failure occurs => the backup can concurrently protect VMs running on multiple physical hosts in an N-to-1 configuration

  11. Design

  12. Design – Failure model • Remus provides the following properties: • the fail-stop failure of any single host is tolerable • should the primary and backup hosts fail concurrently, the protected system’s data will be left in a crash-consistent state • no output is made externally visible until the associated system state has been committed to the replica • Remus does not aim to recover from software errors or non-fail-stop conditions

  13. Design – Pipelined checkpoints • Checkpointing a running virtual machine many times per second places extreme demands on the host system • Remus addresses this by aggressively pipelining the checkpoint operation • an epoch-based system: execution of the active VM is bounded by brief pauses during which changed state is atomically captured, and external output is released once that state has been propagated to the backup

  14. Design – Pipelined checkpoints • (1) Once per epoch, pause the running VM and copy any changed state into a buffer • (2) Buffered state is transmitted and stored in memory on the backup host • (3) Once the complete set of state has been received, the checkpoint is acknowledged to the primary • (4) Buffered network output is released
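The four numbered steps above can be simulated with a toy in-memory model (a sketch; `VM`, `Backup`, and `run_epoch` are invented stand-ins, not Xen interfaces):

```python
class VM:
    """Toy guest: tracks full state plus the pages dirtied this epoch."""
    def __init__(self):
        self.state, self.dirty, self.paused = {}, {}, False
    def write(self, page, value):
        self.state[page] = value
        self.dirty[page] = value
    def pause(self): self.paused = True
    def resume(self): self.paused = False
    def copy_changed_state(self):
        delta, self.dirty = self.dirty, {}
        return delta

class Backup:
    """Toy backup host: keeps the replicated VM image in memory."""
    def __init__(self): self.image = {}
    def receive(self, delta): self.image.update(delta)

def run_epoch(vm, backup, pending_out, released):
    vm.pause()                      # (1) pause; copy changed state to a buffer
    delta = vm.copy_changed_state()
    vm.resume()                     #     primary resumes speculative execution
    backup.receive(delta)           # (2) state stored in memory on the backup
    ack = True                      # (3) backup acknowledges the checkpoint
    if ack:                         # (4) buffered network output is released
        released.extend(pending_out)
        pending_out.clear()

vm, backup = VM(), Backup()
vm.write("p1", "a")
pending, released = ["reply"], []
run_epoch(vm, backup, pending, released)
assert backup.image == {"p1": "a"}
assert released == ["reply"]
```

Note that step (1) is the only part that requires the guest to be paused; steps (2)-(4) overlap with the next epoch's execution, which is what makes the pipeline worthwhile.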

  15. Design - Memory and CPU • Writable working sets • Shadow pages • Remus implements checkpointing as repeated executions of the final stage of live migration: each epoch, the guest is paused while changed memory and CPU state is copied to a buffer • The guest then resumes execution on the current host, rather than on the destination

  16. Design - Memory and CPU • Migration enhancements • every checkpoint is just the final stop-and-copy phase of migration • examination of Xen’s checkpoint code showed that the majority of the time spent while the guest is suspended is lost to scheduling (the xenstore daemon) • Remus reduces the number of inter-process requests required to suspend and resume the guest domain • and removes xenstore entirely from the suspend/resume process

  17. Design - Memory and CPU • Checkpoint support • resuming execution of a domain after it had been suspended • Xen previously did not allow “live checkpoints” and instead destroyed the VM after writing its state out • Asynchronous transmission • the migration process was modified to copy touched pages to a staging buffer rather than delivering them directly to the network while the domain is paused
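The staging-buffer idea on this slide can be sketched as follows (illustrative Python; `GuestVM` and `checkpoint_async` are invented names, and a real implementation copies pages inside the hypervisor tool stack, not in Python):

```python
import threading

class GuestVM:
    """Minimal stand-in for a pausable guest domain (illustrative)."""
    def __init__(self):
        self.dirty_pages = {}
        self.paused = False
    def pause(self): self.paused = True
    def resume(self): self.paused = False

def checkpoint_async(vm, transmit):
    vm.pause()
    # Copy touched pages to a local staging buffer while the domain is
    # paused: a fast memory copy instead of network delivery.
    staging = dict(vm.dirty_pages)
    vm.dirty_pages.clear()
    vm.resume()   # the guest runs again before transmission completes
    # Push the staged checkpoint to the backup in the background.
    t = threading.Thread(target=transmit, args=(staging,))
    t.start()
    return t

vm = GuestVM()
vm.dirty_pages["p0"] = "data"
received = []
checkpoint_async(vm, received.append).join()
assert received == [{"p0": "data"}]
```

The design choice being illustrated: pause time now depends only on local memory bandwidth, not on network latency to the backup.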

  18. Design – Network buffering • TCP • it is crucial that packets queued for transmission be held until the checkpointed state of the epoch in which they were generated is committed to the backup
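One way to picture this epoch-keyed buffering (a sketch; `EpochNetBuffer` is a hypothetical name — the paper implements the real buffer as a network queuing discipline on the host):

```python
from collections import defaultdict, deque

class EpochNetBuffer:
    """Tags each outgoing packet with the epoch that generated it and
    holds it until that epoch's checkpoint commits on the backup."""
    def __init__(self):
        self.queues = defaultdict(deque)
        self.current_epoch = 0

    def queue(self, packet):
        self.queues[self.current_epoch].append(packet)

    def start_new_epoch(self):
        self.current_epoch += 1

    def commit(self, epoch, send):
        # Checkpoint for `epoch` is committed: its packets may now
        # leave the host, in the order they were generated.
        while self.queues[epoch]:
            send(self.queues[epoch].popleft())

buf = EpochNetBuffer()
buf.queue("pkt-a")          # generated in epoch 0
buf.start_new_epoch()
buf.queue("pkt-b")          # generated in epoch 1
sent = []
buf.commit(0, sent.append)  # only epoch 0's output is released
assert sent == ["pkt-a"]
```

If the primary fails before an epoch commits, that epoch's packets are simply never sent, so TCP peers see a retransmission-recoverable loss rather than state that the backup cannot reproduce.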

  19. Design – Disk buffering • Remus must preserve crash consistency even if both hosts fail

  20. Design – Detecting failure • Simple failure detector: • a timeout of the backup responding to commit requests will result in the primary assuming that the backup has crashed and disabling protection • a timeout of new checkpoints being transmitted from the primary will result in the backup assuming that the primary has crashed and resuming execution from the most recent checkpoint
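The symmetric timeout logic on this slide can be expressed as a small pure function (a sketch; the names and return values are illustrative, and the real detector is integrated with the checkpoint stream itself):

```python
def detect_failure(last_ack, last_checkpoint, now, timeout):
    """Timeout-based failure detector, one view per host (sketch)."""
    # Primary's view: no commit acknowledgment from the backup within
    # the timeout => assume the backup crashed, disable protection and
    # continue running unprotected.
    primary_view = ("disable protection"
                    if now - last_ack > timeout else "protected")
    # Backup's view: no new checkpoint from the primary within the
    # timeout => assume the primary crashed, resume execution from the
    # most recent committed checkpoint.
    backup_view = ("resume from last checkpoint"
                   if now - last_checkpoint > timeout else "standby")
    return primary_view, backup_view

# Both sides quiet for too long => each takes its recovery action.
assert detect_failure(0.0, 0.0, 5.0, 1.0) == (
    "disable protection", "resume from last checkpoint")
```

Because output was buffered until commit, the backup resuming from the last checkpoint never contradicts anything the outside world has already seen.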

  21. Evaluation

  22. Evaluation

  23. Future work • Deadline scheduling • Page compression • Copy-on-write checkpoints • Hardware virtualization support • Cluster replication • Disaster recovery • Log-structured datacenters

  24. Conclusions • Remus • novel system running on commodity hardware • uses virtualization to encapsulate a protected VM • performs frequent whole-system checkpoints to asynchronously replicate the state of a single speculatively executing virtual machine

  25. ?
