
Instructor - Allan Ackerman VCA-DCV & VCP5-DCV

CIT 198 Week#13, Module 11 from the eBook (High Availability and Fault Tolerance) and Sybex Chapter#11 (Monitor a vSphere Implementation). Instructor - Allan Ackerman VCA-DCV & VCP5-DCV. Click the graphic for assessment.



Presentation Transcript


  1. CIT 198 Week#13 Module 11 from the eBook: High Availability and Fault Tolerance. Sybex Chapter#11: Monitor a vSphere Implementation. Instructor - Allan Ackerman VCA-DCV & VCP5-DCV Click the graphic for assessment

  2. This week our objectives will be • Complete labs 19, 20, 21 from the NDG/Cisco. • Note – NDG Lab#22 is optional and we will not be doing it. • Complete labs 27 & 28 on the in-class virtual lab. • In chapter 11 of the Sybex book we will look at monitoring a vSphere implementation and managing vSphere alarms. • In chapter 11 of the eBook we will be covering High Availability and Fault Tolerance. • Next week's quiz will be drawn evenly from Sybex chapter 11, eBook chapter 11, and tonight's labs and PowerPoint. • Note – as promised, I will be putting some actual retired VMware VCP-DCV exam questions on the last two quizzes. • Next week's quiz will be our last quiz; there will not be a quiz on May 6. Week#13 vSphere 5.1 & 5.5

  3. Almost at the end • This week and next we will have 4 labs dealing with the vMA, so there will be exactly 30 in-class labs. • There will be no new labs the week of May 6. • The last week, May 6, we will have a short lecture on Update Manager, but we will not install it or do an in-class lab on it. Update Manager is covered in chapter 13 of the eBook. The rest of the class period will be used to finish any of the thirty unfinished in-class labs. • Note – we have already covered chapter 14 in the eBook; we did that the first couple of weeks of class. Chapter 14 covers installing vCenter on a Windows server and installing the VCSA. We have already done both tasks. • The final exam will be on May 13: 100 questions from the labs, the eBook, and the Sybex book. Week#13 vSphere 5.1 & 5.5

  4. Important classroom info for the week of April 22 • The 9th quiz average is back up to 72%. Remember, all quizzes are open book. Have your eBook functional at school as well as your Sybex book, and look things up. (Remember, the Gilmore book can be active on 4 devices at one time, and it is extremely easy to move from device to device. So in reality you can have your eBook on as many devices as you want, but only 4 active at once.) • Finish our NDG labs 19, 20, 21. • Next, complete all in-class labs through lab#28. Week#13 vSphere 5.1 & 5.5

  5. Our NDG lab#19 Our goals in this lab will be to: • Create a virtual machine alarm that monitors for a condition. • Create a virtual machine alarm that monitors for an event. • Trigger virtual machine alarms and acknowledge them. • Disable virtual machine alarms. Note: there is a typo on page 5 of NDG lab#19: the password for the VCSA is vmware, not vmware123 (that is the password for the two ESXi hosts). Week#13 vSphere 5.1 & 5.5

  6. Our NDG lab#19 Open up the cpubusy.vbs script and notice some minor errors in the code. It reports 3 million sines, but only computes a little over 2 million, and the sine calculation always uses the same argument. Many optimizing compilers would pick this up and do the calculation only once, not a couple of million times, so it is a good idea to change the argument of the function each time through the loop. Last week's lab already applied these edits to the script. As you go through the lab, make sure you notice that you can monitor for events or for specific conditions or states. Week#13 vSphere 5.1 & 5.5
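
The two fixes can be sketched in Python (the original script is VBScript; this is a minimal stand-in for illustration, not the lab's actual code): the loop counts every sine it actually computes, and the argument varies on each pass so the calculation cannot be hoisted out of the loop.

```python
import math
import time

def cpu_busy(iterations=3_000_000):
    """Burn CPU with sine calculations, counting each one accurately.
    The argument changes every pass, so an optimizer cannot compute
    the sine once and reuse the result."""
    total = 0.0
    for i in range(iterations):
        total += math.sin(i)  # argument varies each iteration
    return iterations, total

start = time.time()
count, _ = cpu_busy()
print(f"computed {count} sines in {time.time() - start:.2f} s")
```

The reported count now matches the work done, which is what makes the script useful for the alarm-triggering exercise.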

  7. Our NDG lab#19 Check out the various settings on the trigger page, and on the action page make sure you know the 4 state changes that can trigger the alarm action: • green to yellow • yellow to red • red to yellow • yellow to green It would be a good idea to practice this lab again in our in-class virtual lab environment. Practice makes perfect. Week#13 vSphere 5.1 & 5.5

  8. Our NDG lab#20 Our goals in this lab will be to: • Create a cluster enabled for VMware HA. • Add the hosts to a lab cluster. • Test VMware HA functionality. This is a really easy lab to do – VMware makes cluster setup a breeze. Make sure you know and understand all your admission control policies for HA. Our next NDG lab deals with DRS – make sure you understand the difference between these two technologies – they make a great team but they are not really related. Week#13 vSphere 5.1 & 5.5

  9. Our NDG lab#20 Note: The lab says to shut down esxi1, which will restart all of our machines on esxi2 via HA. While this will work, your ESXi1 is then powered down and you have no way to do lab 21. I would select a reboot of ESXi1, not a shutdown; that should still test HA and get your ESXi1 host running again for lab 21. Week#13 vSphere 5.1 & 5.5

  10. Our NDG lab#21 This is our last NDG lab. Our goals in this lab will be to: 1. Create a DRS cluster. 2. Verify proper DRS cluster functionality. 3. Create, test, and disable affinity rules. 4. Create, test, and disable anti-affinity rules. This is another easy lab to set up and use. Most people just set their cluster to fully automated, and their cluster does initial placement of VMs and load balances automatically. This thing just works. Week#13 vSphere 5.1 & 5.5

  11. Our NDG lab#21 Make sure you know all your VM-VM affinity rules. Make sure you know all your VM-VM anti-affinity rules. Make sure you know all your VM-Host affinity rules. Make sure you know all your VM-Host anti-affinity rules. Make sure you understand when and in what situations you would apply the above rules. Make sure you know your cluster configuration maximums – here they are again – 32 hosts per cluster, 512 VMs per host, 3000 VMs per cluster. Week#13 vSphere 5.1 & 5.5
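
The core logic of the VM-VM rules is simple to state: an affinity rule is satisfied when all of its VMs run on the same host, and an anti-affinity rule when they all run on different hosts. A minimal Python sketch (VM and host names here are hypothetical, not from the lab):

```python
# Hypothetical placement: which host each VM is currently running on.
placement = {"web1": "esxi1", "web2": "esxi2", "app1": "esxi1", "db1": "esxi1"}

def rule_satisfied(vms, placement, keep_together):
    """VM-VM affinity (keep_together=True): all listed VMs on one host.
    VM-VM anti-affinity (keep_together=False): all on different hosts."""
    hosts = [placement[vm] for vm in vms]
    if keep_together:
        return len(set(hosts)) == 1
    return len(set(hosts)) == len(hosts)

print(rule_satisfied(["web1", "app1"], placement, keep_together=True))   # → True
print(rule_satisfied(["web1", "web2"], placement, keep_together=False))  # → True
print(rule_satisfied(["web1", "db1"], placement, keep_together=False))   # → False (same host)
```

Anti-affinity is the typical choice for redundant VMs (for example two web servers) so that one host failure cannot take out both.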

  12. Our in-class lab#27 • This week and next we will have 4 labs dealing with the vMA. • So there will be exactly 30 in-class labs. • The vSphere Management Assistant (vMA) is a virtual appliance that includes prepackaged software such as a Linux distribution, the vSphere command‐line interface, and the vSphere SDK for Perl. Basically it is the missing service console for ESXi. But it’s more than that too. • The vMA allows administrators to run scripts or agents that interact with ESX/ESXi and vCenter Server systems without having to explicitly authenticate each time. vMA can also collect ESX/ESXi and vCenter Server logs and store the information for analysis. • Lab#27 will go through the whole installation of this VM and get the VM functional. Let’s get started. Week#13 vSphere 5.1 & 5.5

  13. Our in-class lab#27 • We will be creating a new VM called vMA.vita.local. We will configure it to have a static IP of 192.168.246.18. • We will need to add this machine to the forward lookup zone of our DNS server. • We will need to start the ESXi Shell and SSH service, as they are disabled by default. • We will install the vMA with a fixed address of 192.168.246.18/24. We will give it a host name of vma.vita.local, the DNS server address of 192.168.246.19, and the default gateway address of 192.168.246.2. • After it is installed and configured, we will log in to the new VM via PuTTY. Week#13 vSphere 5.1 & 5.5

  14. Our in-class lab#28 • This lab is really just a continuation of the previous lab. • We will be making the vMA a little more user-friendly by getting our servers, usernames, and passwords all set for fastpass authentication. • Next week, after these two labs are complete, we will be able to use our vMA and the esxtop utilities for some serious benchmarking and troubleshooting. Week#13 vSphere 5.1 & 5.5

  15. High Availability and Fault Tolerance Module 11 eBook Week#13 vSphere 5.1 & 5.5

  16. You Are Here HA & FT Week#13 vSphere 5.1 & 5.5

  17. Importance • Most organizations rely on computer-based services like email, databases, and Web-based applications. The failure of any of these services can mean lost productivity and revenue. • Configuring highly available, computer-based services is extremely important for an organization to remain competitive in contemporary business environments. Week#13 vSphere 5.1 & 5.5

  18. Module Lessons • Lesson 1: Introduction to vSphere High Availability • Lesson 2: Configuring vSphere HA • Lesson 3: vSphere HA Architecture • Lesson 4: Introduction to Fault Tolerance • Lesson 5: Introduction to Replication Week#13 vSphere 5.1 & 5.5

  19. Lesson 1 Introduction to vSphere High Availability Week#13 vSphere 5.1 & 5.5

  20. Learner Objectives • After this lesson, you should be able to do the following: • Describe the options that you can configure to ensure high availability in a VMware vSphere® environment. • Discuss the response of VMware vSphere® High Availability (vSphere HA) when a VMware vSphere® ESXi™ host, a virtual machine, or an application fails. Week#13 vSphere 5.1 & 5.5

  21. VMware Offers Protection at Every Level Multi-layer protection Week#13 vSphere 5.1 & 5.5

  22. vCenter Server Availability: Recommendations • Make VMware® vCenter Server™ and the components it relies on highly available. • vCenter Server relies on: • vCenter Server database: • Cluster the database. See the documentation for the database. • Active Directory structure: • Set up with multiple redundant servers. • Methods for making vCenter Server available: • Use vSphere High Availability to protect the vCenter Server virtual machine. • Use VMware® vCenter™ Server Heartbeat™. Week#13 vSphere 5.1 & 5.5

  23. High Availability • A highly available system is one that is continuously operational for a desirably long length of time. What level of virtual machine availability is important to you? Week#13 vSphere 5.1 & 5.5

  24. vSphere HA 2 – 32 hosts in an HA cluster Week#13 vSphere 5.1 & 5.5

  25. vSphere HA Failure Scenarios • vSphere HA protects against: • ESXi host failure • Virtual machine/guest operating system failure • Application failure • Other scenarios are discussed in lesson 3: • Management network failures: • Network partition • Network isolation Week#13 vSphere 5.1 & 5.5

  26. vSphere HA Failure Scenario: Host Host Failure Week#13 vSphere 5.1 & 5.5

  27. vSphere HA Failure Scenario: Guest Operating System VMware Tools needs to be installed Week#13 vSphere 5.1 & 5.5

  28. vSphere HA Failure Scenario: Application You need a 3rd party app to get this to work Week#13 vSphere 5.1 & 5.5

  29. Review of Learner Objectives • You should be able to do the following: • Describe the options that you can configure to ensure high availability in a vSphere environment. • Discuss the response of vSphere HA when an ESXi host, a virtual machine, or an application fails. Week#13 vSphere 5.1 & 5.5

  30. Lesson 2 Configuring vSphere HA Week#13 vSphere 5.1 & 5.5

  31. Learner Objectives • After this lesson, you should be able to configure a vSphere HA cluster. Week#13 vSphere 5.1 & 5.5

  32. What Is a Cluster? • A cluster is a collection of ESXi hosts and associated virtual machines with VMware vSphere High Availability and DRS enabled. • A DRS cluster is managed by VMware vCenter Server and has these resource management capabilities: • Initial placement • Load balancing • Power management Week#13 vSphere 5.1 & 5.5

  33. Enabling vSphere HA Enable vSphere HA by creating a cluster or modifying a vSphere Distributed Resource Scheduler (DRS) cluster. Week#13 vSphere 5.1 & 5.5

  34. Configuring vSphere HA Settings HA settings Week#13 vSphere 5.1 & 5.5

  35. Admission Control Policy Choices Three admission control policy settings Week#13 vSphere 5.1 & 5.5
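
The three policy choices in vSphere 5.x are Host Failures the Cluster Tolerates (slot-based), Percentage of Cluster Resources Reserved, and Specify Failover Hosts. The percentage policy is the easiest to reason about: a VM is admitted only if its reservation still leaves the configured failover percentage of capacity free. A sketch with illustrative numbers (CPU only; the real check covers memory too):

```python
def admit_vm(total_mhz, reserved_mhz, new_vm_mhz, failover_pct=25):
    """Percentage-of-cluster-resources policy: keep failover_pct of total
    cluster capacity free for failover; admit the new VM only if its
    reservation fits in the remaining usable capacity."""
    usable = total_mhz * (1 - failover_pct / 100)
    return reserved_mhz + new_vm_mhz <= usable

# 40 GHz cluster with 25% reserved for failover → 30 GHz usable.
print(admit_vm(40_000, 25_000, 4_000))  # 29 GHz <= 30 GHz → True
print(admit_vm(40_000, 25_000, 6_000))  # 31 GHz > 30 GHz → False
```
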

  36. Configuring Virtual Machine Options VM restart priority determines relative order in which virtual machines are restarted after a host failure. Configure options at the cluster level or per virtual machine. Host Isolation response determines what happens to virtual machines when a host loses the management network but continues running. Week#13 vSphere 5.1 & 5.5
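
Restart priority amounts to an ordering: after a host failure, High-priority VMs are powered on before Medium, Medium before Low, and Disabled VMs are not restarted at all. A minimal Python sketch (VM names are hypothetical):

```python
# Relative restart order for each priority level; "Disabled" VMs are skipped.
PRIORITY_RANK = {"High": 0, "Medium": 1, "Low": 2}

def restart_order(vms):
    """Given {vm_name: priority}, return the order HA would restart them in."""
    eligible = [vm for vm, p in vms.items() if p != "Disabled"]
    return sorted(eligible, key=lambda vm: PRIORITY_RANK[vms[vm]])

vms = {"db1": "High", "web1": "Medium", "test1": "Low", "scratch": "Disabled"}
print(restart_order(vms))  # → ['db1', 'web1', 'test1']
```

A common pattern is to give infrastructure VMs (domain controllers, databases) High priority so dependent application VMs come up after them.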

  37. Configuring Virtual Machine Monitoring Reset a virtual machine if its VMware Tools heartbeat or VMware Tools application heartbeats are not received. Remember this can be done at the VM level Determine how quickly failures are detected. Set monitoring sensitivity for individual virtual machines. Week#13 vSphere 5.1 & 5.5
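
The monitoring logic itself is a timeout check: if no VMware Tools heartbeat arrives within the failure interval for the chosen sensitivity, the VM is reset. A sketch in Python; the interval values are assumptions based on the documented defaults for each sensitivity preset:

```python
# Assumed default failure intervals (seconds) per sensitivity preset.
FAILURE_INTERVAL = {"High": 30, "Medium": 60, "Low": 120}

def needs_reset(last_heartbeat, now, sensitivity="Medium"):
    """Reset the VM if no heartbeat has been seen within the interval."""
    return (now - last_heartbeat) > FAILURE_INTERVAL[sensitivity]

# 45 seconds without a heartbeat:
print(needs_reset(last_heartbeat=100, now=145, sensitivity="High"))    # 45 > 30 → True
print(needs_reset(last_heartbeat=100, now=145, sensitivity="Medium"))  # 45 <= 60 → False
```

This is why sensitivity matters per VM: a chatty interval catches failures fast but risks resetting a VM that is merely busy.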

  38. Importance of Redundant Heartbeat Networks • In a vSphere HA cluster, heartbeats are: • Sent between the master and the slave hosts • Used to determine if a master or slave host has failed • Sent over a heartbeat network • The heartbeat network is: • Implemented using a VMkernel port marked for management • Redundant heartbeat networks: • Allow for the reliable detection of failures Week#13 vSphere 5.1 & 5.5

  39. Redundancy Using NIC Teaming • You can use NIC teaming to create a redundant heartbeat network on ESXi hosts. • Both port groups must be VMkernel ports. NIC teaming on an ESXi host Week#13 vSphere 5.1 & 5.5

  40. Redundancy Using Additional Networks • You can also create redundancy by configuring more heartbeat networks: • On ESXi hosts, add one or more VMkernel networks marked for management traffic. Week#13 vSphere 5.1 & 5.5

  41. Network Configuration and Maintenance • Before changing the networking configuration on the ESXi hosts (adding port groups, removing vSwitches): • Deselect Enable Host Monitoring. • Place the host in maintenance mode. • These steps prevent unwanted attempts to fail over virtual machines. Week#13 vSphere 5.1 & 5.5

  42. Cluster Resource Allocation Tab • How much CPU and memory resources is the cluster using now? • How much reserved capacity remains? Week#13 vSphere 5.1 & 5.5

  43. Monitoring Cluster Status (cluster's Summary tab) • The vSphere HA Cluster Status window displays details about host operational status, virtual machine protection, and heartbeat datastores. • The Configuration Issues window displays the current vSphere HA operational status, including the specific status and errors for each master and slave host in the cluster. Week#13 vSphere 5.1 & 5.5

  44. Setting vSphere HA Advanced Parameters Advanced Options Set advanced parameters by editing vSphere HA cluster settings. Week#13 vSphere 5.1 & 5.5

  45. Advanced Options to Control Slot Size • Set default (minimum) slot size: • das.vmCpuMinMHz • das.vmMemoryMinMB • Set maximum slot size: • das.slotCpuInMHz • das.slotMemInMB Week#13 vSphere 5.1 & 5.5
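
These options interact with how HA computes the slot: the CPU slot is the largest CPU reservation among powered-on VMs (with a floor from das.vmCpuMinMHz) and the memory slot is the largest memory reservation (floor from das.vmMemoryMinMB), while das.slotCpuInMHz and das.slotMemInMB cap the result so one heavily reserved VM cannot inflate the slot. A sketch with illustrative numbers (memory overhead omitted for simplicity; the 32 MHz / 0 MB floors are assumptions based on the documented defaults):

```python
def slot_size(reservations, min_mhz=32, min_mb=0, max_mhz=None, max_mb=None):
    """Compute (cpu_slot_mhz, mem_slot_mb) from per-VM (cpu, mem) reservations.
    min_* model das.vmCpuMinMHz / das.vmMemoryMinMB; max_* model
    das.slotCpuInMHz / das.slotMemInMB."""
    cpu = max([r[0] for r in reservations] + [min_mhz])
    mem = max([r[1] for r in reservations] + [min_mb])
    if max_mhz is not None:  # das.slotCpuInMHz caps the CPU slot
        cpu = min(cpu, max_mhz)
    if max_mb is not None:   # das.slotMemInMB caps the memory slot
        mem = min(mem, max_mb)
    return cpu, mem

vms = [(500, 1024), (2000, 512), (0, 256)]  # (cpu_mhz, mem_mb) reservations
print(slot_size(vms))                # → (2000, 1024)
print(slot_size(vms, max_mhz=1000))  # cap stops the 2 GHz VM inflating the slot → (1000, 1024)
```

Smaller slots mean more slots per host, so capping slot size directly affects how many host failures the cluster computes it can tolerate.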

  46. Review of Learner Objectives • You should be able to configure a vSphere HA cluster. Week#13 vSphere 5.1 & 5.5

  47. Lesson 3 vSphere HA Architecture Week#13 vSphere 5.1 & 5.5

  48. Learner Objectives • After this lesson, you should be able to do the following: • Describe heartbeat mechanisms used by vSphere HA. • Identify and discuss additional failure scenarios. Week#13 vSphere 5.1 & 5.5

  49. vSphere HA Architecture: Agent Communication [Diagram: one master and two slave ESXi hosts, each running the FDM agent alongside vpxa and hostd; vCenter Server runs vpxd and communicates with the hosts over the management network; the hosts also share heartbeat datastores.] Week#13 vSphere 5.1 & 5.5

  50. vSphere HA Architecture: Network Heartbeats [Diagram: a master host and two slave hosts, each running virtual machines (A through F), exchange network heartbeats over two redundant management networks; vCenter Server connects over the same networks, and the hosts share NAS/NFS and VMFS datastores.] Week#13 vSphere 5.1 & 5.5
