Introducing AutoCache 2.0 December 2013

Presentation Transcript


  1. Introducing AutoCache 2.0 December 2013

  2. Company Profile
  • Team
    • Rory Bolt, CEO - NetApp, EMC, Avamar, Quantum
    • Clay Mayers, Chief Scientist - Kofax, EMC
    • Rich Pappas, VP Sales/Bus-Dev - DDN, Storwize, Emulex, Sierra Logic
  • Vision
    • I/O intelligence in the hypervisor is a universal need
    • Near-term value is in making use of flash in virtualized servers
    • Remove I/O bottlenecks to increase VM density, efficiency, and performance
    • Must have no impact on IT operations; no risk to deploy
    • A modest amount of flash in the right place makes a big difference
  • Product
    • AutoCache™ hypervisor-based caching software for virtualized servers

  3. Solution: AutoCache for VMware ESXi
  • I/O caching software that plugs in to ESXi in seconds
  • Inspects all I/O
  • Uses a PCIe flash card or SSD to store hot I/O
  • Read cache with write-through and write-around (the two policies are sketched below)
  • Transparent to VMs
    • No guest OS agents
  • Transparent to storage infrastructure
  [Diagram: ESXi host with flash device and AutoCache; CPU utilization chart]
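
For concreteness, here is a minimal sketch of the two write policies named above, with a plain dictionary standing in for the flash device; the class and method names are illustrative, not AutoCache's actual implementation:

    class ReadCache:
        def __init__(self, policy="write-around"):
            self.policy = policy
            self.store = {}  # block_id -> data held on the flash device

        def read(self, block_id, backend):
            if block_id in self.store:          # cache hit: served from flash
                return self.store[block_id]
            data = backend.read(block_id)       # cache miss: fetch from the datastore
            self.store[block_id] = data         # promote the hot block into the cache
            return data

        def write(self, block_id, data, backend):
            backend.write(block_id, data)       # the write always lands on stable storage
            if self.policy == "write-through":
                self.store[block_id] = data     # keep the cached copy current
            else:                               # write-around: bypass flash to save wear;
                self.store.pop(block_id, None)  # just drop any stale copy

Write-around trades a possible miss on the next read for avoiding a flash write on data that may never be read back, which is the trade-off the later slides in this deck examine.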

  4. AutoCache Results on VMware ESXi
  • Up to 2-3X VM density improvement
  • Business-critical apps accelerated
  • Transparent to ESXi value-adds like vMotion, DRS, etc.
  • Converts a modest flash investment into huge value
  [Diagram: ESXi host with flash device and AutoCache; CPU utilization chart]

  5. Simple to deploy
  • Buy a flash device, download PD software
  • Single 'vib' to install AutoCache
  • Install the flash device
    • Turn off the server to install a PCIe card, power on
    • -or- partition an SSD
  • Global cache relieves the I/O bottleneck in minutes
  • All VMs accelerated regardless of OS, without the use of agents
  • Reporting engine displays the results over time
  [Chart: performance over time, marking the point where Proximal Data is turned on]

  6. AutoCache in vCenter

  7. Uniquely Designed for Cloud Service Providers
  • Broad support
    • Any backend datastore
    • Effectively any flash device, plus Hyper-V soon
  • Adaptive caching optimizes over time
    • Takes the “shared” environment into consideration
    • Latencies and cache access affect other guests
  • Easy retrofit
    • No reboot required
  • Pre-warm to maintain performance SLA on vMotion
  • Role-Based Administration

  8. PD vMotion Innovation
  1. AutoCache detects the vMotion request
  [Diagram: Source Host and Target Host, each running VMware ESXi with a flash cache, over shared storage]

  9. PD vMotion Innovation (continued)
  2. Pre-warm: VM metadata is sent to the target host to fill its cache in parallel from shared storage
  Key benefit: minimized time to accelerate the moved VM on the target host
  [Diagram: metadata flowing from source host to target host; target cache filling from shared storage]

  10. PD vMotion Innovation (continued)
  3. Upon the vMotion action, AutoCache atomically and instantly invalidates the VM metadata on the source host
  Key benefits: eliminates the chance of cache coherency issues and frees up source host cache resources (the full flow is sketched below)
  [Diagram: source host cache invalidated; target host now serves the moved VM]
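
The three steps above can be summarized in pseudocode. This is a hedged sketch of the ordering the slides describe; the function, message, and helper names are hypothetical, not Proximal Data's actual protocol:

    def on_vmotion_detected(vm, source_cache, target_host):
        # 1. The source detects the vMotion request and ships only metadata:
        #    the IDs of the VM's hot blocks, not the cached data itself.
        hot_blocks = source_cache.hot_block_ids(vm.id)
        target_host.send("prewarm", vm_id=vm.id, blocks=hot_blocks)

    def on_prewarm(target_cache, shared_storage, vm_id, blocks):
        # 2. The target fills its cache in parallel from shared storage while
        #    the vMotion is still in flight, so the moved VM is accelerated
        #    almost immediately on arrival.
        for block_id in blocks:
            target_cache.insert(block_id, shared_storage.read(block_id))

    def on_vmotion_complete(vm, source_cache):
        # 3. The source atomically drops the VM's metadata: no coherency
        #    window, and the flash capacity is freed for the remaining guests.
        source_cache.invalidate_vm(vm.id)

Because only metadata crosses the network and the data itself is refetched from shared storage, the source cache never has to stay coherent with the target, which is why the invalidation in step 3 can be a simple atomic drop.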

  11. Role-Based Administration
  • Creates specific access rights for the AutoCache vCenter plug-in
  • Enables the customer to modify:
    • Host-level cache settings
    • VM cache settings
  • AutoCache retains statistics for a month
  [Diagram: CSP infrastructure hosting Customer A and Customer B on VMware ESXi; CPU utilization chart]

  12. RBA in practice
  • CSP creates a vCenter account for the customer
  • With RBA, the CSP can now also grant AutoCache rights to customer accounts, allowing the customer to control caching for their VMs
  • Enables varying degrees of rights for the customer (sketched below)
    • One user at the customer might see all VMs
    • Another might see a subset of VMs
    • Yet another might see some VMs, but only have rights to certain aspects
      • Say, able to turn the cache on/off, but unable to affect caching on a device
  • Usage statistics are available for the last month, and may be exported for billing purposes
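
As an illustration of the tiered rights described above, a minimal permission table might look like the following; the role and permission names are invented for the example and are not AutoCache's API:

    RIGHTS = {
        "csp_admin":     {"view_all_vms", "host_cache_settings", "vm_cache_settings"},
        "customer_ops":  {"view_own_vms", "vm_cache_settings"},  # may toggle caching per VM
        "customer_view": {"view_own_vms"},                       # statistics only
    }

    def can(role, action):
        return action in RIGHTS.get(role, set())

    assert can("customer_ops", "vm_cache_settings")
    assert not can("customer_ops", "host_cache_settings")  # device level stays with the CSP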

  13. Pricing and Availability
  • AutoCache 2.0 is available now from resellers
    • CMT, Sanity Solutions, Champion, CDW, AllSystems, BuyOnline, Pact Informatique, Commtech, etc.
  • Direct reps support the channel in target US markets
  • OEM partnerships coming in September
  • Support for:
    • ESXi 4.1, 5.0, 5.1, and 5.5 (when available); Hyper-V in 2013
    • PCIe cards from LSI, Micron, Intel, and server vendors
    • SSDs from Micron, Intel, and server vendors
  • AutoCache suggested retail price: starts at $1,000 per host for cache sizes under 500 GB

  14. Summary: The Proximal Data Difference
  • Innovative I/O caching solution, specifically designed for virtualized servers and FLASH
  • Dramatically improved VM density and performance
  • Fully integrated with VMware utilities and features
  • Transparent to IT operations
  • Simple to deploy
  • Low risk
  • Cost effective

  15. The simplest, most cost-effective use of Enterprise Flash. Thank you!

  16. Outline
  • Brief Proximal Data Overview
  • Introduction to FLASH
  • Introduction to Caching
  • Considerations for Caching with FLASH in a Hypervisor
  • Conclusions

  17. Considerations for Caching with FLASH in a Hypervisor
  • Most caching algorithms were developed for RAM caches
    • They take no account of device asymmetry
    • Placing data in a read cache that is never read again harms both performance and device lifespan
  • Hypervisors have very dynamic I/O patterns
    • vMotion affects I/O load and also raises coherency issues
    • Adaptive algorithms are very beneficial (one common admission heuristic is sketched below)
  • Must consider the “shared” environment
    • Latencies and cache access affect other guests
    • Quotas/allocations may have unexpected side effects
  • Hypervisors are I/O blenders
    • The individual I/O patterns of guests are aggregated; devices see a blended average
  • Write-around provides the best performance/wear trade-off
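
One common flash-aware admission heuristic, shown purely as an example of the adaptive behavior the slide argues for (the deck does not say AutoCache uses this exact scheme), is to promote a block only on its second access, so one-touch blocks from scans never consume write cycles on the device:

    from collections import OrderedDict

    class SecondChanceAdmission:
        def __init__(self, ghost_size=4096):
            self.ghost = OrderedDict()   # block IDs seen once; no data is stored
            self.ghost_size = ghost_size

        def should_admit(self, block_id):
            if block_id in self.ghost:   # second touch: now worth a flash write
                del self.ghost[block_id]
                return True
            self.ghost[block_id] = None  # first touch: remember it, don't cache it
            if len(self.ghost) > self.ghost_size:
                self.ghost.popitem(last=False)  # forget the oldest one-touch block
            return False

A RAM-oriented policy like plain LRU would admit every miss, which is harmless for DRAM but burns flash endurance on data that is never read back.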

  18. Write-Around Cache: Cost-Benefit Analysis

  19. Complications of Write-Back Caching
  • Writes from VMs fill the cache
    • Cache wear is increased
    • The cache ultimately flushes to disk
  • The cache withstands write bursts
    • The cache overruns when disk flushes can't keep up (a back-of-the-envelope example follows)
    • If you are truly write bound, a cache will not help
  • A write-back cache handles write bursts and benchmarks well, but it is not a panacea
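
A back-of-the-envelope calculation makes the overrun point concrete; the numbers are invented for illustration:

    ingest_mb_s = 500    # sustained write rate from the VMs (assumed)
    drain_mb_s  = 200    # rate at which the disk can absorb flushes (assumed)
    cache_gb    = 100    # flash set aside for write-back (assumed)

    net_fill_mb_s = ingest_mb_s - drain_mb_s          # cache grows at 300 MB/s
    seconds_to_overrun = cache_gb * 1024 / net_fill_mb_s
    print(f"cache overruns in {seconds_to_overrun / 60:.1f} minutes")  # ~5.7

Once the cache is full, sustained throughput collapses to the disk's drain rate, which is the slide's point: write-back only buys time against bursts, not against a workload that is write bound on average.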

  20. Complications of Write-Back Caching (continued)
  1. Write I/O is mirrored on the destination, over a new, dedicated I/O channel for write-back cache sync
  2. The write is acknowledged by the mirrored host
  • In either case, network latency limits performance
  [Diagram: Source Host and Mirrored Host (VMware ESXi) over shared storage with a performance tier; existing HA storage infrastructure]

  21. Disk Coherency…
  • Cache flushes MUST preserve write ordering to preserve disk coherency (see the sketch below)
  • A hardware copy must flush the cache first
  • Hardware snapshots do not reflect the current system state without a cache flush
  • Consistency groups must now take the write-back cache state into account
  • How is backup affected?
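
A minimal sketch of the ordering constraint, assuming a FIFO of dirty blocks; this is illustrative only (and not AutoCache's design, which the earlier slides note is write-around rather than write-back):

    from collections import deque

    class OrderedWriteBack:
        def __init__(self):
            self.dirty = deque()                  # FIFO preserves guest write order

        def write(self, block_id, data):
            self.dirty.append((block_id, data))   # acknowledged once it is in flash

        def flush(self, backend):
            # A hardware copy or snapshot taken before this flush completes
            # does not reflect the current system state; that is the slide's point.
            while self.dirty:
                block_id, data = self.dirty.popleft()
                backend.write(block_id, data)     # strictly in original write order

If flushes were reordered, a crash or snapshot mid-flush could expose a disk state the application never produced, breaking crash consistency for databases and journaling filesystems.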

  22. Hypervisor Write-Back Cache: Cost-Benefit Analysis

  23. Outline
  • Brief Proximal Data Overview
  • Introduction to FLASH
  • Introduction to Caching
  • Considerations for Caching with FLASH in a Hypervisor
  • Conclusions

  24. Evaluating Caching
  • Results are entirely workload dependent
  • Benchmarks are good for characterizing devices
    • It is VERY hard to simulate production with benchmarks
  • Run your real workloads for meaningful results
  • Run your real storage configuration for meaningful results
  • Steady state is different from initialization (a simple windowed metric is sketched below)
    • Large caches can take days to fill
  • Beware caching claims of 100s or 1000s of times improvement
    • It is possible, just not probable
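
A simple way to separate warm-up from steady state is to track the hit ratio in fixed-size windows and only trust the numbers once consecutive windows flatten out; this helper is illustrative:

    def windowed_hit_ratio(events, window=10_000):
        """events: iterable of booleans (True = cache hit) in arrival order."""
        hits = total = 0
        for hit in events:
            hits += hit
            total += 1
            if total == window:
                yield hits / total  # one point on the warm-up curve
                hits = total = 0    # steady state = consecutive stable values

On a large cache the early windows will show low hit ratios for hours or days while the cache fills, which is why a short benchmark run understates (or, if it loops over a small working set, wildly overstates) production behavior.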

  25. FLASH Caching in Perspective
  • Flash will be pervasive in the enterprise
    • Ten years in the making, but deployment is just beginning now
  • Choose the right amount in the right location:
    • A modest flash capacity in the host as a read cache: the best price/performance and the lowest risk/impact
    • More flash capacity in the host as a write-back cache can help for specific workloads, but at substantial cost, complexity, and operational impact
    • A large-scale, centralized write-back flash cache in arrays that leverages existing HA infrastructure and operations: highest cost, highest performance, medium complexity, low impact to IT
