
Understanding the Performance and Management Implications of FICON/FCP Protocol Intermix Mode (PIM)

This paper discusses the benefits and challenges of implementing FICON/FCP Protocol Intermix Mode (PIM) in a storage network, exploring recent developments and best practice recommendations.


Presentation Transcript


  1. Understanding the Performance and Management Implications of FICON/FCP Protocol Intermix Mode (PIM) CMG Canada 14 April 2009 Dr. Steve Guendert Brocade Communications Stephen.guendert@brocade.com

  2. Abstract • FICON/FCP protocol intermix mode (PIM) in a common storage network has been supported by IBM since early 2003 yet has not seen widespread adoption among end users for a variety of reasons. Recent developments such as the new IBM System z10, Node Port Identifier Virtualization (NPIV), virtual fabrics, and advances in storage networking management make PIM a more compelling technological strategy for the end user to enable better utilization of capacity and operational cost savings. Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  3. Introduction-Agenda • PIM Basic concepts • Why intermix? (why not?) • Integrating System z and open systems servers • Integrating System z using z/OS and zLinux • FCP channels on the mainframe • NPIV • Virtual Fabrics • Best Practice Recommendations • Conclusion Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  4. Key References • S. Guendert. Understanding the Performance and Management Implications of FICON/FCP Protocol Intermix Mode (PIM). Proceedings of the 2008 CMG. Dec 2008. • I. Adlung, G. Banzhaf et al. “FCP for the IBM eServer zSeries Systems: Access to Distributed Storage”. IBM Journal of Research and Development. 46 No. 4/5, 487-502 (2002). • American National Standards Institute. “Information Technology - Fibre Channel Framing and Signaling (FC-FS).” ANSI INCITS 373-2003. • G. Banzhaf, R. Friedrich, et al. “Host Based Access Control for zSeries FCP Channels”. z/Journal 3 No. 4, 99-103 (2005). • S. Guendert. “Next Generation Directors, DASD Arrays, & Multi-Service, Multi-Protocol Storage Networks”. z/Journal, February 2005, 26-29. • S. Guendert. “The IBM System z9, FICON/FCP Intermix, and Node Port ID Virtualization (NPIV)”. NASPA Technical Support. July 2006, 13-16. • G. Schulz. Resilient Storage Networks. pp. 78-83. Elsevier Digital Press. Burlington, MA 2004. • S. Kipp, H. Johnson, and S. Guendert. “Consolidation Drives Virtualization in Storage Networks”. z/Journal. December 2006, 40-44. • S. Kipp, H. Johnson, and S. Guendert. “New Virtualization Techniques in Storage Networking: Fibre Channel Improves Utilization and Scalability”. z/Journal, February 2007, 40-46. • J. Srikrishnan, S. Amann, et al. “Sharing FCP Adapters Through Virtualization”. IBM Journal of Research and Development. 51 No. 1/2, 103-117 (2007). Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  5. PIM Basic Concepts

  6. What is FICON/FCP Intermix? • Historically it has meant intermix at the connectivity layer, i.e., on the same directors, switches, and fibre cable infrastructure. • It has not really referred to intermixing mainframe and open systems disk storage on the same array, although this has now changed. • That is a subject unto itself, beyond the scope of this paper/presentation. Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  7. The Fibre Channel Protocol and Architecture • Fibre Channel architecture: an integrated set of rules (FC-0 through FC-4) for serial data transfer between computers, devices, and peripherals, developed by INCITS (ANSI). • FCP and FICON are just part of the upper layer (FC-4) protocols; they are compatible with the existing lower layers in the protocol stack. • FC-4 Protocol mapping layer / upper level protocols (ULPs): FICON (FC-SB-2/3), FCP, HIPPI, multimedia, etc. • FC-3 Common services: login server, name server, alias server. • FC-2 Framing protocol / flow control: data packaging, class of service, port login/logout, flow control; frame transfer with up to a 2048-byte payload. • FC-1 Transmission protocol (encode/decode): 8b/10b data encode/decode over a serial interface (one bit after another). • FC-0 Interface/media: the physical characteristics, i.e. cables, connectors, transmitters, and receivers. • The FC-SB-2 standard is used for single-byte FICON; the FC-SB-3 standard is used for FICON cascading. Dr. Steve Guendert Understanding Protocol Intermix (PIM)
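
To make the layering concrete, here is a small illustrative Python sketch (not part of the original presentation; the names and descriptions are assumptions for illustration) that models the FC levels above and shows that FICON and FCP differ only at FC-4 while riding on the same lower layers:

```python
# Toy model of the Fibre Channel levels described on slide 7.
FC_LEVELS = {
    "FC-4": "Protocol mapping layer / ULPs: FICON (FC-SB-2/3), FCP, HIPPI, multimedia",
    "FC-3": "Common services: login server, name server, alias server",
    "FC-2": "Framing and flow control: class of service, port login/logout, frames up to 2048-byte payload",
    "FC-1": "Transmission protocol: 8b/10b encode/decode, serial interface",
    "FC-0": "Interface/media: cables, connectors, transmitters, receivers",
}

SHARED_LOWER_LEVELS = ["FC-0", "FC-1", "FC-2", "FC-3"]

def stack_for(ulp: str) -> list[str]:
    """Return the full protocol stack a given FC-4 ULP (e.g. 'FICON' or 'FCP') rides on."""
    return SHARED_LOWER_LEVELS + [f"FC-4 ({ulp})"]

if __name__ == "__main__":
    # Both protocols share identical lower levels, which is what makes
    # intermixing them on the same directors and cabling possible.
    print(stack_for("FICON"))
    print(stack_for("FCP"))
```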

  8. Why Intermix? (and why not?)

  9. Why? Reason 1 • ESCON is still out there, but for how long? • May 2008 zJournal survey • 42% of FORTUNE 1000 still have an install base (mainframe storage) that is ESCON attached. • Dec 31, 2004/2009 • Extensive experience with open systems fibre channel SANs. • Use for testing Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  10. Why? –Reason 2 • Non-production environments that are specialized • Require more flexibility • Require resource sharing • Examples: • Quality Assurance • Test/development • Dedicated DR facility Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  11. Why? Reason 3 • What if we could merge both networks? • The Hardware is the same • We can use: • Common Infrastructure • Common Equipment • Common Management • Common IT Staff • Lower Total Cost of Ownership Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  12. Why? Reason 4: System z10 • IBM is encouraging System z10 customers to consolidate open systems servers onto their z10 via zLinux. • IBM Project Big Green. • The z10 and zLinux with NPIV make a compelling case for PIM. Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  13. Why not PIM? The two party system • Politics enters into everything • Mainframe vs. open systems • Clash of cultures • Large companies-tend to keep everything separate • Others-may have open systems storage and mainframe storage under same management Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  14. Open Systems and Mainframe Culture Clash • Open Systems –EASE of DEPLOYMENT is king: • Its history has been built on how fast / can I reboot! • Plans are made for regular scheduled outages • The Systems Administrator typically is not very concerned with how frames are routed • Solution has to work but predictability of performance is not a mantra Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  15. Open Systems and Mainframe Culture Clash • Mainframe – In this world PREDICTABILITY is king: • NEVER want to suffer an unscheduled outage • MINIMIZE or eliminate scheduled outages • The Systems Programmer will control EVERYTHING! • Including frame routing • Wants predictability and stability when a workload is moved from one set of resources to another – and to measure what’s currently going on • Probably won’t make much use of other FC layers to route frames anytime soon, because of fear of losing predictability (RMF, SMF, etc.) • Needs to be able to influence ‘Network Connectivity’, so ISL usage is a big concern to these professionals Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  16. Examples of end user implementations of PIM • Small FICON and open systems environments using a common storage network. • z/OS servers accessing remote FICON storage via FICON cascading. • Linux on zSeries running z/OS to access local storage. • Hardware based remote DASD mirroring between sites using FCP as transport. • Open systems servers accessing storage on a FICON director using FCP • Linux on the zSeries using FCP to access storage. Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  17. Integrating System z hosts and open systems servers

  18. Considerations for Mixing FICON and FCP • Because both FICON and open systems Fibre Channel Protocol (FCP) are FC-4 protocols, the differences are not relevant until the user wants to control the scope of the switching through zoning or connectivity control. • For example, Name Server zoning used by FCP devices provides fabric-wide connection control, while PDCM (Prohibit Dynamic Connectivity Mask) connectivity control used by FICON devices provides switch-wide control. Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  19. Mainframe-Definition Oriented • Definition-oriented, address centric, host assigned • Planning is everything • Change control • If all the elements of the link have not been defined in IOCP, the connection simply does not exist. Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  20. Open Systems-Discovery Oriented • Discovery-oriented, fabric assigned, name-centric • Use of Fibre Channel name server to determine device communication • No pre-definition (IOCP) needed for open operating systems • OS “walks through” addresses and looks for devices • Use of zoning, and different levels of binding for security • Fabric binding • Switch binding • Port binding Dr. Steve Guendert Understanding Protocol Intermix (PIM)
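
A hypothetical Python sketch (function names, data structures, and values are illustrative assumptions, not a real Fibre Channel or z/OS API) contrasting the discovery-oriented open systems model on slide 20 with the definition-oriented mainframe model on slide 19:

```python
# Open systems: the host queries the fabric name server and sees whatever
# zoning allows; nothing has to be predefined on the host.
def discover_open_systems(name_server: dict[str, str], zone_members: set[str]) -> list[str]:
    """Keep only the WWPNs that the active zone set lets this host see."""
    return [wwpn for wwpn in name_server if wwpn in zone_members]

# Mainframe: if the CHPID/link pair was not defined in IOCP,
# the connection simply does not exist.
def mainframe_path_exists(iocp_defined_paths: set[tuple[str, str]], chpid: str, link: str) -> bool:
    return (chpid, link) in iocp_defined_paths

name_server = {"10:00:00:05:1e:aa:bb:01": "disk", "10:00:00:05:1e:aa:bb:02": "tape"}
zone = {"10:00:00:05:1e:aa:bb:01"}
print(discover_open_systems(name_server, zone))                 # only the zoned target is visible
print(mainframe_path_exists({("50", "6504")}, "50", "6504"))    # True: defined in IOCP
print(mainframe_path_exists({("50", "6504")}, "50", "6505"))    # False: never defined
```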

  21. Mixing FICON AND FCP: 4 Factors to Consider • Switch management: determine how the switch is managed. • Management limitations: determine the limitations and interactions of the management techniques used for each protocol type. • Address difference: The next step is to understand the implications of port addressing in FICON versus port numbering in FCP. FICON, like ESCON, abstracts the concept of the port by creating an object known as the port address. This concept is foreign to FCP. • Zoning. Consider whether to keep FICON in one zone and FCP in another zone. Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  22. Mixing FICON and FCP: Factors to Consider (continued) • Once these four steps are completed, the user is ready to create an intermix environment based on the SAN requirements. • The key decisions are: • Determining the access needs for the fabric. • Determining the scope of FICON support required. • Determining what devices require an intermix of FICON and FCP. Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  23. Zoning and PDCM Considerations • The FICON Prohibit Dynamic Connectivity Mask (PDCM) controls whether communication between a pair of ports in the switch is prohibited or allowed. If there are any differences between the restrictions set up with zoning and PDCM, the most restrictive rules are automatically applied. Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  24. PDCM • The FICON Prohibit Dynamic Connectivity Mask (PDCM) controls whether communication between a pair of ports in the switch is prohibited or allowed • Block versus Prohibit • Blocking causes the firmware to send a continuous “offline” sequence to the port • Useful to report the link as inactive after varying a device off on the mainframe • Prohibit causes the firmware to “prevent” connectivity between the ports • Useful to force FICON traffic over specific ISLs Dr. Steve Guendert Understanding Protocol Intermix (PIM)
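
A minimal Python sketch of the "most restrictive rule wins" behavior from slides 23 and 24 (illustrative only, not switch firmware; the zone and PDCM data structures are assumptions): a port pair communicates only if zoning allows it and neither port's prohibit mask forbids it:

```python
def zoning_allows(zones: list[set[str]], a: str, b: str) -> bool:
    """FCP-style zoning: the two ports must share at least one zone."""
    return any(a in zone and b in zone for zone in zones)

def pdcm_allows(prohibit_mask: dict[str, set[str]], a: str, b: str) -> bool:
    """FICON PDCM: the pair must not appear in either port's prohibit mask."""
    return b not in prohibit_mask.get(a, set()) and a not in prohibit_mask.get(b, set())

def effective_allow(zones: list[set[str]], prohibit_mask: dict[str, set[str]], a: str, b: str) -> bool:
    # The most restrictive rule is applied automatically: any prohibit blocks traffic.
    return zoning_allows(zones, a, b) and pdcm_allows(prohibit_mask, a, b)

# Ports "04" and "1A" share a zone, but PDCM prohibits the pair,
# so the effective result is "prohibited".
zones = [{"04", "1A", "2C"}]
prohibit_mask = {"04": {"1A"}}
print(effective_allow(zones, prohibit_mask, "04", "1A"))   # False
print(effective_allow(zones, prohibit_mask, "04", "2C"))   # True
```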

  25. PDCM (continued) • Screenshot: in the management interface, choose the ports to block or prohibit and then activate to save the changes. Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  26. FCP channels on the mainframe

  27. FICON and FCP Mode • A FICON channel in Fibre Channel Protocol mode (which is CHPID type FCP) can access FCP devices: • From a FICON channel in FCP mode through a single Fibre Channel switch or multiple switches to a SCSI device • The FCP support enables z/VM, z/VSE, and Linux on System z to access industry-standard SCSI devices. For disk applications, these FCP storage devices use Fixed Block (512-byte) sectors instead of Extended Count Key Data (ECKD) format. • FICON Express4, FICON Express2, and FICON Express channels in FCP mode provide full fabric attachment of SCSI devices to the operating system images, using the Fibre Channel Protocol, and provide point-to-point attachment of SCSI devices. Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  28. FICON and FCP Mode (Continued) • The FCP channel full fabric support enables switches and directors to be supported between the System z server and the SCSI device, which means many “hops” through a storage area network (SAN) are allowed. • FICON channels in FCP mode use the Queued Direct Input/Output (QDIO) architecture for communication with the operating system. • HCD/IOCP is used to define the FCP channel type and QDIO data devices. Because of QDIO, there is no requirement to define the Fibre Channel storage controllers and devices, or the Fibre Channel interconnect devices such as switches and directors, in IOCP. Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  29. Integrating System z using z/OS, zLinux and Node Port ID Virtualization (NPIV)

  30. Linux on System z • Linux on System z is ten years old in 2009 • Virtualization is a key component to address IT’s requirement to control costs yet meet business needs with flexible systems • System z Integrated Facility for Linux (IFL) leverages existing assets and is dedicated to running Linux workloads while containing software costs • Linux on System z allows you to leverage your highly available, reliable and scalable infrastructure along with all of the powerful mainframe capabilities • Your Linux administrators now simply administer Linux on a “Big Server” Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  31. zSeries/System z server virtualization • zSeries/System z support of zLinux • Mainframe expanded to address open system applications • Linux promoted as alternative to Unix • Mainframe operating system virtualization benefits • Availability, serviceability, scalability, flexibility • Initial zSeries limits • FCP requests are serialized by the operating system • FCP header does not provide image address • FICON SB2 header provides additional addressing • Channel ports are underutilized • Resulting cost/performance benefit is not competitive Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  32. The road to NPIV- • LUN Access Control • Gives end user ability to define individual access rights to a particular device or storage controller • Can significantly reduce the number of FCP channels needed to provide controlled access to data on FCP SCSI devices. • Not the ideal solution: did not solve the 1:1 ratio • Alternatives looked at by IBM included: • FC process associators • Hunt groups/multicasting • Emulating sub fabrics • Finally settled on NPIV Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  33. A Simplified Schematic: Linux using FCP on a System z10 without NPIV • Diagram: each Linux image (A through D) in the Linux partition on the System z10 needs its own FCP channel to a line card port on the B48000 or DCX chassis (200 - 800 MBps per port). • One FCP channel per Linux image means probably very little I/O bandwidth utilization. • No parallelism, so it is very difficult to drive I/O for lots of Linux images. Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  34. Server Consolidation: NPIV • N_Port Identifier Virtualization (NPIV) • Mainframe world: unique to System z9 and System z10 • zLinux on System z9/10 in an LPAR • Guest of z/VM v4.4, 5.1 and later • The N_Port becomes virtualized • Supports multiple images behind a single N_Port • The N_Port requests more than one FCID • FLOGI (fabric login) provides the first Fibre Channel address • FDISC (fabric discovery) provides additional addresses • All FCIDs are associated with one physical port Dr. Steve Guendert Understanding Protocol Intermix (PIM)
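
An illustrative Python sketch of that login flow (the classes and the fabric's address assignment are assumptions for illustration, not a real HBA or z/VM interface): one FLOGI for the physical channel, then one FDISC per additional zLinux image, with all resulting FC_IDs tied to the same physical port:

```python
class ToyFabric:
    """Hands out 24-bit addresses on one switch port; purely illustrative."""
    def __init__(self, domain: int = 0x65, area: int = 0x04):
        self.base = (domain << 16) | (area << 8)
        self.next_low_byte = 0

    def assign_fcid(self) -> int:
        fcid = self.base | self.next_low_byte
        self.next_low_byte += 1
        return fcid

class VirtualizedNPort:
    def __init__(self, fabric: ToyFabric):
        self.fabric = fabric
        self.fcids: list[int] = []

    def flogi(self) -> int:
        """Fabric login: acquires the first N_Port ID for the physical channel."""
        self.fcids.append(self.fabric.assign_fcid())
        return self.fcids[-1]

    def fdisc(self) -> int:
        """Fabric discovery: acquires an additional N_Port ID for another image."""
        assert self.fcids, "FLOGI must complete before FDISC"
        self.fcids.append(self.fabric.assign_fcid())
        return self.fcids[-1]

port = VirtualizedNPort(ToyFabric())
port.flogi()                 # first address, for the FCP channel itself
for _ in range(3):           # one additional address per zLinux image
    port.fdisc()
print([hex(f) for f in port.fcids])   # four FC_IDs behind one physical N_Port
```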

  35. System z N_Port ID Virtualization • FICON Express2 and FICON Express4 adapters now support NPIV. • Diagram: FC-FS 24-bit fabric addressing, the Destination ID (D_ID), consists of three 1-byte fields: Domain (identifies the switch number; up to 239 switch numbers, 00 - FF), Area (identifies the switch port; up to 240 ports per domain), and AL_PA (assigned during LIP; low AL_PA means high priority). • In the FICON view the same three bytes map to the switch domain, the port address, and the virtual address / CU link address. Dr. Steve Guendert Understanding Protocol Intermix (PIM)
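
A small worked example of that 24-bit D_ID layout in Python (the specific domain, port, and AL_PA values are made up for illustration):

```python
def pack_did(domain: int, area: int, al_pa: int) -> int:
    """Assemble a 24-bit Destination ID from its three 1-byte fields."""
    assert all(0 <= b <= 0xFF for b in (domain, area, al_pa))
    return (domain << 16) | (area << 8) | al_pa

def unpack_did(did: int) -> tuple[int, int, int]:
    """Split a 24-bit D_ID back into domain (switch), area (port), and AL_PA."""
    return (did >> 16) & 0xFF, (did >> 8) & 0xFF, did & 0xFF

did = pack_did(domain=0x65, area=0x1A, al_pa=0x00)   # switch 0x65, port 0x1A
print(hex(did))          # 0x651a00
print(unpack_did(did))   # (101, 26, 0)
```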

  36. A Simplified Schematic: Linux using FCP on a System z10 with NPIV • Diagram: one FCP channel is now shared by many Linux images (A through D) on the System z10, connecting through a line card port on the B48000 or DCX chassis. • Much better I/O bandwidth utilization per path. • Lots of parallelism. Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  37. NPIV Summary • NPIV allows multiple zLinux “servers” to share a single Fibre Channel port • Maximizes asset utilization • The open systems server rule of thumb (ROT) is about 10 MB/second • A 4 Gbps link should therefore support roughly 40 zLinux servers from a bandwidth perspective • NPIV is an industry standard Dr. Steve Guendert Understanding Protocol Intermix (PIM)
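
The back-of-envelope arithmetic behind that rule of thumb, as a short Python sketch (the 1 Gbps to roughly 100 MB/s conversion is the usual Fibre Channel approximation and is stated here as an assumption):

```python
LINK_GBPS = 4
USABLE_MB_PER_S = LINK_GBPS * 100      # ~100 MB/s of payload per 1 Gbps of FC line rate
PER_SERVER_MB_PER_S = 10               # rule-of-thumb open systems server throughput

servers_per_link = USABLE_MB_PER_S // PER_SERVER_MB_PER_S
print(servers_per_link)                # 40 zLinux images per 4 Gbps channel, bandwidth-wise
```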

  38. Virtual Fabrics An example scenario to explain the technology

  39. Data Center Fabric Consolidation: Motivating Factors • Unorganized SAN growth • Organic growth of SANs is creating large physical SAN infrastructures • The need to merge data centers produces larger SANs • Acquisition of data centers forces SAN expansion • Controlling the growth motivates virtualization • Simplified management • Local administration • Access to centralized services Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  40. Data Center Network: Independent Cascaded Fabrics • Diagram: Site A and Site B each have open systems servers, open systems storage, System z, DASD, and tape. Three independent cascaded fabrics span the sites: Fabric #1 (switches 11/21), Fabric #2 (switches 12/22), and a backup fabric (switches 13/23). Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  41. Before Consolidation: Port Count and Utilization Rate • Table of port counts and utilization rates before consolidation (not reproduced in this transcript). Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  42. Consolidated Servers: Merge Open System Servers onto zSeries • Diagram: at each site the open systems applications now run alongside mainframe applications on System z; Fabric #1 (11/21), Fabric #2 (12/22), and the backup fabric (13/23) connect the System z hosts to the open systems storage, DASD, and tape. Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  43. Server Consolidation: N_Port Identifier Virtualization • Diagram: a single System z10 hosts the mainframe applications and the consolidated OS applications, attaching to Fabric #1, Fabric #2, and the backup fabric. Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  44. Fabric Consolidation: Technology • Virtual fabric configuration • Logical fabrics and logical switches • Utilizes frame tagging to create virtual fabrics and virtual links • Diagram: each logical switch presents its own CUP. Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  45. Virtualized Network: Next-Generation Logical Fabrics • Diagram: on the System z10, the mainframe applications attach to Virtual Switch 1 in Logical Fabric #1, the OS applications attach to Virtual Switch 2 in Logical Fabric #2, and a backup virtual switch forms the logical backup fabric. Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  46. Consolidated Network: Logical Fabrics • Diagram: Site A and Site B each run mainframe and OS applications on System z; Logical Fabric 1 (switches 11/21), Logical Fabric 2 (switches 12/22), and the logical backup fabric (switches 13/23) connect them to the open systems storage, DASD, and tape. Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  47. Link Consolidation: Technology • Virtual Fabric Identifier (VFID) • The fabric is virtualized • Supports multiple common domains on the same switch • Additional addressing identifies the virtual fabric • Supports shared fabric traffic on a single link Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  48. Expanded Fibre Channel Addressing • Diagram: frame layout - Start of Frame; Virtual Fabric Tagging header with a 12-bit VF_ID (4,096 virtual fabric identifiers); FC-IFR Inter-Fabric Routing header with a 12-bit source F_ID and destination F_ID (4,096 fabric identifiers); encapsulation header, identical to the FC header; Fibre Channel header with a 3-byte D_ID; data field; End of Frame. Dr. Steve Guendert Understanding Protocol Intermix (PIM)
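
An illustrative Python sketch of that expanded addressing (the field layout is simplified and the values are made up; real VFT/FC-IFR headers carry additional fields): a 12-bit VF_ID lets 4,096 virtual fabrics share one physical ISL, while the routing header carries 12-bit source and destination fabric IDs alongside the ordinary 24-bit D_ID:

```python
from dataclasses import dataclass

@dataclass
class TaggedFrame:
    vf_id: int      # 12-bit virtual fabric identifier (0..4095) from the tagging header
    src_fid: int    # 12-bit source fabric ID in the inter-fabric routing header
    dst_fid: int    # 12-bit destination fabric ID
    d_id: int       # ordinary 24-bit Fibre Channel destination ID
    payload: bytes

    def __post_init__(self):
        for value, width in ((self.vf_id, 12), (self.src_fid, 12),
                             (self.dst_fid, 12), (self.d_id, 24)):
            assert 0 <= value < (1 << width), "field exceeds its bit width"

# One long-distance tagging ISL can carry frames for several logical fabrics at once;
# the tagging logic uses vf_id to keep them separate.
frames = [
    TaggedFrame(vf_id=1, src_fid=1, dst_fid=1, d_id=0x651A00, payload=b"FICON"),
    TaggedFrame(vf_id=2, src_fid=2, dst_fid=2, d_id=0x220400, payload=b"FCP"),
]
print(sorted({f.vf_id for f in frames}))   # [1, 2]
```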

  49. Virtual Fabric Tagging • Diagram: Site A in Denver and Site B in Englewood each host several virtual switches (Virtual Switches 1, 2, and Backup at Site A; Virtual Switches 3, 4, and B2 at Site B). Tagging logic in front of the physical ports carries all of them over long-distance ISLs with virtual fabric tagging. Dr. Steve Guendert Understanding Protocol Intermix (PIM)

  50. Consolidated Data Center Network: Tagging ISLs • Diagram: Logical Fabric #1 (switches 11/21), Logical Fabric #2 (12/22), and the logical backup fabric (13/23) span Site A and Site B over a single tagging ISL between the two sites' tagging logic (XISL). Dr. Steve Guendert Understanding Protocol Intermix (PIM)
