
Indiana University Global NOC Service Desk


Presentation Transcript


  1. Indiana University Global NOC Service Desk. Visit of NICT/JGN2 and WIDE to IU, 7 January 2005. Doug Pearson

  2. Global NOC work is sponsored in part by: • National Science Foundation, • Internet2, • National LambdaRail, • State of Indiana, • Indiana GigaPoP Participants, and • Indiana University

  3. Service Desk: Doug Pearson, Steve Peck • Engineering: Dave Jent

  4. Service Desk • The Global NOC Service Desk • Performs front-line network management • Provides customer interaction and service • Provides service interface to engineering • Staffed by network technicians • Staff are network knowledgeable; they understand network topology, cause & effect, and management processes • Operates 24 x 7 x 365 • Fail-safe operations • Single point of contact for network management • Leveraged resources for cost-effective operations and management

  5. Global NOC Managed Networks • Indiana University and Indiana Initiatives • Indianapolis (IUPUI) and Bloomington (IUB) campus networks • IU state network (six regional campuses plus extension centers) • Indiana GigaPoP • I-Light • IP Grid • External networks • Internet2 Abilene • TransPAC • StarLight • MAN LAN • AMPATH • National LambdaRail

  6. Global NOC Initiatives • External initiatives • iGOC • REN-ISAC

  7. Service Desk Functions • Communications and coordination • Problem management • Change coordination • Notifications • Service provider interaction • Security • Monitoring of availability and performance • Resource allocation • Documentation • Reporting

  8. Service Desk Staff • Staff of 16 • Senior Manager (Doug Pearson) • Operations Manager (Steve Peck) • 2 Shift Supervisors (Camille Davis-Alfs, Stacy Bengochea) • 1 Lead Technical Analyst (John Wilson) • 1 Grid Operations Specialist (Rob Quick) • 10 Technicians (mostly full-time employees + a few hourly) • Technician schedule: • work 12-hour shifts, 7:00 to 7:00 • rotate 3 and 4 work shifts per week

  9. Global NOC Activity

  10. IUB and IUPUI Networks • ~ 37k connections at IUB, ~26k connections at IUPUI • ~ 200 buildings at IUB, ~75 buildings at IUPUI • ~ 300-400 VLANs at each campus • > 30 Gbps of routing capacity at IUB, > 20 Gbps at IUPUI • > 6 Gbps of total capacity to external networks (e.g., Internet, Abilene)

  11. IUB and IUPUI Core Networks

  12. IUB and IUPUI Core Networks • Inside buildings... • HP switches • layer-2 forwarding only • tree-like design (i.e., no redundancy) • Buildings to Core... • 1 or 2 GigE connections into the Core Switches (e.g., CS1, CS2) • these links are 802.1Q trunks and can carry several VLANs • 1 VLAN per building or 1 VLAN per floor (in general) • Core Switching and Routing • Separate equipment for switching and routing • Core Switches are Cisco 6500s doing only layer-2 forwarding • Core Switches interconnected via 10 Gigabit Ethernet • 10 Gigabit links between Core Switches and Core Routers

  13. IUB and IUPUI Core Networks • Different Core Routers for different services • CRs provide IPv4 Unicast routing • ASRs provide IPv6 Unicast and IPv4 Multicast • MRs route management traffic • Backbone Switches (e.g., BS1, BS2) provide the interconnect between campuses • Cisco 6500s, layer-2 forwarding only • 10 Gigabit primary with 1 Gigabit backup

  14. IUB and IUPUI Core Networks • Border Routers (i.e., iBR and bBR) provide all external connectivity • iBR is primary for IPv4 Unicast with bBR as backup • bBR is primary for IPv6 Unicast and IPv4 Multicast with iBR as backup • 4 Gbps of capacity to the GigaPoP • 2 Gbps of capacity to local ISPs

  15. IU State Network • 2 main campuses (IUB and IUPUI) • 6 regional campuses • 3 extension centers • Dark fiber between IUB and IUPUI • DS3s to regional campuses

  16. IU State Network

  17. I-Light is an optical fiber network connecting IU Bloomington, IUPUI in Indianapolis, and Purdue University in West Lafayette to each other. I-Light connects all three campuses to the national Internet infrastructures, including Internet2. • IU has access to 8 strands of TrueWave fiber (supports DWDM) and 24 strands of single-mode fiber.

  18. IP-grid is an NSF-funded collaboration of IU and Purdue universities that connects university research resources to the TeraGrid. TeraGrid is a U.S. national project to build the world's largest, most comprehensive grid computing infrastructure for open scientific research. • IP-grid employs I-Light resources, Cisco 15454s, and a Juniper T320, connecting to the ETF T640 at 710 Lake Shore in Chicago. • 10 Gbps from IUB to IUPUI, 10 Gbps from Purdue to IUPUI, 20 Gbps from IUPUI to 111 N. Canal in Chicago, and 20 Gbps to 710 Lake Shore and the ETF T640.

  19. Internet2 Abilene is a high-performance backbone network supporting advanced networking and applications for research and education. • 230 university and corporate research participants connected via 44 connectors, 114 sponsored participants (organizations not eligible for Internet2 membership), and 37 SEGPs (primarily state educational networks) • OC192c 10-Gigabit-per-second backbone • 11 core nodes • Connectors via OC3, OC12, OC48, GigE, and 10GigE • 7 connected Internet Exchange Points • 29 peer networks

  20. TransPAC was one component of the HPIIS program. It provided high-performance international Internet connectivity between the Asia Pacific Advanced Network (APAN) and US and global advanced networks for the purpose of international collaboration in research and education. • As of January 1, 2005, TransPAC has been superseded by TransPAC2, a part of the NSF-funded International Research Network Connections program.

  21. Important aspects and goals • Support for production science • Measurement • Security • Authentication and authorization infrastructure for dynamic light path provisioning, traveling scientists, and other AuthNZ linkages • Research and migration to hybrid optical/packet network infrastructure

  22. StarLight is a 1GigE and 10GigE switch/router facility for high-performance access to participating networks, and a true optical switching facility for wavelengths. Since summer 2001, StarLight management and engineering has been working with the international academic and commercial communities to create a proving ground in support of grid-intensive e-Science applications, network performance measurement and analysis, and computing and networking technology evaluations.

  23. MAN LAN • MAN LAN is a high-performance exchange point in New York City that facilitates peering among U.S. and international research and education networks. The exchange point, built within the fiber meet-me room in the carrier hotel at 32 Avenue of the Americas, is a collaborative effort of Internet2, NYSERNet, and Indiana University. • Connected networks include: Abilene, NYSERnet, CAnet, SINET, GEANT, SURFNET, HEAnet, and Qatar

  24. MAN LAN

  25. AmericasPATH (AMPATH) provides interconnection of the research and education networks in South and Central America, and the Caribbean to U.S. and global research and education networks. AMPATH currently connects REACCIUN (Venezuela) and Retina (Argentina). • AMPATH is a collaborative effort of Florida International University and Global Crossing, with support from the National Science Foundation through the Strategic Technologies for the Internet Program.

  26. NLR is a consortium of US research universities and private-sector technology companies dedicated to building a national-scale infrastructure for research and experimentation in networking technologies and applications. NLR is the largest higher-education owned and managed optical networking and research facility in the world. • NLR service offerings include: • L1 lambdas • L2 GigE (in development) • L3 (in development)

  27. Initiatives: iGOC • Mission • The International Virtual Data Grid Laboratory (iVDGL) Grid Operations Center (iGOC) is an NSF-funded project to deploy, maintain, and operate the iVDGL Grid3 in the way a NOC manages a network: providing a single point of operations for configuration support, monitoring of status and usage (current and historical), problem management, support for users, developers, and systems administrators, provision of grid services, security incident response, and maintenance of the Grid3 information repository. • Staffing • 2 FTE at Indiana University, plus effort from the University of Chicago (monitoring development) and the University of Florida at Gainesville (Grid3 catalog, web site, site verify script, etc.), and the leveraged resources of the 24x7 Global NOC.

  28. Initiatives: iGOC

  29. Supported by Indiana University and through its relationship with EDUCAUSE and Internet2, the REN-ISAC: • is an integral part of higher education’s strategy to improve network security through information collection, analysis, dissemination, early warning, and response; it is specifically designed to support the unique environment and needs of organizations connected to served higher education and research networks, and • supports efforts to protect the national cyber infrastructure by participating in the formal U.S. ISAC structure. • 24 x 7 Watch Desk operated at the Global NOC • ren-isac@iu.edu, +1 (317) 278-6630 • http://www.ren-isac.net

  30. REN-ISAC has core complementary relationships with: • EDUCAUSE • Internet2 • EDUCAUSE and Internet2 Security Task Force • IU Global NOC and Abilene network engineering • IU Advanced Network Management Lab • IU Information Technology Security Office • US Department of Homeland Security & US-CERT • IT-ISAC • ISAC Council • SALSA • Internet2 / CANARIE / GEANT2

  31. Tools and information sources: • Network instrumentation • Abilene NetFlow data • Abilene router ACL counters • Darknet • Global NOC operational monitoring systems • Daily security status calls with ISACs and US-CERT • Vetted/closed network security collaborations • Backbone and member security and network engineers • Vendors, e.g. monthly ISAC calls with vendors • Security mailing lists, e.g. EDUCAUSE, FIRST, etc. • Members – related to incidents on local networks

  32. Global NOC Tools • Nagios • Alertmon • Global NOC Database • Footprints • Tickmon • Animated Traffic/Usage Maps • MRTG, RRDTool, Cricket • BSCW • Router Proxy • Gridcat • KnowledgeBase • Arbor Networks Peakflow DOS

  33. Tools: Nagios • Nagios is used for network and system state monitoring. • The Global NOC has developed numerous Nagios plug-ins for monitoring its managed networks, including a TL1 plug-in to monitor NLR 15808 and 15454 equipment. Plug-ins include: check_cpu (SNMP check of CPU load on a Juniper or Cisco device), check_bgp (SNMP check of BGP session status), check_isis (SNMP check of ISIS adjacency), check_ospf (SNMP check of OSPF status from the OSPF neighbor table), check_pim (SNMP check of the status of a PIM neighbor), check_msdp (SNMP check of the status of an MSDP peering session), check_intf (SNMP check of the status of an interface, indexed by IP), and check_intf_by_ifname (SNMP check of the status of an interface, indexed by ifName).
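
As an illustration of how such a plug-in is put together, below is a minimal sketch of an SNMP-based check in Python. The Nagios plug-in contract (print one status line and exit 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN) is standard; the device name, community string, OID, and thresholds are illustrative assumptions, not the Global NOC's actual check_cpu.

#!/usr/bin/env python
# Minimal sketch of an SNMP-based Nagios plug-in (illustrative only).
# Nagios interprets the exit code: 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN.
import subprocess
import sys

HOST = "router1.example.net"                  # hypothetical device name
COMMUNITY = "public"                          # SNMP read community (assumption)
CPU_OID = "1.3.6.1.4.1.9.9.109.1.1.1.1.7.1"   # placeholder OID; use the device's actual CPU MIB object
WARN, CRIT = 70, 90                           # percent thresholds (illustrative)

def snmp_get(host, community, oid):
    # Fetch a single value using the net-snmp 'snmpget' command-line tool.
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", community, "-Oqv", host, oid], text=True)
    return int(out.strip())

def main():
    try:
        cpu = snmp_get(HOST, COMMUNITY, CPU_OID)
    except Exception as exc:
        print("CPU UNKNOWN - SNMP query failed: %s" % exc)
        sys.exit(3)
    if cpu >= CRIT:
        print("CPU CRITICAL - %d%% utilization" % cpu)
        sys.exit(2)
    if cpu >= WARN:
        print("CPU WARNING - %d%% utilization" % cpu)
        sys.exit(1)
    print("CPU OK - %d%% utilization" % cpu)
    sys.exit(0)

if __name__ == "__main__":
    main()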

  34. Tools: Nagios

  35. Tools: AlertMon • AlertMon is a front-end that consolidates alerts from the network monitoring systems, provides an alert management interface, and links to ticketing and documentation for alerted elements. AlertMon receives input from Nagios and is being extended with interfaces to other monitoring systems.
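
The consolidation idea can be sketched roughly as follows, assuming alerts arrive as (source, host, service, severity) records from several monitors; the field names and ordering rules are hypothetical, not AlertMon's actual interface.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Alert:
    # One alert as it might arrive from a monitoring system (fields are illustrative).
    source: str        # e.g. "nagios"
    host: str
    service: str
    severity: str      # "warning" or "critical"
    received: datetime = field(default_factory=datetime.utcnow)

class AlertConsole:
    # Consolidate alerts from multiple monitors into one view, keyed by
    # (host, service), so repeat reports of the same problem collapse into one entry.
    def __init__(self):
        self.active = {}

    def ingest(self, alert):
        self.active[(alert.host, alert.service)] = alert   # latest report wins

    def clear(self, host, service):
        self.active.pop((host, service), None)             # operator resolves the alert

    def view(self):
        # Critical before warning; oldest first within each severity.
        order = {"critical": 0, "warning": 1}
        return sorted(self.active.values(),
                      key=lambda a: (order.get(a.severity, 2), a.received))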

  36. Tools: Global NOC Database • The Global NOC Database contains comprehensive information regarding network elements, operations and problem management information for elements, network topology, and contacts. • Drill-down linkages to the Database information are being developed for front-end tools such as AlertMon. • The Database is currently implemented for the NLR network, will be extended to all Global NOC-managed networks, and will become the primary information tool for the NOC.
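
A rough sketch of how element, topology, and contact data might relate, and how a front-end could drill down from an alerted device to its contacts; the table and column names below are hypothetical, not the Global NOC Database schema.

import sqlite3

conn = sqlite3.connect("noc_db_sketch.sqlite")
conn.executescript("""
CREATE TABLE IF NOT EXISTS element (
    element_id   INTEGER PRIMARY KEY,
    network      TEXT,     -- e.g. 'NLR'
    name         TEXT,     -- device name an alert would reference
    role         TEXT      -- router, switch, optical node, ...
);
CREATE TABLE IF NOT EXISTS link (
    link_id      INTEGER PRIMARY KEY,
    a_element    INTEGER REFERENCES element(element_id),
    z_element    INTEGER REFERENCES element(element_id),
    capacity     TEXT      -- e.g. '10GigE'
);
CREATE TABLE IF NOT EXISTS contact (
    contact_id   INTEGER PRIMARY KEY,
    element_id   INTEGER REFERENCES element(element_id),
    organization TEXT,
    phone        TEXT,
    email        TEXT
);
""")

def contacts_for(device_name):
    # A front-end such as AlertMon could drill down from an alerted device
    # name to its contacts with a simple join.
    return conn.execute(
        "SELECT c.organization, c.phone, c.email "
        "FROM contact c JOIN element e ON c.element_id = e.element_id "
        "WHERE e.name = ?", (device_name,)).fetchall()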

  37. Tools: Global NOC Database

  38. Tools: Footprints • Footprints is the ticketing system, used for problem, change, installation, maintenance, and other tracking.

  39. Tools: TickMon • TickMon is a front-end to the ticketing system that provides a consolidated view of open and active tickets.

  40. Tools: Animated Traffic/Usage Maps • Present a geographic and/or topology-based view of network utilization and error data. Data is collected via the SNAPP SNMP collector.
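
To illustrate the kind of data feeding the maps, here is a minimal sketch of an SNMP poller that samples an interface octet counter twice and converts the difference to bits per second; the host, community string, and polling interval are assumptions, and this sketches the general technique rather than the SNAPP collector itself.

import subprocess
import time

IF_IN_OCTETS = "1.3.6.1.2.1.2.2.1.10"   # IF-MIB ifInOctets (32-bit counter)

def get_counter(host, community, oid, ifindex):
    # Read one interface counter via the net-snmp 'snmpget' command-line tool.
    out = subprocess.check_output(
        ["snmpget", "-v2c", "-c", community, "-Oqv", host, "%s.%d" % (oid, ifindex)],
        text=True)
    return int(out.strip())

def poll_rate(host, community, ifindex, interval=60):
    # Sample twice and return inbound utilization in bits per second.
    c1 = get_counter(host, community, IF_IN_OCTETS, ifindex)
    t1 = time.time()
    time.sleep(interval)
    c2 = get_counter(host, community, IF_IN_OCTETS, ifindex)
    t2 = time.time()
    delta = c2 - c1
    if delta < 0:                         # 32-bit counter wrapped during the interval
        delta += 2 ** 32
    return delta * 8 / (t2 - t1)

# Hypothetical usage: feed the rate into the map's data store.
# bps = poll_rate("core-router.example.net", "public", ifindex=3)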

  41. Tools: MRTG, RRDTool, Cricket • Used to gather and graph time series data, e.g. link utilization and errors.
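
A minimal sketch of the round-robin database workflow these tools rely on, using the standard rrdtool command-line interface from Python; the file name, data-source name, and retention are illustrative choices, not the Global NOC's actual configuration.

import subprocess

RRD = "link_util.rrd"   # illustrative file name

# Create a round-robin database: one counter data source polled every 300 s,
# averaged into 5-minute samples kept for roughly a month (8928 rows).
subprocess.run([
    "rrdtool", "create", RRD, "--step", "300",
    "DS:inoctets:COUNTER:600:0:U",
    "RRA:AVERAGE:0.5:1:8928",
], check=True)

def record(octets):
    # Store one sample; rrdtool converts counter deltas into per-second rates.
    subprocess.run(["rrdtool", "update", RRD, "N:%d" % octets], check=True)

def graph(png="link_util.png"):
    # Render a one-day utilization graph in bits per second.
    subprocess.run([
        "rrdtool", "graph", png, "--start", "-1d",
        "DEF:in=%s:inoctets:AVERAGE" % RRD,
        "CDEF:inbits=in,8,*",
        "LINE1:inbits#0000FF:inbound bps",
    ], check=True)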

  42. Tools: BSCW • Basic Support for Cooperative Work (BSCW) is used as the documentation repository.

  43. Tools: Router Proxy • The Router Proxy acts as an interface by which a user can get information from 'show' commands on a router using a web interface. The user enters a query to the router through an HTML form. The query is sent to the router, and the results are presented in the user's web browser.
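
A minimal sketch of the idea behind such a proxy, using Flask and an SSH subprocess; the command whitelist, router name, and login method are assumptions, and a real deployment would need proper authentication, authorization, and rate limiting.

from flask import Flask, request
import subprocess

app = Flask(__name__)

# Only read-only 'show' commands are allowed through the proxy (illustrative whitelist).
ALLOWED = {
    "show ip bgp summary",
    "show interfaces terse",
    "show route summary",
}
ROUTER = "core-router.example.net"   # hypothetical device

@app.route("/query", methods=["POST"])
def query():
    command = request.form.get("command", "")
    if command not in ALLOWED:
        return "Command not permitted", 403
    # Run the command on the router over SSH and return the raw output.
    result = subprocess.run(
        ["ssh", "-o", "BatchMode=yes", ROUTER, command],
        capture_output=True, text=True, timeout=30)
    return "<pre>%s</pre>" % (result.stdout or result.stderr)

if __name__ == "__main__":
    app.run(port=8080)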
