Strategies for managing traffic, prioritizing queues, and delivering quality of service across California's K-20 education network, highlighting QoS technologies, traffic-management tools, and the importance of network infrastructure.
Quality of Service in California K-20 Networking • Dave Reese • A Gathering of State Networks • April 30, 2001
Quietly on the Sidelines • What traffic is most important? • Video (of course) • Voice (is this really coming?) • Research, Business, Admissions transactions? (depends on who decides) • Can't create just one priority queue; everyone will demand special treatment • How many queues are practical? • How should multiple queues be prioritized? • Will there really be a national QoS (and what will it cost)?
What are we waiting for? • Bandwidth guarantees, like ATM CBR • Stable router software (does this exist?) • Reservations and limits/controls on usage • A method to decide who gets to use them • Who enforces/patrols usage? • New planning/forecasting tools for network design
What California is doing now • Building shared Statewide “Intranet” to serve research, education, and business applications for K-20 • Keeping intra-state bandwidth ahead of demand • Using ATM to guarantee quality for video conferences/distance education • Bringing critical applications to the Intranet and off of the Internet
How is this working? • Only buying time - we want to move from ATM to IP • Bottlenecks are between campus and backbone, and between backbone and Internet • Pilot project for “eContent” management to push multimedia servers closer to the user
OneNet Member Utilization (as of October 2000) • Over 1,600 connections • 100% of Colleges, Universities and Career Technology Centers • 100% of Court Systems • 80% of Public Schools (K-12) • 1,000+ additional sites
Member Circuits (November 2000) • Higher Education: 90 • K-12: 489 • Career Technology Centers: 65 • Army National Guard: 52 • Courts: 47 • Hospitals (Gov't/Private): 43 • Law Enforcement: 18 • Libraries: 107 • Municipalities: 28 • Non-Profits: 28 • State Agencies: 505
Some Services DEMANDING We Address QoS • Video Conferencing: H.323, MPEG • Video Streaming • P2P: Napster, Gnutella, and all the rest… • FTP
It All Adds Up Quickly • Examples • We now have over 800 H.323 endpoints registered as distance-learning classrooms • Every higher education institution is wiring its dorms or building new dorms to be wired • Traffic-management expertise is limited in many of our members' networks, so new, popular applications can quickly congest links
Identifying The Causes • SNMP: falls short in traffic classification • Sniffers: costly and difficult to deploy across the wide area • NetFlow: usable anywhere you can export flow information and have the time to wait for results
FlowScan • Identify applications • Identify networks • Identify protocols • http://net.doit.wisc.edu/~plonka/FlowScan/
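As a rough illustration of the NetFlow approach above, the sketch below enables flow collection on a WAN interface and exports version 5 records to a host where a cflowd/FlowScan collector can summarize them. The interface name, collector address 10.1.1.50, and UDP port 2055 are placeholders, not actual network values.

  ! Hypothetical Cisco IOS sketch; interface, collector address, and port are assumptions
  interface Serial0/0
   ! gather flow statistics for traffic crossing this link
   ip route-cache flow
  !
  ! export version 5 records (they carry prefixes, ports, and AS numbers)
  ip flow-export version 5
  ! send the records to the host where cflowd/FlowScan is listening
  ip flow-export destination 10.1.1.50 2055

FlowScan then reads the collected flows and graphs traffic by application, network, and protocol, which is where the per-application breakdowns come from.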
Recent Specific Issue • Until recently, congestion at the T1 level was handled very well with WFQ alone • Per-packet load-balanced T1s at some hubsites are becoming congested • Distance learning is our primary concern at these locations
Current Solution • Congested T1s moved to a PQ-WFQ scenario via 'ip rtp priority' • Not ideal: RTP traffic of any sort can starve other traffic, though fortunately that is not an issue at the troubled locations • Load-balanced T1s moved to a per-destination PQ-WFQ scenario • Adding queuing on top of per-packet balancing introduced more out-of-sequence packets than many endpoints could handle • Maximum bandwidth available to a single flow is now constrained to one T1 • MOVE to greater bandwidth! • WRED used on DS3s and greater
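A minimal sketch of the PQ-WFQ and WRED arrangement described above, assuming a Cisco IOS router; the interface names, the 384 kbps cap, and the standard RTP port range are illustrative, not the actual hub-site values.

  ! Hypothetical sketch, not the production configuration
  interface Serial1/0
   description Congested T1 toward a hub site
   ! WFQ for everything that is not strict-priority
   fair-queue
   ! strict priority for RTP ports 16384-32767, capped at 384 kbps
   ip rtp priority 16384 16383 384
  !
  interface Hssi2/0
   description DS3 and larger link
   ! WRED: begin dropping selectively as the output queue builds
   random-detect

The bandwidth argument on ip rtp priority bounds how much traffic can claim strict priority during congestion; anything matching the RTP port range still jumps ahead of other traffic up to that cap, which is the starvation concern noted above.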
Current Work • In the lab… • CAR: start policing some applications to provide more assurance • NBAR: anything that helps automate identifying what is on the wire, to make classification simpler • DiffServ, RSVP: watching the QBone and other Internet2 initiatives • MPLS: traffic engineering rather than QoS, but integral to many of the decisions we have to make
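A lab-style sketch of the CAR and NBAR items above; the ACL number, rate and burst figures, and interface are made-up examples, and NBAR protocol names vary with IOS/PDLM versions.

  ! Hypothetical lab sketch; ACL, rate, and interface are assumptions
  ! classify bulk FTP data transfers in either direction
  access-list 120 permit tcp any any eq ftp-data
  access-list 120 permit tcp any eq ftp-data any
  !
  interface Serial1/0
   ! CAR: police matching traffic to roughly 256 kbps outbound
   rate-limit output access-group 120 256000 8000 16000 conform-action transmit exceed-action drop
   ! NBAR: count traffic per recognized protocol to simplify classification
   ip nbar protocol-discovery
  !
  ! then inspect what NBAR has identified:
  ! show ip nbar protocol-discovery

CAR covers the policing experiment; NBAR protocol discovery addresses the goal of automating identification of what is actually on the wire.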
Issues • Quality of Service is Managed Unfairness • Many decisions to be made about what is rate-limited, what is dropped, and what gets prioritized • How do we verify our trust in pre-marked traffic?
QUALITY OF SERVICE & TRAFFIC MANAGEMENT Ben Colley http://www.more.net ben@more.net (800) 509-6673
The Problem • Like elsewhere, Napster started it all. • Expanded from a traffic limiting need to a traffic prioritization goal. • Excess recreational traffic was impacting production services • Growth in bandwidth requirements still exceeds available funding
The Project • What solutions exist at the backbone level? At the customer edge, via the router or via other devices? • Goals • QoS: ensure delivery of mission-critical traffic • TM: provide tools enabling local traffic-management policies
QoS Direction • Implement “Differentiated Services” in core and edge routers • Mark “state level” applications as top priority in the edge router • H.323 traffic to/from the MOREnet MCU farm • Library automation traffic to/from the server farm • Other future applications, e.g., VoIP • MOREnet will not mark or remark any other traffic • Campuses can mark other traffic as desired at the source device, or elsewhere in their network • Implement for all H.323 sites this summer
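A sketch of what the edge-router marking could look like under the policy above, assuming an IOS release with MQC support; the MCU-farm subnet 10.20.30.0/24, the DSCP value 34 (AF41), and the interface are placeholders, not MOREnet's actual choices.

  ! Hypothetical sketch; subnet, DSCP value, and interface are assumptions
  ! traffic to or from the MCU farm
  access-list 130 permit ip any 10.20.30.0 0.0.0.255
  access-list 130 permit ip 10.20.30.0 0.0.0.255 any
  !
  class-map match-all STATE-LEVEL-H323
   match access-group 130
  !
  policy-map MARK-STATE-APPS
   class STATE-LEVEL-H323
    ! mark as DSCP 34 (AF41) so downstream routers can forward it preferentially
    set ip dscp 34
  !
  interface Serial0/0
   service-policy output MARK-STATE-APPS

Because only “state level” traffic is touched here, campuses remain free to mark anything else at the source or elsewhere in their networks, as noted above.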
QoS Alphabet Soup • At the network core (current best thinking): Modified Deficit Round Robin (MDRR), but queue mapping and forwarding strategy still to be determined! • At the customer premise (current best thinking): CAR and WFQ • CAR to ensure marking of “state level” application traffic • WFQ to forward appropriately • Technical meeting in May • Establish a common DiffServ Code Point (DSCP) strategy and a queue mapping and forwarding plan
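The customer-premise piece of the plan above might look roughly like the sketch below, where CAR is used purely as a marker and WFQ then weights flows by the resulting precedence; ACL 140, the precedence value 5, and the rate/burst figures are illustrative only, and the platform-specific core MDRR side is omitted.

  ! Hypothetical edge sketch; ACL, precedence value, and rates are assumptions
  ! H.323 call-signaling port as a stand-in classifier; a real policy would
  ! more likely match the MCU or server-farm addresses
  access-list 140 permit tcp any any eq 1720
  !
  interface Serial0/0
   ! CAR as a marker: conform and exceed actions both set precedence 5 and transmit
   rate-limit output access-group 140 1536000 8000 16000 conform-action set-prec-transmit 5 exceed-action set-prec-transmit 5
   ! WFQ then gives higher-precedence flows a larger share of the link
   fair-queue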
Traffic Management • Mark packets for QoS (and unmark!) • Policy administration by: • physical network interface • server or workstation network address • application signature • Multiple network interfaces make it possible to: • isolate critical servers; load-balance servers, caches and/or intrusion detection devices • aggregate like kinds of traffic • (Future) API available for time-of-day policies
Traffic Management Research • Several products reviewed • Many good, focused products available • Recommendation for the TopLayer AppSwitch • Multiple interfaces support a broader range of network design and architecture opportunities • Excellent H.323 “flow” management • Commitment to enhancing application recognition • Commitment to expanding usability
TM Implementation Strategy • Focus on sites that will experience congestion soon • Acquire & install in 1-2 lead sites and learn • Deploy to remaining sites throughout the year • Vendor training and support • MOREnet-supported product • Campus determines local policy and manages the platform • MOREnet is only interested in “state level” services
Deployment Plan • Implement QoS prior to beginning of summer school for lead sites. • Test through summer to be ready for fall. • Implement 2nd round of QoS in August prior to fall semester. • Traffic Management deployment will move as needed on customer-by-customer basis starting this summer.
Lessons Learned • Still an emerging technology: it's not cookie-cutter yet • And here we go with a state-wide deployment (again) • There will be bumps along the way, like: • Who gets to decide whose packets are important? • Build a “Community of Interest” • How one organization prioritizes traffic can have an impact on another
Lessons Learned (continued) • We believe future funding increases will be linked to ‘good stewardship’ of current funding • Ask us in six months what the real lessons were!