F5 Traffic Optimization
Presentation Transcript

  1. F5 Traffic Optimization. Radovan Gibala, Field Systems Engineer. r.gibala@f5.com, +420 731 137 223. 2007

  2. Evolution of the Data Center

  3. Datacenter Without F5 & ADN. Diagram: clients (cell phone, home PC, laptop in a coffee shop, LAN PC, WAN PC) connect directly to web servers, application servers, databases (MS SQL Server, Oracle, mySQL Server), and file storage (Windows file storage, EMC, NetApp); the X marks indicate points of failure or blocked access.

  4. Datacenter With F5’s ADN. Diagram: the same clients (cell phone, home PC, WLAN, LAN PC, remote WAN) now reach web servers through server virtualization (LTM), application servers through application server virtualization (LTM), and file storage (Windows file storage, EMC, NetApp) through file storage virtualization (ARX).

  5. Globalization: Success = Collaboration. “Over the next 15 years markets will become even more global, functions within their organizations will atomize across geographies and partners, and competition will intensify from new corners of the world.” - Economist Intelligence Unit, Foresight 2020 Study

  6. Business Continuity, HA, Disaster Recovery. User Experience & App Performance. App Security & Data Integrity. Managing Scale & Consolidation. Unified Security Enforcement & Access Control

  7. Business Continuity HA Disaster Recovery User Experience & App Performance App Security & Data Integrity People Apps Data Managing Scale & Consolidation Storage Growth Unified Security Enforcement & Access Control

  8. Business Continuity HA Disaster Recovery User Experience & App Performance App Security & Data Integrity People Apps Data Managing Scale & Consolidation Storage Growth Unified Security Enforcement & Access Control

  9. Business Continuity, HA, Disaster Recovery: WAN Virtualization; File Virtualization; DC to DC Acceleration; Virtualized VPN Access. User Experience & App Performance: Asymmetric & Symmetric Acceleration; Server Offload; Load Balancing. App Security & Data Integrity: AAA; Data Protection; Transaction Validation. Managing Scale & Consolidation, Storage Growth (Data): Virtualization; Migration; Tiering; Load Balancing. (Apps): Virtualized App & Infrastructure; Server & App Offload; Load Balancing. Unified Security Enforcement & Access Control (People): Remote, WLAN & LAN Central Policy Enforcement; End-Point Security; Encryption; AAA

  10. Application Delivery Network product mapping. Business Continuity, HA, Disaster Recovery: BIG-IP LTM, GTM, LC, WA; FirePass; ARX; WJ (WAN Virtualization; File Virtualization; DC to DC Acceleration; Virtualized VPN Access). User Experience & App Performance: BIG-IP LTM, GTM, WA; ARX; WJ (Asymmetric & Symmetric Acceleration; Server Offload; Load Balancing). App Security & Data Integrity: BIG-IP LTM, ASM; FirePass (AAA; Data Protection; Transaction Validation). Managing Scale & Consolidation, Storage Growth: ARX; BIG-IP GTM; BIG-IP LTM, GTM, LC, WA; FirePass; WJ (Virtualization; Migration; Tiering; Load Balancing; Virtualized App & Infrastructure; Server & App Offload). Unified Security Enforcement & Access Control: FirePass; BIG-IP LTM, GTM (Remote, WLAN & LAN Central Policy Enforcement; End-Point Security; Encryption; AAA)

  11. Acceleration Functional Groups • Tier 1 Acceleration – Network Offload 200% – 300% performance improvement • Tier 2 Acceleration – Server Offload 200% – 500% performance improvement • Tier 3 Acceleration – Application Offload 200% – 1000% performance improvement

  12. Acceleration Functional Areas and the Effect on Infrastructure. Diagram: Server Offload (Compression, Dynamic Caching, Content Spooling, OneConnect, Rate Shaping, Connection Limit) sits between the MyCSP server infrastructure and the Internet or WAN; page generation and delivery times at the client browser drop to roughly 75% and 60% of baseline.

  13. Acceleration Functional Areas and the Effect on Infrastructure. Diagram: adding Network Acceleration (Compression, Dynamic Caching, TCP Express) to the Server Offload features (Compression, Dynamic Caching, Content Spooling, OneConnect, Rate Shaping, Connection Limit) brings page generation and delivery times down to roughly 60% and 40% of baseline.

  14. Acceleration Functional Areas and the Effect on Infrastructure. Diagram: extending Network Acceleration with Differential Compression, QoS, and security/authentication, on top of the Server Offload features (Compression, Dynamic Caching, Content Spooling, OneConnect, Rate Shaping, Connection Limit), brings page generation and delivery times down to roughly 35% and 25% of baseline.

  15. Acceleration Functional Areas and the Effect on Infrastructure. Diagram: adding Application Acceleration (IBR (Dynamic Content Control), Multi-Connect, Dynamic Linearization, Dynamic Caching, Dynamic Compression, SSL Acceleration) alongside Network Acceleration (Compression, Dynamic Caching, TCP Express, Differential Compression, QoS, security/authentication) and Server Offload (Compression, Dynamic Caching, Content Spooling, OneConnect, Rate Shaping, Connection Limit) brings page generation and delivery times down to roughly 10% of baseline.

  16. How To Achieve the Requirements? Multiple point solutions? More bandwidth? Does the network administrator add more infrastructure? Does the application developer hire an army of developers?

  17. The Result: A Growing Network Problem. Users (mobile phone, PDA, laptop, desktop) reach applications (SFA, CRM, ERP, customised application, co-location) through an accumulation of network point solutions: DoS protection, rate shaping, SSL acceleration, server load balancer, content acceleration, application firewall, connection optimisation, traffic compression.

  18. F5’s Integrated Solution. Users (mobile phone, PDA, laptop, desktop) connect through a single Application Delivery Network built on TMOS to the applications: CRM, Database, Siebel, BEA, Legacy, .NET, SAP, PeopleSoft, IBM, ERP, SFA, Custom (co-location).

  19. The Most Intelligent and Adaptable Solution. GUI-based application profiles: repeatable policies. iRules: a programmable network language for security, optimisation, and delivery of new services (e.g. a news website). A programmable application network with unified application infrastructure services, targeted and adaptable functions, and complete visibility and control of application flows. The Universal Inspection Engine (UIE) runs on the TM/OS fast application proxy, with client-side and server-side processing: compression, TCP offloading, load balancing.

  20. Architect for Virtualized Applications and Resources; Leverage Network Services. International data center: policy-based, centralized management; application and server virtualization, SOA component support, application load balancing, switching, filtering. Between users and applications: intelligent, policy-based DNS supporting virtualization and SOA components; symmetric WAN optimization and application acceleration services; universal client and system application and network VPN services; bi-directional application-aware multi-homing and QoS services; bi-directional application firewall services; asymmetric application acceleration; open SOAP/XML API and SDK; IP proxy O/S.

  21. A Better Alternative: Virtualize and Unify Network Services and Offload the Application (BIG-IP). AVAILABLE: Comprehensive Load Balancing; Advanced Application Switching; Customized Health Monitoring; Intelligent Network Address Translation; Intelligent Port Mirroring; Universal Persistence; Response Error Handling; Session / Flow Switching; IPv6 Gateway; Advanced Routing. SECURE: DoS and SYN Flood Protection; Network Address/Port Translation; Application Attack Filtering; Certificate Management; Resource Cloaking; Advanced Client Authentication; Firewall - Packet Filtering; Selective Content Encryption; Cookie Encryption; Content Protection; Protocol Sanitization; Application Security Module. FAST: SSL Acceleration; Quality of Service; Connection Pooling; Intelligent Compression; L7 Rate Shaping; Content Spooling/Buffering; TCP Optimization; Content Transformation; Caching.

  22. TCP Optimization

  23. TCP Express • Behaviors of a good TCP/IP implementation: • Proper congestion detection. • Good congestion recovery. • High bandwidth utilization. • Being too aggressive can cause individual connections to consume all of the network. • Not being aggressive enough leaves bandwidth unused, especially when there are few connections. • Always needs to adapt to changing congestion. • Increased windowing and buffering will often help compensate for latency and can also offload the application equipment more quickly. • The most important TCP tuning typically involves window sizes and retransmission logic (i.e., congestion control behavior). • On today’s networks, loss is almost always caused by congestion. • Most TCP stacks are not aggressive enough.

  24. F5’s TCP Congestion Control Algorithms • Reno Congestion Control • Original TCP fast recovery algorithm based on BSD Reno. • Initially grows the congestion window exponentially during the slow-start period. • After slow-start, increases CWND by 1 MSS for each CWND ACK’d (linear growth). • When loss or a recovery episode is detected, the CWND is cut in half. • New Reno modifications (currently the default mode) • Improves on the Reno behaviour. • When entering a recovery episode, implements a fast retransmit: • Each ACK below the recovery threshold triggers a one-time resend of the data starting at the ACK. • Results in more aggressively resending the missing data and exiting the recovery period. • Scalable TCP (added in 9.4) • Improves on the NewReno behaviour. • Upon loss, the CWND is reduced by only 1/8. • Once out of slow start, CWND increases by 1% of an MSS for each CWND ACK’d. • HighSpeed (F5’s proprietary congestion control, added in 9.4) • Similarly improves on the NewReno behaviour in combination with Scalable TCP. • Progressively switches from NewReno to Scalable TCP based on the size of the CWND. • Upon loss, the CWND is reduced by between 1/2 and 1/8. • CWND grows by between 1% and 100% of an MSS for each CWND ACK’d.
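The window-update rules above can be compared with a toy calculation. This is an illustrative sketch of the Reno vs. Scalable TCP arithmetic described on the slide, not F5's code; the MSS value and function names are assumptions.

```python
# Toy comparison of the congestion-window (CWND) update rules described
# above: Reno halves the window on loss, Scalable TCP trims only 1/8.

MSS = 1460  # maximum segment size in bytes (assumed value)

def reno_on_ack(cwnd):
    """Congestion avoidance: grow CWND by one MSS per window ACK'd."""
    return cwnd + MSS

def reno_on_loss(cwnd):
    """Reno/NewReno: cut CWND in half on loss."""
    return cwnd // 2

def scalable_on_ack(cwnd):
    """Scalable TCP: grow CWND by 1% of an MSS per window ACK'd."""
    return cwnd + MSS // 100

def scalable_on_loss(cwnd):
    """Scalable TCP: reduce CWND by only 1/8 on loss."""
    return cwnd - cwnd // 8

cwnd = 100 * MSS  # a large window on a fat pipe
print("after loss, Reno:    ", reno_on_loss(cwnd))      # half the window
print("after loss, Scalable:", scalable_on_loss(cwnd))  # only 12.5% smaller
```

On a high bandwidth-delay-product path this difference matters: after a single loss, Reno needs many round trips of 1-MSS growth to refill the pipe, while Scalable TCP's smaller backoff keeps the window close to capacity.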

  25. New Reno Scalable HighSpeed

  26. HTTP Optimization

  27. OneConnect ™ – Connection Pooling • Increases server capacity by 30% • Aggregates a massive number of client requests into fewer server-side connections • Transforms HTTP 1.0 to 1.1 for server connection consolidation • Maintains intelligent load balancing to dedicated content servers Good Sources: http://tech.f5.com/home/bigip/solutions/traffic/sol1548.html http://www.f5.com/solutions/archives/whitepapers/httpbigip.html

  28. OneConnect™ Review • OneConnect™ causes each request to be individually load-balanced among members of the same pool and potentially uses pre-established connections from a server connection pool. • The client connection is detached after each server response has been received and the server-side connection is optionally saved for reuse in a connection pool. • The OneConnect™ source mask profile settings control the behavior of the server connection pool. • iRule commands also can control OneConnect™ behavior: • “ONECONNECT::detach disable” will cause the client and server to stay connected (as if OneConnect™ was not enabled). • “ONECONNECT::reuse disable” will cause the recently used server-side connection to be discarded after use.
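The source-mask behavior described above can be sketched as a small connection pool: client addresses are masked, and an idle server-side connection is reused only by clients that fall into the same masked group. This is an illustrative model under assumed names (`ConnectionPool`, `acquire`, `release`), not F5's implementation.

```python
import ipaddress

# OneConnect-style server-side connection reuse, keyed by a source mask.
# A mask of 0.0.0.0 lets any client share pooled connections; a /32 mask
# restricts reuse to the exact same client IP.

class ConnectionPool:
    def __init__(self, source_mask):
        self.mask = source_mask          # e.g. "255.255.255.0" or "0.0.0.0"
        self.idle = {}                   # masked client IP -> idle conns

    def _key(self, client_ip):
        ip = int(ipaddress.IPv4Address(client_ip))
        mask = int(ipaddress.IPv4Address(self.mask))
        return ip & mask

    def acquire(self, client_ip):
        """Reuse an idle server connection for this client group, if any."""
        conns = self.idle.get(self._key(client_ip), [])
        return conns.pop() if conns else None  # None -> open a new one

    def release(self, client_ip, conn):
        """Server response received: park the connection for reuse."""
        self.idle.setdefault(self._key(client_ip), []).append(conn)

pool = ConnectionPool("0.0.0.0")   # mask of 0: any client may share
pool.release("10.0.0.1", "conn-A")
print(pool.acquire("192.168.1.9")) # conn-A is reused across clients
```

This also illustrates the SNAT caveat from the facts slide: once requests share a translated source address, they fall into the same reuse group, so two different clients can end up on one server connection unless the mask (or `ONECONNECT::reuse disable`) prevents it.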

  29. OneConnect ™ New and Improved HTTP Request Pooling • Streamlines single client requests to BIG-IP • Enabled by HTTP 1.1 • Average reduction is 20-to-1 per web page (e.g. 20 client requests for index.htm, a.gif, b.gif, c.asp collapse onto 1 server-side connection). 1) OneConnect ™ Content Switching • Intelligent load balancing to dedicated content server pools (HTML server pool: index.htm, sales.htm; GIF server pool: a.gif, b.gif, d.gif, e.gif; ASP server pool: c.asp, f.asp) • Maintains server logging. 2) OneConnect ™ HTTP Transformations (New) • Transforms HTTP 1.0 to 1.1 for server connection consolidation (many client connections become one server connection). 3) OneConnect ™ Connection Pooling • Aggregates a massive number of client requests into fewer server-side connections.

  30. OneConnect™ Facts • OneConnect™ does not affect the parsing of HTTP nor the execution of iRule events like HTTP_REQUEST or HTTP_RESPONSE. • iRule events are triggered for every request regardless of whether the OneConnect™ profile is being used or not. • Without OneConnect™, the first request will be load-balanced to a member within the selected pool. Subsequent requests will NOT be load-balanced to other members within the same pool. • If the pool selection changes, then a new load-balancing selection will be made. • A change in the persistence key will not trigger a new load-balancing decision and therefore will appear not to be working. • LB::detach or OneConnect™ will cause a new load-balancing decision to be made on every request. • After each request, the pool is NOT reset to a default pool. Any previous pool selection is always the default. • Unless you explicitly set a pool in all conditions, you may believe that a request is not getting load-balanced correctly when OneConnect™ is not enabled. • OneConnect™ tracks the connection by the locally originating IP address. • Using a SNAT will affect the criteria for reuse of the server connection. • If you are using a SNAT with OneConnect™, it’s possible that two different clients’ requests will share the same server connection. • If this is not acceptable behavior, then disable reuse by either setting the source mask to none or using the ONECONNECT::reuse disable iRule command.

  31. Content Spooling Problem: TCP Overhead on Servers • There is overhead in breaking content apart (“chunking”) • Client and server negotiate TCP segmentation • The client forces more segmentation than is good for the server • The server is burdened with breaking content up into small pieces for client consumption. Solution: slurp up the server response, then spoon-feed clients. Benefit: increases server capacity by up to 15%.

  32. HTTP Compression

  33. HTTP Compression • Compression works most efficiently when rechunking responses. • An unchunked response must be completely buffered while being compressed, since the new Content-Length can’t be determined until compression is completed. This can introduce significant latency. • When compression is enabled, setting the profile setting “response selective chunk” or “response rechunk” is highly recommended.
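The buffering point above can be demonstrated directly: with Content-Length framing, the length of the compressed body is only known after the whole response has been compressed, whereas chunked transfer encoding lets compressed pieces go out as they are produced. A minimal sketch using Python's gzip module (the `chunked_gzip` helper is hypothetical, not a BIG-IP API):

```python
import gzip
import io

body = b"<html>" + b"x" * 5000 + b"</html>"

# Unchunked framing: the whole body must be compressed (i.e. buffered)
# before the Content-Length header can be written.
compressed = gzip.compress(body)
content_length = len(compressed)   # only known after full compression

# Chunked framing: stream compressed pieces as they become available,
# with no up-front length required.
def chunked_gzip(data, piece=1024):
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
        for i in range(0, len(data), piece):
            gz.write(data[i:i + piece])
            yield buf.getvalue()       # emit whatever is compressed so far
            buf.seek(0)
            buf.truncate()
    yield buf.getvalue()               # flush remainder plus gzip footer

chunks = list(chunked_gzip(body))
assert gzip.decompress(b"".join(chunks)) == body
```

The chunks reassemble into a valid gzip stream, which is why the rechunk settings avoid the full-response buffering (and its latency) on large compressed responses.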

  34. HTTP Cache

  35. What is RAM Cache? • RAM Cache is a cache of HTTP objects stored in the BIG-IP system’s RAM that are reused by subsequent connections to reduce the amount of load on the back-end servers • RAM Cache became available in 9.0.5 • RAM Cache is an additional module • It is part of the “Application Accelerator” package • It is integrated with the HTTP profile • HTTP caching is defined in RFC 2616

  36. What is RAM Cache used for? The RAM Cache feature reduces the traffic load on back-end servers by caching high-demand objects and static content, and by compressing content. • High-demand objects: useful if a site has periods of high demand for specific content. With RAM Cache configured, the content server only has to serve the content to the BIG-IP system once per expiration period. • Static content: useful if a site consists of a large quantity of static content such as CSS, JavaScript, or images and logos. • Content compression: for compressible data, the RAM Cache can store data for clients that can accept compressed data. When used in conjunction with the compression feature on the BIG-IP system, the RAM Cache takes stress off both the BIG-IP system and the content servers.

  37. What can RAM Cache cache? The RAM Cache feature is fully compliant with the cache specifications described in RFC 2616, Hypertext Transfer Protocol -- HTTP/1.1. This means that you can configure RAM Cache to cache the following content types: 200 (OK), 203 (Non-Authoritative Information), 206 (Partial Content), 300 (Multiple Choices), 301 (Moved Permanently), and 410 (Gone) responses. Responses to GET methods by default. Other HTTP methods for URIs specified in the URI include list or specified in an iRule. Content based on the User-Agent and Accept-Encoding values: the RAM Cache holds different content for Vary headers.

  38. What can RAM Cache cache? • By default, only responses to GET methods are cached. • Data that is marked as public can be cached. • Non-GET methods can be cached by including the URI in the include list, or the behavior can be overridden from an iRule. • Conditional GETs and HEADs can be answered based on cached data. • Range requests are passed up to the server.
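The cacheability rules on the last two slides reduce to a small decision function: RFC 2616's default-cacheable status codes, GET-by-default, Cache-Control overrides, and an operator-maintained include list for non-GET URIs. A hedged sketch (the function, constants, and example URI are illustrative, not the BIG-IP implementation):

```python
# Status codes cacheable by default under RFC 2616.
CACHEABLE_STATUS = {200, 203, 206, 300, 301, 410}

# Assumed operator-configured URI include list for non-GET caching.
URI_INCLUDE_LIST = {"/catalog/prices"}

def is_cacheable(method, uri, status, cache_control=""):
    """Decide whether a response may be stored in the cache."""
    directives = {d.strip().lower()
                  for d in cache_control.split(",") if d.strip()}
    if {"no-cache", "no-store", "private"} & directives:
        return False                 # server marked it uncacheable
    if status not in CACHEABLE_STATUS:
        return False
    if method == "GET":
        return True                  # GET responses cache by default
    return uri in URI_INCLUDE_LIST   # non-GET only if explicitly included

print(is_cacheable("GET", "/logo.png", 200))            # True
print(is_cacheable("GET", "/account", 200, "private"))  # False
print(is_cacheable("POST", "/catalog/prices", 200))     # True
```

Note that a real cache also varies the stored entry by User-Agent and Accept-Encoding, as the previous slide mentions; that dimension is omitted here for brevity.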

  39. What items will not cache? • The items we do not cache: • Private data specified by cache-control headers • no-cache forces caches to submit the request to the origin server for validation. • no-store tells the cache not to store the object. • must-revalidate forces the cache to obey freshness information. • HEAD, POST, PUT, DELETE, TRACE, and CONNECT methods • Any data that is marked as uncacheable by the server via its cache-control headers is not cached (this can be overridden via an iRule or by including the URI in the include list).

  40. RAM Cache Header Manipulation • Enabling the RAM Cache on a virtual server will cause the “HTTP/1.1” string in request headers to be rewritten to “HTTP/1.0”. • The server thinks it’s talking to a 1.0 client, for simplicity’s sake. • A “Connection: Keep-Alive” header will be added to allow persistent server connections. • All cookie headers are removed. • The following headers are hop-by-hop headers and will be modified accordingly when served: • Connection: • Keep-Alive: • Transfer-Encoding: • The Date header is added (this reflects the current time on the BIG-IP). • The Age header is added (this reflects the amount of time the document has been in the cache). • All other headers are considered end-to-end and stored as-is.
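The Date/Age stamping described above is simple to sketch: Date carries the proxy's current time, Age the number of seconds the object has sat in cache. A minimal illustration (the `cache_headers` helper is hypothetical, not a BIG-IP API):

```python
import time
from email.utils import formatdate

def cache_headers(stored_at, now=None):
    """Build the Date and Age headers for a cached response."""
    now = now if now is not None else time.time()
    return {
        "Date": formatdate(now, usegmt=True),  # current time on the proxy
        "Age": str(int(now - stored_at)),      # seconds since stored
    }

h = cache_headers(stored_at=1000.0, now=1060.0)
print(h["Age"])   # "60"
print(h["Date"])  # an RFC 1123 date ending in "GMT"
```

A downstream cache or client can combine Age with the response's freshness lifetime to decide whether the object is still fresh without contacting the origin.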

  41. RAM Cache Header manipulation • Header manipulation for Cached Content • The following headers prevent caching of an object: • Authenticate: • WWW-Authenticate: • Proxy-Authenticate:

  42. SSL Offload

  43. SSL Profiles Overview A profile is a collection of protocol, application, or other feature-specific attributes. One or more profiles are associated with a virtual server. A profile tells a virtual server how to process traffic destined for it, based on the profile's configuration. For example, the ability to process SSL traffic is configured using the SSL profile.

  44. SSL Profile Overview Diagram: on the client side, TCP, SSL, and HTTP hudfilters chain into the hudproxy (Recv Request / Send Request), which chains into the server-side TCP hudfilter. • Hudfilters: dev term for profiles. Modular filters chain together to customize traffic. • Hudproxy: dev term for the 9.x full-proxy engine software where LB, iRules, SNATs, etc. reside.

  45. SSL Profiles Overview TMM is a full-proxy engine that treats the client and server sides of a connection as completely independent. The proxy engine is considered a “connection broker” that relates these two independent connections. TMM uses profiles to adjust application- or feature-specific attributes.

  46. SSL Profiles Overview Profiles begin at the Transport layer and cover different aspects of the stack up through the Application layer. “Protocol” profiles reside at the Transport layer of the TCP/IP protocol stack. “Services” profiles reside at the Application layer. “SSL” profiles reside between the Application and Transport layers.

  47. DoS and SYN Flood Protection

  48. SYN Cookies: Concept The concept behind a SYN cookie is to help protect servers from denial of service against the initial TCP handshake, the “SYN flood”. In setting up a TCP handshake, the requesting client sends a SYN packet to the destination server. The server responds to the client’s SYN packet with a SYN/ACK to acknowledge the request and opens a TCP socket for the requester. This can create a resource issue for the server if a large number of unfulfilled SYN requests are directed at it, since the server responds by opening a new socket for each request and then waits for the client to send an ACK in response to the server’s SYN/ACK.

  49. SYN Cookies: Sockets & Resources A little about servers, sockets, and how a DoS using initial SYN packets impacts a server. When a server receives a SYN request initiating a TCP handshake, it responds to the initiating client with a SYN/ACK and opens a TCP socket for the client to continue the initiated TCP session. Servers have a limited number of sockets that can be open at one time. Once all of the available sockets have been placed in an active or wait state (waiting for the client to continue its TCP session with a subsequent ACK), the server can no longer accept new connections. This effectively stops access to the server.

  50. SYN Cookies: Concept Once the server has exhausted its resources on half-open sockets from an overwhelming number of SYN requests, it will begin to refuse new, legitimate SYN requests and stop honouring legitimate current client connections. To avoid this state, the idea of a SYN tracking system was invented; this is where the name “SYN cookies” was coined. The SYN cookie is quite different from a traditional HTTP-style cookie for several reasons. The cookie is not handed to the client as a header and is not presented by the client on subsequent connections; instead, the server encodes the connection information into the initial sequence number of its SYN/ACK, so the returning ACK itself proves the handshake is genuine without the server holding state for the half-open connection. In effect, the cookie is a self-generated, self-validating token for the client connection.
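The mechanism above can be sketched in a few lines: derive the SYN/ACK's initial sequence number from the connection 4-tuple, a coarse timestamp, and a server secret; when the final ACK arrives, recompute the value to validate it. This is a simplified illustration of the classic SYN-cookie idea (real implementations also encode the negotiated MSS); the names and parameters are assumptions, not F5's implementation.

```python
import hashlib
import struct
import time

SECRET = b"server-boot-secret"   # assumed per-boot secret

def syn_cookie(src_ip, src_port, dst_ip, dst_port, t=None):
    """Derive a 32-bit initial sequence number from the 4-tuple."""
    t = int(t if t is not None else time.time()) >> 6   # 64 s granularity
    msg = struct.pack("!4sH4sHI", src_ip, src_port, dst_ip, dst_port, t)
    digest = hashlib.sha256(SECRET + msg).digest()
    return struct.unpack("!I", digest[:4])[0]

def ack_is_valid(src_ip, src_port, dst_ip, dst_port, ack, t=None):
    """The client acknowledges ISN+1; recomputing the cookie validates
    the handshake with no stored half-open connection state."""
    return ack - 1 == syn_cookie(src_ip, src_port, dst_ip, dst_port, t)

isn = syn_cookie(b"\x0a\x00\x00\x01", 40000, b"\xc0\xa8\x01\x01", 443, 1000)
print(ack_is_valid(b"\x0a\x00\x00\x01", 40000,
                   b"\xc0\xa8\x01\x01", 443, isn + 1, 1000))   # True
```

Because the spoofed SYNs of a flood never produce a valid final ACK, the server spends no socket resources on them, which is exactly the exhaustion scenario the previous slides describe.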