1. 1 Low-Latency Networks for Financial Applications Arista Networks, Inc.
Andy Bechtolsheim
September 14, 2009
2. 2 Wall Street Key Issues Network fabric is an integral element of the solution and key to business success
3. 3 Latency: The Race to Zero. Every Microsecond Counts™
4. 4 Use Case: Market Data Since 2007, end-to-end latency has been reduced from many hundreds of microseconds to tens of microseconds
This was achieved through a combination of: (1) faster CPUs, (2) faster Network Interface Controllers, (3) accelerated middleware appliances, (4) ultra-low-latency switches, and (5) a lot of tuning
This reduction in latency was achieved while transaction rates increased dramatically, by as much as a factor of 10 (a minimal timing sketch follows)
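End-to-end figures like these are normally produced with purpose-built benchmark harnesses; as a rough illustration only, the sketch below times a small UDP round trip from the application and takes half the RTT as a one-way estimate. The echo address (192.0.2.10), port, and sample count are placeholders, not values from the slides.

```c
/* Minimal round-trip probe: send a small UDP packet to an echo responder
 * and take half the measured RTT as a one-way latency estimate.
 * The address, port, and sample count below are placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <time.h>
#include <unistd.h>

static double now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e6 + ts.tv_nsec / 1e3;
}

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in peer = { .sin_family = AF_INET, .sin_port = htons(7777) };
    inet_pton(AF_INET, "192.0.2.10", &peer.sin_addr);
    connect(fd, (struct sockaddr *)&peer, sizeof peer);

    char msg[64] = "tick", buf[64];
    const int samples = 10000;
    double total = 0.0;

    for (int i = 0; i < samples; i++) {
        double t0 = now_us();
        send(fd, msg, sizeof msg, 0);
        recv(fd, buf, sizeof buf, 0);       /* blocks until the echo returns */
        total += (now_us() - t0) / 2.0;     /* half-RTT ~ one-way latency    */
    }
    printf("average one-way estimate: %.1f us over %d samples\n",
           total / samples, samples);
    close(fd);
    return 0;
}
```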
5. 5 Market Data Example
6. 6 Proven Messaging Performance
7. 7 Minimizing Latency Reduce the number of software layers
Application-application send-receive
Avoid the operating system if possible
Minimize the number of context switches (see the sketch after this list)
Use ultra-low-latency network switches
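A minimal sketch of the "spin instead of sleep" idea behind the two points above, assuming a plain Linux UDP socket: the receive loop polls a non-blocking socket so the thread never blocks in the kernel and scheduler-driven context switches are avoided. This is not any vendor's kernel-bypass stack, just the basic pattern; the port number and buffer size are arbitrary.

```c
/* Busy-polling receive loop: the socket never blocks, the thread spins on
 * recv() instead of sleeping in the kernel, trading CPU for fewer context
 * switches and lower wake-up latency.  Port and buffer size are arbitrary. */
#include <errno.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_addr.s_addr = htonl(INADDR_ANY),
                                .sin_port = htons(14001) };
    bind(fd, (struct sockaddr *)&addr, sizeof addr);

    char buf[2048];
    for (;;) {
        /* MSG_DONTWAIT returns immediately; EAGAIN means "nothing yet". */
        ssize_t n = recv(fd, buf, sizeof buf, MSG_DONTWAIT);
        if (n < 0) {
            if (errno == EAGAIN || errno == EWOULDBLOCK)
                continue;               /* keep spinning on an empty socket */
            break;                      /* real error: give up              */
        }
        /* ... hand the datagram straight to the consuming code ... */
    }
    close(fd);
    return 0;
}
```

In practice this pattern is usually combined with pinning the polling thread to a dedicated core and with the kernel-bypass libraries shipped by the NIC vendors referenced on the later slides.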
8. 8 Switch Latency Comparisons
9. 9 End-to-end Latency
10. 10 Partnering with all leading NIC Vendors
11. 11 Working with all leading messaging vendors
12. 12 Latency Conclusions Minimizing latency requires optimizing all levels of the stack
Arista’s switch latency is 1/5th to 1/50th of other “low-latency” Ethernet switches
Arista is delivering this value in partnership with all leading NIC and messaging vendors
Ethernet switch latency is no longer a limitation
13. 13 Scaling the Network
14. 14 Cost-Efficient Scalability Applications require wire-speed performance for fabrics with 1000s of servers
Today the cost of fabric bandwidth increases dramatically as the size of the fabric increases
This needs to change in order to build large-scale, high-performance applications
15. 15 The Cost of Bandwidth
16. 16 Flat L2 Fabric Design Core Switch
Hundreds of 10G ports
Wire-speed architecture
Leaf Switch
24 to 48 10G ports
4 or more 10G uplinks
Overall Capacity
> 10,000 ports
> 10 Tbps throughput (see the sizing sketch below)
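A back-of-the-envelope sizing sketch for a leaf/spine fabric like the one above. All inputs are illustrative assumptions, not published configurations: 48-port leaves with 8 uplinks each and roughly 2,000 10G ports across the core/spine, chosen only to land near the slide's ">10,000 ports / >10 Tbps" figures.

```c
/* Back-of-the-envelope fabric sizing with assumed, illustrative inputs:
 * 48-port leaf switches with 8 uplinks each, ~2,000 10G core/spine ports. */
#include <stdio.h>

int main(void)
{
    const int leaf_ports      = 48;    /* 10G ports per leaf switch       */
    const int uplinks         = 8;     /* 10G uplinks from each leaf      */
    const int core_ports      = 2048;  /* 10G ports across the core/spine */
    const int port_speed_gbps = 10;

    int server_ports_per_leaf  = leaf_ports - uplinks;            /* 40     */
    int leaves                 = core_ports / uplinks;            /* 256    */
    int total_server_ports     = leaves * server_ports_per_leaf;  /* 10,240 */
    double oversubscription    = (double)server_ports_per_leaf / uplinks;
    double core_bisection_tbps = core_ports * port_speed_gbps / 1000.0;

    printf("leaves: %d, server-facing ports: %d\n", leaves, total_server_ports);
    printf("core/spine bandwidth: %.1f Tbps, leaf oversubscription %.1f:1\n",
           core_bisection_tbps, oversubscription);
    return 0;
}
```

Raising the uplink count to match the server-facing port count makes each leaf non-blocking (1:1), which is the property the later Fabric Scaling Summary slide calls for.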
17. 17 MLAG (Multi-Chassis LAG)
18. 18 MLAG (Multi-Chassis LAG)
19. 19 MLAG Advantages Leverages existing LAG protocol
100% compatible with IEEE 802.3ad LACP
Standard protocol, no vendor lock-in
Millisecond Failover
Does not rely on spanning tree
Yet compatible with spanning tree
Does not require L3 Routing
Enables scalable active/active L2 networks (see the host-side view below)
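Because MLAG presents itself to an attached server as an ordinary IEEE 802.3ad port-channel, nothing special is needed on the host: the standard Linux bonding driver in 802.3ad mode is enough. The sketch below assumes such a host with a bond0 interface already configured and simply prints its LACP state.

```c
/* From the server's point of view an MLAG pair is just an ordinary
 * 802.3ad (LACP) bond, so the standard Linux bonding driver applies.
 * Assumes a host with a "bond0" interface already set up in 802.3ad mode;
 * this program only dumps its state. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/net/bonding/bond0", "r");
    if (!f) {
        perror("bond0 not configured");
        return 1;
    }
    char line[256];
    while (fgets(line, sizeof line, f))
        fputs(line, stdout);   /* shows mode, LACP partner, slave link states */
    fclose(f);
    return 0;
}
```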
20. 20 Fabric Scaling Summary Bandwidth per server is determined by the non-blocking core/spine switching bandwidth
Achieving low latency and minimizing jitter requires a non-blocking core fabric design
MLAG is a simple and effective solution for scalable and reliable network design
21. 21 Arista EOS: Extensible Operating System
22. 22 Arista EOS Modular Architecture
23. 23 Arista EOS Modularity
24. 24 Arista EOS Extensibility
25. 25 Citrix VPX Virtual Appliance Runs inside Arista Switch
Load-balancing with Application Security
Accelerates Web Application Performance
26. 26 EOS Extensibility: What is Next? Any application that runs on Linux (see the sketch after this list)
Third party or customer developed
APIs to the network switch state
Control and Datapath Interfaces
Cloud Flow Interface
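As a hint of what "any application that runs on Linux" can mean in practice, the sketch below is a plain Linux program that could run alongside the switch software and poll a port's receive counter through ordinary sysfs. The interface name et1 is an assumption, and the richer switch-state APIs the slide refers to are not shown here.

```c
/* Plain Linux program polling a port's receive-byte counter via sysfs
 * once per second.  The interface name "et1" is an assumption; the
 * dedicated EOS state APIs are outside the scope of this sketch. */
#include <stdio.h>
#include <unistd.h>

static long long read_counter(const char *path)
{
    long long v = -1;
    FILE *f = fopen(path, "r");
    if (f) {
        fscanf(f, "%lld", &v);
        fclose(f);
    }
    return v;
}

int main(void)
{
    const char *path = "/sys/class/net/et1/statistics/rx_bytes";
    long long prev = read_counter(path);
    for (int i = 0; i < 10; i++) {
        sleep(1);
        long long cur = read_counter(path);
        printf("et1 rx: %lld bytes/s\n", cur - prev);
        prev = cur;
    }
    return 0;
}
```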
27. 27 Arista Product Summary Ultra low-latency switch architecture
Highest-density 10G switches in the industry
Roadmap to 40 and 100 Gigabit Ethernet
Support for all 10/40/100G Physical Layers
Extensible EOS Software Architecture
28. 28