ScaleNet: A Platform for Scalable Network Emulation


  1. ScaleNet: A Platform for Scalable Network Emulation by Sridhar Kumar Kotturu Under the Guidance of Dr. Bhaskaran Raman

  2. Outline • Introduction • Related Work • Design and Implementation of ScaleNet • Experimental Results • Conclusions • Future Work

  3. Introduction • Why protocol development environments? • Rapid growth of the Internet and evolution of network protocols • Types of environments • Simulation, real deployed networks, and emulation • Why emulation? • In simulation, we may not model the desired setting exactly • Real deployed networks are difficult to reconfigure, and their behaviour is not easily reproducible • Existing emulation platforms • Dummynet, NIST Net, Netbed, etc. • ScaleNet • An emulation platform for creating large-scale networks using limited resources • Creates several virtual hosts on a single physical machine

  4. Challenges in building ScaleNet • Creating multiple virtual hosts, assigning a routing table to each virtual host, and associating applications with virtual hosts • Routing between the different IP aliases

  5. Outline • Introduction • Related Work • Design and Implementation of ScaleNet • Experimental Results • Conclusions • Future Work

  6. Related Work • Dummynet • Built with modifications to the FreeBSD network stack • Emulates the effects of finite queues, bandwidth limitations, and delays • Cannot emulate complex topologies • The implementation exists only between TCP and IP • Cannot apply the effects to selected data flows • Cannot apply effects such as packet duplication or delay variation • Exists only for FreeBSD

  7. Related Work (Cont..) • NIST Net • Emulates the behaviour of any network at a particular router, applying that network's behaviour to the packets passing through it • Does not create virtual hosts and does not perform routing • Designed to be scalable in the number of emulation entries and the amount of bandwidth it can support

  8. Related Work (Cont..) • FreeBSD Jail Hosts • Creates several virtual hosts on a physical machine • Routing is not possible • Theoretical upper limit of 24 jail hosts per physical machine • Netbed • An extension of Emulab • Automatically maps virtual resources onto the available physical resources • Uses the FreeBSD jail functionality to create several virtual hosts • Scales up to 20 virtual hosts per physical machine

  9. Related Work (Cont..) • User Mode Linux • A Linux kernel that can be run as a normal user process • Useful for kernel development and debugging • Can create arbitrary network topologies • Runs applications at about a 20% slowdown compared to the host system • Incurs a lot of extra overhead per virtual host, since an entire kernel image is used to create each one

  10. Related Work (Cont..) • Alpine • Moves an unmodified FreeBSD network stack into a user-level library • Uses libpcap to receive packets and a raw socket to send outgoing packets, and uses a firewall rule to prevent the kernel from processing packets destined for Alpine • Breaks down if the network is too busy or the machine is slow: the kernel allocates a fixed buffer for queueing received packets, and if the application is not fast enough to process them, the queue overflows and subsequent packets are dropped

  11. Comparison of Emulation Platforms

  12. Outline • Introduction • Related Work • Design and Implementation of ScaleNet • Experimental Results • Conclusions • Future Work

  13. Netfilter Hooks [Figure: the five hook points in the Linux IPv4 stack: NF_IP_PRE_ROUTING, NF_IP_LOCAL_IN, NF_IP_FORWARD, NF_IP_LOCAL_OUT, NF_IP_POST_ROUTING]

  14. Netfilter Hooks (Cont..)
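  These two slides showed the hook points as figures (not reproduced in the transcript). For reference, here is a minimal, hypothetical sketch of how a kernel module of ScaleNet's era (Linux 2.4/early 2.6, which is what the NF_IP_* names imply) registers a handler at one of these hooks; this is the standard netfilter registration pattern, not the thesis code:

```c
#include <linux/module.h>
#include <linux/init.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
#include <linux/skbuff.h>

/* Hook function: called for every packet traversing NF_IP_LOCAL_OUT.
 * The 2.4-era signature passes the sk_buff by double pointer. */
static unsigned int scalenet_out_hook(unsigned int hooknum,
                                      struct sk_buff **pskb,
                                      const struct net_device *in,
                                      const struct net_device *out,
                                      int (*okfn)(struct sk_buff *))
{
        /* Inspect or rewrite *pskb here. NF_ACCEPT lets the packet
         * continue, NF_DROP discards it, NF_STOLEN claims ownership. */
        return NF_ACCEPT;
}

static struct nf_hook_ops scalenet_out_ops = {
        .hook     = scalenet_out_hook,
        .pf       = PF_INET,
        .hooknum  = NF_IP_LOCAL_OUT,   /* outgoing path (slide 18) */
        .priority = NF_IP_PRI_FIRST,   /* run before other hooks */
};

static int __init scalenet_init(void)
{
        return nf_register_hook(&scalenet_out_ops);
}

static void __exit scalenet_exit(void)
{
        nf_unregister_hook(&scalenet_out_ops);
}

module_init(scalenet_init);
module_exit(scalenet_exit);
MODULE_LICENSE("GPL");
```

  ScaleNet's outgoing-path processing hooks NF_IP_LOCAL_OUT in this way, and the incoming-path processing hooks NF_IP_PRE_ROUTING (slides 18 and 20).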

  15. Design and Implementation of ScaleNet • NIST Net • Applies bandwidth limitation, delay, etc. • Designed to be scalable in the number of emulation entries and the amount of bandwidth it can support • Linux • NIST Net exists only for Linux • Linux is popular, and good documentation is available • Kernel Modules • Modules can be loaded and unloaded dynamically • No need to rebuild and reboot the kernel

  16. ScaleNet Architecture [Architecture diagram: at user level, the bind call and the route command go through pidip_ioctl.c, rt_init.c, and ioctl.c, and PID-IP values are passed to the kernel over a character device (chardev). At kernel level, the pid_ip and syscall_hack modules, per-virtual-host routing tables, the IP-IP-out and IP-IP-in modules, NIST Net, and dst_entry_export (which exports the dst_entry object) cooperate.]

  17. Illustration of packet passing between virtual hosts [Figure: a packet travelling from virtual host 1 to virtual host 3 via virtual host 2. The original packet (Src IP 1, Dst IP 3, data) is wrapped by IP-IP-out in an extra IP header (Src IP 1, Dst IP 2) and passed through NIST Net; at virtual host 2, IP-IP-in rewrites the extra header to (Src IP 2, Dst IP 3) and the packet passes through NIST Net again; at virtual host 3, IP-IP-in strips the extra header, leaving the original packet.]

  18. Processing of Outgoing Packets [Flowchart]
  • Capture the packet at the netfilter hook NF_IP_LOCAL_OUT
  • If the packet's source IP does not belong to a local virtual host, return NF_ACCEPT
  • If no nexthop is available, return NF_DROP
  • Otherwise, create an extra IP header with Dst IP <- nexthop and Src IP <- current virtual host
  • If the nexthop is on the same machine, set Dst MAC <- nexthop MAC

  19. Processing of Outgoing Packets (Cont..) [Flowchart, continued; a code sketch follows]
  • If space is available at the beginning of the sk_buff, add the extra IP header there
  • Otherwise, if space is available at the end of the sk_buff, copy the original IP header toward the end and place the extra IP header at the beginning of the sk_buff
  • Otherwise, create a new sk_buff with extra space and add the extra IP header followed by the rest of the packet
  • Return NF_ACCEPT
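  A minimal sketch of the fast path above, assuming the 2.4-era sk_buff layout (skb->nh.iph); the function name and error convention are hypothetical, not the thesis code:

```c
#include <linux/ip.h>
#include <linux/in.h>
#include <linux/skbuff.h>
#include <linux/string.h>
#include <net/checksum.h>

/* Prepend the extra (outer) IP header when headroom already exists. */
static int prepend_outer_header(struct sk_buff *skb,
                                __u32 outer_src, __u32 outer_dst)
{
        struct iphdr *outer;

        /* fast path from the flowchart: enough space at the
           beginning of the sk_buff? */
        if (skb_headroom(skb) < sizeof(struct iphdr))
                return -1;  /* caller falls back to the slower branches */

        outer = (struct iphdr *)skb_push(skb, sizeof(struct iphdr));
        memset(outer, 0, sizeof(*outer));
        outer->version  = 4;
        outer->ihl      = 5;
        outer->ttl      = 64;
        outer->protocol = IPPROTO_IPIP;     /* payload is the inner IP pkt */
        outer->saddr    = outer_src;        /* Src IP <- current V.H. */
        outer->daddr    = outer_dst;        /* Dst IP <- nexthop */
        outer->tot_len  = htons(skb->len);  /* len now includes new header */
        outer->check    = ip_fast_csum((unsigned char *)outer, outer->ihl);

        skb->nh.iph = outer;                /* 2.4-era network-header field */
        return 0;
}
```

  When the headroom check fails, the flowchart's other two branches shuffle the existing buffer or allocate a new sk_buff with extra space instead.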

  20. Processing of Incoming Packets [Flowchart]
  • Capture the packet at the netfilter hook NF_IP_PRE_ROUTING
  • If the packet's destination does not belong to a local virtual host, return NF_ACCEPT
  • Remove the NIST Net marking
  • If the packet has reached its final destination, remove the outer IP header
  • Otherwise, if no nexthop is available, return NF_DROP

  21. Processing of Incoming Packets (Cont..) [Flowchart, continued; a code sketch follows]
  • Change the fields of the extra IP header: Dst IP <- nexthop, Src IP <- current virtual host
  • If the nexthop is on the same machine, set Dst MAC <- nexthop MAC
  • Call dev_queue_xmit()
  • Return NF_STOLEN
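  A hedged sketch of this forwarding step, again assuming 2.4-era field names; the helper's name and the way the nexthop is obtained are assumptions:

```c
#include <linux/ip.h>
#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/netfilter.h>
#include <net/checksum.h>

/* Rewrite the outer header for the next hop and re-inject the packet. */
static unsigned int forward_to_nexthop(struct sk_buff *skb,
                                       __u32 cur_vh, __u32 nexthop)
{
        struct iphdr *outer = skb->nh.iph;  /* extra (outer) IP header */

        outer->saddr = cur_vh;              /* Src IP <- current V.H. */
        outer->daddr = nexthop;             /* Dst IP <- nexthop */
        outer->check = 0;
        outer->check = ip_fast_csum((unsigned char *)outer, outer->ihl);

        /* if the nexthop is on the same machine, the destination MAC is
           also rewritten (omitted here) before re-queueing the frame */
        dev_queue_xmit(skb);
        return NF_STOLEN;                   /* the module now owns the skb */
}
```

  Returning NF_STOLEN tells netfilter that the module has taken ownership of the sk_buff, so the stack must not process or free it again.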

  22. Virtual Hosts • Creating multiple virtual hosts • Assign different IP aliases to the Ethernet card and treat each IP alias as a virtual host • #ifconfig eth0:1 10.0.0.1 • Assign a routing table to each virtual host, according to the topology of the network

  23. Association between Applications and Virtual Hosts • A wrapper program is associated with each virtual host; it acts just like a shell • All application programs belonging to a virtual host are executed in the corresponding wrapper shell • To find the virtual host a process belongs to, we traverse its parent, grandparent, and so on until we reach some wrapper program, which corresponds to a virtual host (see the sketch below)
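  A minimal sketch of that upward traversal, assuming a pid_ip_lookup() helper over the PID-IP table from the architecture slide (the helper name is hypothetical) and 2.4-era task_struct field names:

```c
#include <linux/sched.h>
#include <linux/types.h>

/* Assumed helper over the chardev-fed PID-IP table; returns 0 when the
   pid is not a registered wrapper shell. */
extern __u32 pid_ip_lookup(pid_t pid);

/* Walk up: process -> parent -> grandparent ... until a wrapper shell
   registered in the PID-IP table is found. */
static __u32 vhost_ip_of(struct task_struct *task)
{
        while (task && task->pid != 1) {
                __u32 ip = pid_ip_lookup(task->pid);
                if (ip)
                        return ip;          /* wrapper found: its V.H. IP */
                task = task->p_pptr;        /* parent (renamed `parent`
                                               in later kernels) */
        }
        return 0;                           /* not inside any virtual host */
}
```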

  24. System Call Redirection • The bind and route system calls are hooked • Whenever a process tries to access or modify a routing table, we first find the process's virtual host as explained on the previous slide, and the system call is redirected to act on that virtual host's routing table instead of the system's routing table (a sketch of the hooking pattern follows)
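  A hedged sketch of the classic 2.4-era system-call interposition this implies. On 32-bit x86, bind() arrives through sys_socketcall, so that is the entry replaced here; locating sys_call_table and SMP locking are glossed over, and the rewrite itself is only outlined, since the thesis code is not in the transcript:

```c
#include <linux/linkage.h>
#include <linux/net.h>      /* SYS_BIND: sys_socketcall demux constant */
#include <linux/unistd.h>   /* __NR_socketcall */

extern void *sys_call_table[];  /* locating this table is itself a hack
                                   on kernels that do not export it */

static asmlinkage long (*orig_socketcall)(int call, unsigned long *args);

static asmlinkage long scalenet_socketcall(int call, unsigned long *args)
{
        if (call == SYS_BIND) {
                /* look up the caller's virtual host (see vhost_ip_of on
                   the previous slide) and rewrite the bind address to
                   that virtual host's IP before passing the call on */
        }
        return orig_socketcall(call, args);
}

static void hook_socketcall(void)
{
        /* save the original entry, then splice in the wrapper */
        orig_socketcall = sys_call_table[__NR_socketcall];
        sys_call_table[__NR_socketcall] = scalenet_socketcall;
}
```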

  25. Outline • Introduction • Related Work • Design and Implementation of ScaleNet • Experimental Results • Conclusions • Future Work

  26. Experimental Results • NIST Net bandwidth limitation tests • Tests on the emulation platform with 20 virtual hosts per physical machine • Tests on the emulation platform with 50 virtual hosts per physical machine

  27. NIST Net bandwidth limitation tests • Tests are performed using both TCP and UDP packets • Tests are performed for two cases: • Client and server running on the same machine • Client and server running on different machines

  28. TCP packets. Client and server on the same machine • The measured throughput exceeds the applied bandwidth limit

  29. TCP packets. Client and server on the same machine. MTU of loopback packets changed to 1480 bytes • The results now match the expected values

  30. UDP Packets • The results match the expected values

  31. UDP Packets (Contd..) • Sending 17400 packets, each of size 1000 bytes • The measured throughput exceeds the applied bandwidth limit

  32. Creating 20 Virtual Hosts per System [Figure: a network topology consisting of 20 nodes per system]

  33. Creating 20 Virtual Hosts per System (Cont..) • Sending 40000 TCP packets from 10.0.1.1 to 10.0.4.10. Each link has a 10ms delay.

  34. Creating 20 Virtual Hosts per System (Cont..) • The TCP window size is 65535 bytes • There are 39 links between 10.0.1.1 and 10.0.4.10, each with a 10ms delay in the forward direction and no delay in the backward direction. For a 100 Mbps link, the transmit time for 65535 bytes is around 5ms, so the RTT is about 395ms • The maximum possible throughput is therefore 65535 bytes / 395ms, i.e. about 165911 bytes/sec • We measure a throughput of around 154000 bytes/sec; adding the header overhead (9240 bytes) gives 163240 bytes/sec, close to the bound (worked out below), so the emulation platform scales well for 20 virtual hosts per physical machine
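  For reference, the bound used above is the standard one-window-per-round-trip calculation:

$$\mathrm{RTT} \approx 39 \times 10\,\mathrm{ms} + 5\,\mathrm{ms} = 395\,\mathrm{ms}, \qquad T_{\max} = \frac{W}{\mathrm{RTT}} = \frac{65535\ \mathrm{bytes}}{0.395\ \mathrm{s}} \approx 165911\ \mathrm{bytes/s}$$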

  35. Creating 20 Virtual Hosts per System (Cont..) • Sending 40000 UDP packets from 10.0.1.1 to 10.0.4.10. Each link has a 10ms delay. • The results match the expected values

  36. Creating 20 Virtual Hosts per System (Cont..) • Sending 40000 TCP packets from 10.0.1.1 to 10.0.4.10. Each link has a 5ms delay. • The results match the expected values

  37. Creating 20 Virtual Hosts per System (Cont..) • Sending 40000 UDP packets from 10.0.1.1 to 10.0.4.10. Each link has a 5ms delay. • The receive buffer at the destination drops some packets for the bandwidth setting of 1179648

  38. Creating 50 Virtual Hosts per System [Figure: a network topology consisting of 50 nodes per system]

  39. Creating 50 Virtual Hosts per System (Cont..) • Sending 40000 TCP packets from 10.0.1.1 to 10.0.4.25. Each link has a 10ms delay. • The results match the expected values

  40. Creating 50 Virtual Hosts per System (Cont..) • Sending 40000 UDP packets from 10.0.1.1 to 10.0.4.25. Each link has a 10ms delay. • The receive buffer at the destination drops some packets for the bandwidth setting of 1048576

  41. Creating 50 Virtual Hosts per System (Cont..) • Sending 40000 TCP packets from 10.0.1.1 to 10.0.4.25. Each link has a 5ms delay. • The results match the expected values

  42. Creating 50 Virtual Hosts per System (Cont..) • Sending 40000 UDP packets from 10.0.1.1 to 10.0.4.25. Each link has a 5ms delay. • The receive buffer at the destination drops some packets for the bandwidth setting of 1048576

  43. Outline • Introduction • Related Work • Design and Implementation of ScaleNet • Experimental Results • Conclusions • Future Work

  44. Conclusions • Created an emulation platform that emulates large-scale networks using limited physical resources • Several virtual hosts are created on each physical machine, and applications are associated with virtual hosts • Routing tables are set up for each virtual host • With this emulation platform, any kind of network protocol may be tested • Performance analysis and debugging can be done • In F. Hao et al. 2003, a BGP simulation is done using 11806 AS nodes; with ScaleNet, this could be done using about 240 systems. The OSPF protocol and peer-to-peer networks can be studied similarly.

  45. Outline • Introduction • Related Work • Design and Implementation of ScaleNet • Experimental Results • Conclusions • Future Work

  46. Future Work • Automatic mapping of a user-specified topology onto the physical resources • Identifying and redirecting other system calls • Locking of shared data structures for SMP machines • Avoiding changing the MAC header • Analyzing the memory and processing requirements by running a networking protocol • Fixing occasional crashes during initialization of the emulation platform • Graphical user interface

  47. Thank You
