
DETER Testbed Status






Presentation Transcript


  1. DETER Testbed Status. Kevin Lahey (ISI), Anthony D. Joseph (UCB). January 31, 2006.

  2. Current PC Hardware
  • ISI: 64 pc3000 (Dell 1850); 11 pc2800 (Sun V65x); 64 pc733 (IBM Netfinity 4500R)
  • UCB: 32 bpc3000 (Dell 1850); 32 bpc3060 (Dell 1850); 32 bpc2800 (Sun V60x)
  • Approximately 1/3 of nodes are currently down for repair or reserved for testing

  3. Special Hardware
  • UCB Minibed: 8-32 HP DL360G2 (dual 1.4GHz/512KB PIII)
  • ISI: 4 Juniper M7i routers; 2 Juniper IDP-200 IDS; 1 CloudShield 2200; 2 McAfee IntruShield 2600

  4. Current Switches
  • ISI: 1 Cisco 6509 (336 GE ports); 7 Nortel 5510-48T (48 GE ports each); gigabit switch interconnects
  • UCB: 1 Foundry FastIron 1500 (224 GE ports); 10 Nortel 5510-48T (48 GE ports each); gigabit switch interconnects
  • UCB Minibed: 6 Nortel 5510-48T (48 GE ports each)

  5. Current Configuration (diagram): at ISI, the pc733s, pc2800s, and pc3000s hang off a Cisco 6509 and a Nortel 5510, with the Junipers also attached; at UCB, the bpc2800s, bpc3000s, and bpc3060s hang off a Foundry FastIron 1500 (expandable) and Nortel 5510s; the two sites are joined by a 1Gb VPN.

  6. New Hardware for 2006
  • ISI: 64 Dell 1850s like the previous pc3000s (dual 3GHz Xeons with 2GB RAM), but with 2MB cache instead of 1MB and 6 interfaces instead of 5; 32 IBM x330s (dual 1GHz Pentium IIIs with 1GB RAM)
  • UCB: 96+ TBD nodes, depending on overhead recovery; full boss and users nodes: 2 HP DL360s (dual 3.4GHz/2MB-cache Xeon, 800MHz FSB, 2GB RAM) with HP Modular Smart Array 20s (12 x 500GB SATA drives, 6TB)
  • Combined: Nortel 5510-48T and 10Gb-capable Nortel 5530-24T switches

  7. New ISI Configuration (diagram): pc733s, pc1000s, pc2800s, and pc3000s attached to Nortel 5510s and the Cisco 6509; inter-switch links at 1Gb (10Gb later), with the Junipers attached over a 2 x 10Gb link.

  8. DETER Clusters (figure showing the ISI and UCB clusters).

  9. Progress (1)
  • People: new ops guy (Kevin Lahey @ ISI) getting up to speed
  • Reliability: daily backups for users and boss, one-time tarballs for all other nodes; more robust Nortel switch configuration; ISI or UCB users/boss machines can run either or both clusters
  • Security: “Panic Switch” to disconnect from the Internet

  10. Progress (2)
  • Emulab software: unified boot image for -com1 and -com2 machines; DNS servers and IP addresses in the database; Click image with polling
  • Incorporated the state of Emulab as of about 9/30/05; debugged at UCB, then installed at ISI
  • Firewall and experimental nodes must reside on the same switch
  • Release/update procedure is still problematic; for discussion in the testbed breakout

  11. In-Progress (1)
  • Reliability: automating fail-over between clusters (DB mirroring / reconciliation scripts)
  • Security: automatic disk wiping on a project/experiment basis; automating leak testing for the control/experiment networks
  • Performance: redoing the way emulab-in-emulab handles the control net (saves one experimental-node interface); improving the performance of the VPN/IPsec links; supporting a local tftp/frisbee server at UCB

  12. In-Progress (2)
  • Federation: supporting federated experiments across separately administered Emulabs using emulab-in-emulab; a netgraph module to rewrite 802.1q tags as they pass through a VPN tunnel (such as the link between Berkeley and ISI)
  • Configuration: incorporating the EMIST setup/visualization tools into the Dashboard
  • New Emulab hardware types: supporting IBM BladeCenters (currently testing with a 12x2 BC); routers as first-class objects
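The tag rewriting the netgraph module performs can be sketched in user space. The real module operates on kernel mbufs, but the frame manipulation itself is just the standard 802.1Q header layout; this hypothetical helper swaps the 12-bit VLAN ID while preserving the priority bits:

```python
import struct

TPID_8021Q = 0x8100  # EtherType value marking an 802.1Q tag

def rewrite_vlan_id(frame: bytes, new_vid: int) -> bytes:
    """Rewrite the VLAN ID of an 802.1Q-tagged Ethernet frame.

    The tag follows the two 6-byte MAC addresses: a 2-byte TPID
    (0x8100) and a 2-byte TCI (PCP:3 bits, DEI:1 bit, VID:12 bits).
    Only the VID is replaced; PCP and DEI are carried through.
    """
    if len(frame) < 18:
        raise ValueError("frame too short to carry an 802.1Q tag")
    tpid, tci = struct.unpack_from("!HH", frame, 12)
    if tpid != TPID_8021Q:
        raise ValueError("frame is not 802.1Q-tagged")
    new_tci = (tci & 0xF000) | (new_vid & 0x0FFF)
    return frame[:14] + struct.pack("!H", new_tci) + frame[16:]
```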

  13. Network Topology
  • Open hypothesis: inter-switch links may be a bottleneck (Foundry/Cisco-to-Nortel and Nortel-to-Nortel)
  • Adding multiple 10GE interconnects
  • Exploring alternate node-interconnection topologies, e.g., connecting each node to multiple switches
  • Potential issue: assign is a very complex program; there may be all sorts of gotchas lurking out there

  14. Other New Nodes on the Horizon
  • Secure64
  • NetFPGA2
  • pc3000s with 10 interfaces
  • Research Accelerator for MultiProcessing (RAMP): 1,000 200-300 MHz FPGA-based CPUs; some number of elements devoted to FSM traffic generators; many 10GE I/O ports; ~$100K for an 8U box at 1.5kW
