
Utahhardware

Hardware Overview, "Emulab Classic"


Test Nodes

  • 16 d820 PC nodes (pc601-pc616) consisting of:
    • Four 2.2 GHz 64-bit 8-Core Xeon "Sandy Bridge" processors, 7.20 GT/s bus speed, 16 MB cache, VT-x and VT-d support
    • Based on the Dell Poweredge R820
    • 128 GB 1333 MHz DDR3 RAM (8 x 16GB modules)
    • 4 Broadcom NetXtreme BCM5720C GbE NICs built into the motherboard (only one in use)
    • 2 dual-port Intel X520 PRO/10GbE PCI-Express NICs
    • 250GB 7200 rpm SATA disk, 6 x 600GB 10000 rpm SAS disks
    Important Notes about the d820s

  • 160 d710 PC nodes (pc401-pc560) consisting of:
    • 2.4 GHz 64-bit Quad Core Xeon E5530 "Nehalem" processor, 5.86 GT/s bus speed, 8 MB L3 cache, VT support
    • Based on the Dell Poweredge R710
    • 12 GB 1066 MHz DDR3 RAM (6 x 2GB modules)
    • 4 Broadcom NetXtreme II BCM5709 rev C GbE NICs built into the motherboard
    • 2 Broadcom NetXtreme II BCM5709 rev C GbE NICs on a dual-port PCIe x4 expansion card (one NIC is the control net)
    • 2 x 250GB 7200 rpm SATA disks
    Important Notes about the d710s

  • 160 pc3000 PC nodes (pc201-pc360) consisting of:
    • 3.0 GHz 64-bit Xeon processors, 800 MHz FSB
    • Based on the Dell Poweredge 2850
    • 2 GB 400 MHz DDR2 RAM
    • Multiple PCI-X 64/133 and 64/100 busses
    • 6 10/100/1000 Intel NICs spread across the busses (one NIC is the control net).
    • 2 x 146GB 10,000 RPM SCSI disks
    • CD-ROM drive (GCR-82840N)
    Important Notes about the pc3000s

  • 128 pc850 PC nodes (pc41-pc168), consisting of:
    • 850MHz Intel Pentium III processors.
    • Based on the Intel ISP1100 1U server platform (old reliable BX chipset).
    • 512MB PC133 ECC SDRAM.
    • 5 Intel EtherExpress Pro 10/100Mbps Ethernet ports:
      • 2 builtin on the motherboard (eth2/eth3 in Linux, fxp0/fxp1 in FreeBSD)
      • 2 on an Intel EtherExpress Pro 100+ Dual-Port Server Adapter (eth0/eth1 in Linux, fxp2/fxp3 in FreeBSD)
      • 1 on a single-port Intel EtherExpress Pro/100B Adapter (eth4 in Linux, fxp4 in FreeBSD)
    • 40GB IBM 60GXP 7200RPM ATA/100 IDE hard drive.
    • Floppy drive.

  • 40 pc600 PC nodes (pc1-pc40), consisting of:
    • 600MHz Intel Pentium III "Coppermine" processors.
    • Asus P3B-F (6 PCI/1 ISA slot) motherboard (old reliable BX chipset).
    • 256MB PC100 ECC SDRAM.
    • 5 Intel EtherExpress Pro/100B 10/100Mbps Ethernet cards.
    • 1 D-Link DWL-AG530 802.11a/b/g wireless NIC with external antenna.
    • 13GB IBM 34GXP DPTA-371360 7200RPM IDE hard drive.
    • Floppy drive
    • Cheap video card (Jaton Riva 128ZX AGP w/4MB video RAM)
    • All in a nice but overweight rackmount case on rails: Antec IPC3480B, with 300W PS and extra fan.

  • 18 pc3000w wireless PC nodes (pcwf1-18), consisting of:
    • 3.0GHz Intel Pentium 4 processors.
    • 1GB DDR-400MHz (PC3200) SDRAM.
    • 2 Netgear WAG311 802.11a/b/g (Atheros) wifi cards.
    • 2 Ethernet ports.
    • 2 Western Digital WDXL80 120GB 7200RPM Serial ATA hard drives.
    • Floppy and CD-ROM drive.

  • 60 pc2400w wireless PC nodes (pcwf105-201), consisting of:
    • 2.4GHz Intel Core 2 Duo "Conroe" processors.
    • Based on Dell OptiPlex 745.
    • 1GB DDR-400MHz (PC3200) SDRAM.
    • 2 Netgear WAG311 802.11a/b/g (Atheros) wifi cards.
    • 2 Ethernet ports.
    • 2 Western Digital WDXL80 120GB 7200RPM Serial ATA hard drives.
    • DVD drive.

Servers

  • a users, file, and serial line server (users.emulab.net), consisting of:
    • Dell PowerEdge 2850 server
    • Dual 3.0 GHz Intel Xeon processors, 800 MHz FSB
    • 4 GB 400 MHz DDR2 RAM
    • Dell PowerVault MD1000 disk array with 7TB of disk space
  • a DB, web, DNS and operations server, consisting of:
    • Dell PowerEdge 2850 server
    • Dual 3.0 GHz Intel Xeon processors, 800 MHz FSB
    • 4 GB 400 MHz DDR2 RAM
  • a motley collection of serial line servers.

Switches and Routers

  • 7 Cisco 6500 series high-end switches. Five serve as the testbed backplane ("programmable patch panel").
    The other two, a 6506 and a 6509, provide "control" interfaces for the test nodes; the 6509 hosts an MSFC router card and functions as the core router for the testbed, regulating access to the testbed servers and the outside world.
  • 5 HP 5400zl switches connecting the d710 nodes. Three are part of the testbed backplane, one provides the control network, and the fifth is a "hub" switch interconnecting the other experimental switches via multiple 10Gb ports.
  • 1 Arista 7504 10Gb switch connecting the d820 nodes. It connects to the experiment network fabric via a 4x10Gb interconnect to the 5412zl "hub" switch.

Power Controllers

  • 10 APC MasterSwitch AP9210 8 port power controllers.
    (The AP9210 is discontinued; replaced in the product line by the AP9211.)
  • 12 APC MasterSwitch AP7960 24 port power controllers.
  • 7 BayTech RPC27 20-port remote power controllers.
  • 24 BayTech RPC14 1U 8-port remote power controllers.

Racks

  • 13 Wrightline "Tech I" racks: 44U, 34" deep, 2 are 24" wide with cable management; the rest are 19" wide. (Aug 2001: these appear to have been discontinued or renamed.)
  • 12 Dell server racks.

Layout

All PC nodes have at least four Ethernet ports connected to the testbed backplane. All of the over 2000 ports can be connected in arbitrary ways by setting up VLANs on the switches via remote configuration tools. The current PC and switch topology is shown here. One additional Ethernet port on each PC is connected to the core router, giving each PC a full-duplex 100Mbps or 1000Mbps connection to the servers. These control connections are used for tasks such as dumping data off of the nodes without interfering with the experimental interfaces; the only impact on the node is processor and disk use, and bandwidth on the PCI bus.
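To illustrate the "programmable patch panel" idea, placing two experimental node ports into an isolated broadcast domain amounts to switch configuration roughly like the following. This is a hedged sketch in Cisco IOS syntax: the VLAN number and interface names are hypothetical, and in practice Emulab's snmpit tool performs the equivalent steps automatically over SNMP rather than through the CLI.

```
! Create an experiment-private VLAN (the number is chosen by the testbed software)
vlan 257
 name expt-myproj-myexp

! Assign two node-facing ports to that VLAN; their traffic is then
! isolated from all other experiments sharing the same physical switch.
interface GigabitEthernet3/1
 switchport mode access
 switchport access vlan 257
interface GigabitEthernet3/2
 switchport mode access
 switchport access vlan 257
```

Tearing down an experiment reverses these steps, returning the ports to an unused state so they can be rewired into the next requested topology.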