The "d710" Nodes

Machines and Interfaces

160 d710 PC nodes (pc401-pc560), each consisting of the following (an allocation example appears after the list):

  • Dell r710 2U servers
  • One 2.4 GHz 64-bit Quad Core Xeon E5530 "Nehalem" processor, 5.86 GT/s bus speed, 8 MB L3 cache, VT (VT-x, EPT and VT-d) support
  • 12 GB 1066 MHz DDR3 RAM (6 x 2GB modules)
  • 4 Broadcom NetXtreme II BCM5709 rev C GbE NICs built into the motherboard
  • 2 Broadcom NetXtreme II BCM5709 rev C GbE NICs on a dual-port PCIe x4 expansion card (one of these serves as the control-net interface)
  • 1 250GB 7200 rpm Seagate SATA disk (drive 0)
  • 1 500GB 7200 rpm Western Digital SATA disk (drive 1)
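
To allocate one of these nodes, an Emulab NS file pins the node type with tb-set-hardware. The following is a minimal sketch; the node name (node0) is a placeholder, and the rest is the usual Emulab boilerplate:

  set ns [new Simulator]
  source tb_compat.tcl

  set node0 [$ns node]
  # Ask the resource mapper for a d710 specifically, not just any free PC.
  tb-set-hardware $node0 d710

  $ns rtproto Static
  $ns run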

Switch Configuration

All interfaces on these machines will be connected to gigabit ports on ProCurve 5412zl switches (specifically, J8702A modules). There will be three new experimental-net switches (procurve3-procurve5) for these nodes, each with approximately 240 ports. A fourth new switch (procurve1) will act as the center of a 'hub and spoke' topology for the experimental network: it will be connected to the new switches and to the existing gigabit switch (cisco8) at 24.4 Gbps each, and to the existing 100Mbps switches using 4Gbps to 8Gbps trunks.

Most new nodes will have 4 experimental-net interfaces, as our current PCs do, but some will have 5. In contrast to our existing nodes, which mostly connect all of their interfaces to a single experimental-net switch, the new nodes will have their interfaces 'striped' across the new switches: each new PC will have at least one interface on each of the three new switches, so that large LANs can be constructed entirely of new nodes without consuming any inter-switch bandwidth (see the sketch after the layout list). The tentative layout is:

  • pc401-pc440: 4 interfaces: 2 x procurve3, 1 x procurve4, 1 x procurve5
  • pc441-pc480: 4 interfaces: 1 x procurve3, 2 x procurve4, 1 x procurve5
  • pc481-pc520: 4 interfaces: 1 x procurve3, 1 x procurve4, 2 x procurve5
  • pc521-pc560: 5 interfaces: 1 x procurve3, 2 x procurve4, 2 x procurve5
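
Because of this striping, a large LAN built entirely of new nodes can be mapped with every member on a single experimental-net switch. A minimal NS sketch of such a LAN; the 60-node count and the 1000Mb rate are illustrative:

  set ns [new Simulator]
  source tb_compat.tcl

  # 60 d710 nodes in a single gigabit LAN; striping lets the mapper
  # place the whole LAN without crossing inter-switch trunks.
  for {set i 0} {$i < 60} {incr i} {
      set node($i) [$ns node]
      tb-set-hardware $node($i) d710
      append nodelist "$node($i) "
  }
  set biglan [$ns make-lan $nodelist 1000Mb 0ms]

  $ns rtproto Static
  $ns run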

A fifth new switch (procurve2) will serve as the control-network switch for the new nodes.

Images and Kernel Support

Currently, the following Emulab standard images work on the machines (an example of selecting one follows the list):

  • FreeBSD:
    • 32-bit: FBSD72-STD, FBSD73-STD, FBSD83-STD, FBSD91-STD
    • 64-bit: FBSD73-64-STD, FBSD83-64-STD, FBSD91-64-STD
  • Fedora:
    • 32-bit: FEDORA8-STD, FEDORA10-STD
    • 64-bit: FEDORA8-64-STD, FEDORA8-64-OVZ-STD (OpenVZ)
  • CentOS:
    • 32-bit: CENTOS55-STD
    • 64-bit: CENTOS55-64-STD
  • Ubuntu:
    • 32-bit: UBUNTU10-STD
    • 64-bit: UBUNTU12-64-STD
  • Windows:
    • 32-bit: WIN7-STD
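
To boot one of these images, name it with tb-set-node-os in the NS file. A minimal sketch using one image from the list above:

  set ns [new Simulator]
  source tb_compat.tcl

  set node0 [$ns node]
  tb-set-hardware $node0 d710
  # Any image listed above should work; FBSD91-64-STD is just one example.
  tb-set-node-os $node0 FBSD91-64-STD

  $ns rtproto Static
  $ns run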

Older FreeBSD and Fedora images will not work on these machines because they lack a NIC driver that supports the Broadcom 5709 chipset. FreeBSD 7 and later, and Linux kernels 2.6.22 and later (2.6.20 and up may also work), include drivers that support these NICs.

Caveats

  • Limited bandwidth to other node types. Until we update the software on our Cisco switches, we cannot "bond" links with the HP switches. Hence, the experimental interconnect is limited to "only" 10Gb via a single link from cisco8 to procurve1. Likewise, the control network has a single 1Gb link between cisco2 and procurve2.
  • No FreeBSD virtual node support. Virtual node support is provided through OpenVZ, allowing Linux-based virtual nodes (see the sketch after this list). There are no current plans to provide jail-based FreeBSD virtual nodes.
  • No control net firewall support. For an utterly obscure reason, these nodes cannot be control-net firewalls. This will be fixed in the future.
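
Linux virtual nodes are requested the usual Emulab way, by setting the hardware type to pcvm; on these machines they will be backed by OpenVZ. A minimal sketch, with an illustrative count of ten:

  set ns [new Simulator]
  source tb_compat.tcl

  # Ten Linux virtual nodes; on d710 hosts these are OpenVZ
  # containers rather than FreeBSD jails.
  for {set i 0} {$i < 10} {incr i} {
      set vnode($i) [$ns node]
      tb-set-hardware $vnode($i) pcvm
  }

  $ns rtproto Static
  $ns run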