Xen-based Emulab virtual nodes
Emulab vnodes based on Xen virtual machines are still under active development, but are now generally usable by all users. Since Xen vnode support is still evolving, expect these instructions to be subject to sudden and radical change!
The executive summary is that you specify a Xen vnode by setting a node in your topology to type pcvm (or pcvm3000, etc.) and then choosing the correct OSID for the vnode and host. Each vnode gets a minimum of 256MB of RAM and a 6GB virtual disk. These are arbitrary defaults that can be overridden to some degree; see below. Shared filesystems are available in the vnodes. Vnodes can be connected with shaped virtual links, though expect lower-fidelity emulation than with physical links. Each vnode has a control net connection, but with an unroutable IP address, so vnodes cannot communicate with the world outside Emulab.
Before going further, you should look through the description of Emulab's OpenVZ virtual nodes, since many (most) of the concepts and details are very similar and apply directly to Xen-based virtual nodes. Go ahead, I'll wait here till you come back ...
Here is an example NS file for a three node, two link topology (node0 <=> node2 <=> node1):
set ns [new Simulator]
source tb_compat.tcl

# Nodes
set node0 [$ns node]
set node1 [$ns node]
set node2 [$ns node]

tb-set-node-os $node0 XEN-STD
tb-set-node-os $node1 XEN-STD
tb-set-node-os $node2 XEN-STD

tb-set-hardware $node0 pcvm
tb-set-hardware $node1 pcvm
tb-set-hardware $node2 pcvm

tb-set-node-failure-action $node0 "nonfatal"
tb-set-node-failure-action $node1 "nonfatal"
tb-set-node-failure-action $node2 "nonfatal"

tb-set-node-memory-size $node0 512
tb-set-node-memory-size $node1 512
tb-set-node-memory-size $node2 512

# Links
set link0 [$ns duplex-link $node0 $node2 100000.0kb 0.0ms DropTail]
set link1 [$ns duplex-link $node1 $node2 100000.0kb 0.0ms DropTail]

$ns rtproto Static
$ns run
When selecting an OSID to use for your guest OS, the simplest choice is XEN-STD, which maps to the most recent version supported by Emulab (see the details section below). You may also use one of a select number of our standard images (the ones that run on bare metal). The requirement is that the kernel be compiled with Xen PV (paravirtualization) support. At this time, these standard images may be used:
- UBUNTU12-64-STD: Ubuntu 12.04 LTS, 64 bit. 3.2.46 kernel.
- UBUNTU11-64-STD: Ubuntu 11, 64 bit.
- FEDORA15-STD: Fedora 15.
- FBSD82-STD: FreeBSD 8.2.
- FBSD91-STD: FreeBSD 9.1.
All of these images are available on the Utah Emulab, but should also run on whatever Emulab you are using; ask your local administrator to import images you require from Utah's download directory.
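For example, to run one of the standard images above in a guest instead of the default, just name it in the usual tb-set-node-os call (the node name here follows the example NS file above):

```tcl
# Run a standard Ubuntu 12.04 image in a Xen guest
tb-set-node-os $node0 UBUNTU12-64-STD
tb-set-hardware $node0 pcvm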
As mentioned above, you can control the amount of memory each guest receives:
tb-set-node-memory-size $node0 512
The unit is always MB, and the resource mapper will tell you if you asked for too much or have exhausted the amount of memory available on the node. This is especially important on Emulab's shared nodes, where you are competing with other experiments for resources on the same physical host.
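The mapper does this check for you at swap-in time, but the arithmetic is easy to sketch yourself. The numbers below are hypothetical (dom0 itself also reserves some memory, so the real budget is smaller than the physical total):

```shell
# Rough feasibility check: do NVNODES guests of VMEM_MB each fit in HOST_MB?
# All numbers are made-up examples; the authoritative check is the resource mapper.
NVNODES=3
VMEM_MB=512
HOST_MB=2048
TOTAL=$((NVNODES * VMEM_MB))
if [ "$TOTAL" -le "$HOST_MB" ]; then
    echo "fits: ${TOTAL}MB of ${HOST_MB}MB"
else
    echo "too much: ${TOTAL}MB > ${HOST_MB}MB"
fi
```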
Controlling disk space
Each XEN guest is given enough disk space to hold the requested image. Most Emulab images are built within a 6GB partition, where about 1/2 of the disk space is used by the operating system files. If the remaining space is not enough for your needs, you can request additional disk space with the following NS statement:
$mynode add-attribute XEN_EXTRAFS 5
The unit is always GB, and at this time we enforce a hard limit of 10GB. As with Emulab physical nodes, the extra disk space will appear in the fourth slice of your guest's disk. You can turn this extra space into a usable file system by logging into your guest and doing:
mynode> sudo mkdir /dirname
mynode> sudo /usr/local/etc/emulab/mkextrafs.pl /dirname
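Once mkextrafs.pl finishes, the new file system should be mounted at the directory you gave it. You can verify from inside the guest (exact sizes will depend on how much space you requested):

```shell
mynode> df -h /dirname
mynode> mount | grep dirname
```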
Creating a custom image
Emulab allows you to create a Custom OS Image of a virtual node, much like you are able to create a custom image to run on a physical node. Your custom image is saved and loaded in much the same way as described for physical nodes in the tutorial.
The difference is in how you create the image after you have set up your virtual node the way you want it. Once you are ready, go to the Show Experiment page for your experiment and click on the node you want to save. One of the menu options is "Create a Disk Image". Click on that link and follow the instructions. If you are customizing one of the Emulab-provided images for the first time, you will need to complete the form and click submit. If you are updating your own image, you just need to click the confirmation button.
For all practical purposes, a disk image created from a Xen node may also be run on a bare metal node. You might need to edit the image descriptor in the web interface to add the required node types, but in general images are interchangeable. You may also check the whole-disk image option on the web form, which will create an image that includes your extra file system (see above).
By default, Xen VMs currently run a particular Ubuntu 12.04 Linux (3.2.46 kernel). The support is more general than that, but this is what we have now. Ideally, you would be able to load any Emulab OS image in a VM and have it Just Work, but that is harder than it might sound. The way you get Xen-based VMs as opposed to OpenVZ is via the OSID specified. So don't try to get all creative with the tb-set-node-os lines in the example--just do what we say.
Each Xen-based vnode will have a 6GB virtual disk (implemented as an LVM shadow volume on the physical host). Each VM is given a minimum of 256MB of memory, until memory is exhausted. The Emulab resource mapper enforces this based on the amount of physical memory the node has and the amount of memory you want for each VM.
Link shaping of virtual links is done using tc running in dom0, a variant of Emulab's end-node shaping. Note that you cannot dynamically modify link-shaping parameters.
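If you have access to dom0 (i.e., you are not on a shared node), you can inspect the shaping that tc has installed. The interface name below is just an example; use whichever dom0 interface carries your virtual link:

```shell
# Show the queueing disciplines installed on an interface
/sbin/tc qdisc show dev eth2
# Show per-class details (bandwidth, delay parameters)
/sbin/tc class show dev eth2
```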
Controlling Xen VMs
If you are using Emulab's shared nodes, you can ignore this section; users are not able to log into the physical host (dom0).
When dom0 has booted up, you can ssh directly to it and use either Emulab commands or Xen commands to control the domUs. These need to be run with root privileges (use sudo). Some helpful Emulab commands:
# Halt all vnodes
/usr/local/etc/emulab/bootvnodes -h
# Boot all vnodes
/usr/local/etc/emulab/bootvnodes -b
# Kill all vnodes (destroy virtual disks after halting)
/usr/local/etc/emulab/bootvnodes -k
Some helpful Xen commands:
# Start a domU
/usr/sbin/xm create <some-config-file>
# Halt a vm (clean shutdown)
/usr/sbin/xm shutdown <vm name>
# Kill a vm (unclean shutdown)
/usr/sbin/xm destroy <vm name>
# List all vm's
/usr/sbin/xm list
# Connect to the console of a vm
/usr/sbin/xm console <vm name>
Modifying Xen VM instances
Xen VMs use a configuration file to specify various attributes about their setup. Once dom0 has booted, you can modify these configuration files as you like. The configuration files live in /var/emulab/vms/<vm-name>/xm.conf. Typical things to modify are the kernel and the memory size. It is possible to modify the disk sizes as well, but you have to use LVM commands directly, and usually you have to create all the disks at once.
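As a sketch, an xm.conf for a guest contains entries along these lines. The field names are the standard xm configuration keys; the values shown here are illustrative, not what Emulab generates verbatim:

```
name = 'pcvm1-1'                       # VM name as shown by xm list
kernel = '/boot/vmlinuz-xenU'          # guest kernel to boot
memory = 512                           # memory size in MB
disk = ['phy:/dev/xen-vg/pcvm1-1,sda1,w']   # LVM-backed virtual disk
vif = ['mac=00:16:3e:00:00:01']        # virtual network interface
```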
Any changes will not persist across reboots of dom0 (or runs of the bootvnodes command above), so save your changes somewhere if you need them.
Windows on Xen
In theory, yes. In practice, not yet. Running Windows VMs will require machines with VT-x support, which we will soon have in the form of d710 machines.