Client "Self Configuration"



Emulab uses a largely "node driven" strategy for node configuration; i.e., it is a pull-style process with the nodes requesting information from Emulab Central and configuring themselves.

We use the same basic framework for configuring:

  • local cluster physical and virtual nodes
  • link emulator nodes
  • local wireless nodes
  • IXP and NetFPGA network processors
  • Stargate SBCs
  • remote Internet nodes
  • PlanetLab slivers

and for OSes including:

  • FreeBSD 4, 5, 6, 7, and 8
  • OpenBSD 3
  • Redhat 7, 9 and Fedora 4, 6, 8, 10
  • Ubuntu 7.0
  • Windows XP

The self-configuration strategy requires that the OSes run on nodes have a certain set of standard tools installed, as well as the Emulab client code that handles the self-configuration. The configuration code is mostly Perl and shell scripts, with a couple of C and C++ programs. The code resides mostly in the /etc/emulab and /usr/local/etc/emulab directories on the node, with a single hook to trigger everything (described later).

In the case of virtual nodes (or "subnodes" which are otherwise dependent on a "host"; e.g., IXPs), the configuration may be split between the physical (host) and virtual (sub) nodes. In this situation, the physical node boots and begins self-configuration via the scripts. Part of that self-config in this case is to create, initialize and boot any virtual nodes that it hosts. Typically, the physical host will perform only the initializations that the virtual hosts themselves do not have the privilege to do; e.g., initializing (virtual) network interfaces. Ideally, most virtual node configuration will then be done in the context of the vnodes themselves. This requires that the virtual nodes' filesystems contain a copy of the Emulab startup scripts.

Little attempt has been made to constrain the "execution environment" of the configuration scripts and programs; they largely execute as root and use the very wide "API" provided by the OS and its tools.

Node self-configuration happens on every reboot, not just the first reboot. This includes potentially resource-intensive operations such as per-node route calculation. While affording a great deal of flexibility for re-configuring nodes, it comes at a cost; the resource footprint of self-configuration is significant (but only at boot time).

In general, the impact of the self-configuration process on the fidelity of an experiment node can be summarized as:

  • artifacts (additional scripts and applications) in the filesystem,
  • a non-standard, potentially resource-intensive boot path,
  • non-standard accounts and mounts present, and services running.

Note that the last is a consequence of the Emulab model in general and is not specific to the node self-configuration process. However, there is some small dependence on the accounts and mounts.

The Process

The first step in the configuration process is for a node to identify the so-called "boss" node (Emulab Central) and (possibly) configure the network used to talk to it. How this is done depends on the type of node. Cluster and wireless physical nodes, which are connected directly to the Emulab control network, use DHCP to discover and configure the control network and then (somewhat arbitrarily) use the returned name server as the boss node. Remote physical nodes have the boss identity hardwired in (usually on a CD, flash dongle, or floppy) and use their default IP routing setup to reach it.

As mentioned, there is a set of scripts, each responsible for initializing some part of a node's configuration; e.g., accounts, NFS shared mount points, network interfaces, routing, etc. The scripts are invoked with one of four actions: boot, shutdown, reconfig or reset. The first three do pretty much what you would expect: set things up at boot time, tear things down at shutdown, and perform a dynamic reconfiguration while running. The last is a special action used when creating disk images; it is invoked to clean up any on-disk state created by the boot or reconfig actions.
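The four actions map naturally onto a dispatch skeleton. A minimal sketch follows; the function names and messages are illustrative placeholders, not the real Emulab code:

```shell
#!/bin/sh
# Sketch of the common rc.* action dispatch. The function bodies are
# placeholders; real scripts do the work described in the text above.
do_boot()     { echo "boot: setting up"; }
do_shutdown() { echo "shutdown: tearing down"; }
do_reconfig() { echo "reconfig: dynamic reconfiguration"; }
do_reset()    { echo "reset: cleaning on-disk state for imaging"; }

dispatch() {
    case "$1" in
        boot)     do_boot ;;
        shutdown) do_shutdown ;;
        reconfig) do_reconfig ;;
        reset)    do_reset ;;
        *) echo "usage: boot|shutdown|reconfig|reset" >&2; return 1 ;;
    esac
}
```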

While the same core set of scripts and libraries are used for initializing all node types, many of the scripts have large quantities of conditionalized code for various node types. In the OS dimension, there is much less runtime conditionalized code. There is a common set of scripts and libraries for all OSes and distinct sets of per-OS scripts and libraries.

In general, the scripts operate in one of four ways:

  1. Create local files containing per-node or per-experiment information for other scripts and programs to use and then exit. These data are generally little turds like the canonical name of the node, the creator of the node's experiment, keys for other services to use, or descriptions of the experiment topology.
  2. Perform the appropriate configuration actions themselves and then exit. Examples include adding local accounts or mounting remote filesystems.
  3. Create another OS-specific shell script in /var/emulab/boot, possibly invoke that script to perform the appropriate configuration, and then exit. These include the configuration of interfaces and routes.
  4. Create a command line or configuration file for an Emulab service, "detach" from the setup process (fork and setsid), and run the service as a child, staying alive until the service dies or is killed. Standard Emulab event agents, such as the program agent or traffic monitoring or generation agents, are invoked in this way.

Note that the exact intent of the "create another script and run that" (as opposed to "just do it") is lost in the mists of time, but I think we had envisioned a more efficient boot path, with nodes contacting Emulab Central only at initial boot up and then creating the "fast path" boot scripts to use for future reboots. However, this is not what we do now. Reboots and reconfigurations always re-contact Emulab Central.
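The fourth mode of operation (detach and run a service) can be sketched as below. setsid gives the agent its own session; the agent path here is a stand-in, and the real scripts stay alive until the agent exits:

```shell
#!/bin/sh
# Sketch of "mode 4": fork, start a new session, and run the agent as a
# child. The agent command is a stand-in, not the real program agent;
# setsid (util-linux) is assumed to be available.
run_agent() {
    agent="$1"; shift
    setsid "$agent" "$@" >/dev/null 2>&1 &   # child gets its own session
    echo "started $agent"
}
```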


Most of the configuration information is passed to the nodes through the Testbed Master Control Protocol via the programs tmcc (the client) and tmcd (the server). Note that we generally refer to the protocol as TMCC or TMCD rather than TMCP, because...well, just because that is what we do!

TMCC/TMCD/TMCP is a custom ASCII-based, total hack protocol. The API for TMCC is eclectic. For the most part it consists of single word requests about a particular type of information for a single node (e.g., "ifconfig", "routes", "accounts"). The returned information is in one or more easily-parsable lines of KEY=VALUE pairs. Typically, the target node of the command is implied by the IP address from which the request comes, though the target may also be explicitly specified.

Evolution of the API is largely "convenience driven." If there is something small we need quickly, we tend to add a command to TMCC in the most straightforward manner. TMCC is by no means the future of node-configuration protocols.

The tmcc client is just a mechanism for returning DB information; it does not perform any actions on its own. For example, you use tmcc to request information about which user accounts should be set up on a node, but it does not actually modify the password file. Invocations of tmcc are made from shell/Perl scripts which do the actual client-side customization. The client can communicate with the server via UDP, TCP, or SSL over TCP. Some commands can only be performed over certain transports; for example, account info can only be returned via SSL+TCP since a password hash is part of the returned info.
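A tmcc reply line of KEY=VALUE pairs can be picked apart with standard tools. A minimal sketch; the sample line is made up, and real values may be quoted, which this does not handle:

```shell
#!/bin/sh
# Extract the value of one KEY from a space-separated KEY=VALUE line.
# Quoted values with embedded spaces are not handled by this sketch.
get_val() {    # get_val KEY "line"
    echo "$2" | tr ' ' '\n' | sed -n "s/^$1=//p"
}

# Hypothetical sample reply (not actual tmcd output):
line='INTERFACE=eth0 INET=10.0.0.1 MASK=255.255.255.0'
```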

TMCD authentication

First, it is important to understand the threat model of Utah's Emulab: we assume no malicious users. Users who violate this trust will be banned. This lowers the bar for authentication from within Emulab networks. External threats are stopped by blocking the TMCD protocol at the Emulab firewall.

Internal authentication of the protocol is still needed, but only at the level required to prevent accidents (though in most cases we are much stronger).

When using SSL+TCP, the server (boss) is authenticated via the well-known Emulab CA. TMCD will return a server certificate that can be authenticated via the CA certificate embedded in all Emulab images. When using TCP without SSL, or when using UDP, there is no explicit server authentication. With the Emulab shared (by all experiments) control network, spoofing of boss is possible via a man-in-the-middle attack. While this makes it possible for someone outside an experiment to mis-configure a node, they cannot affect certain critical calls, in particular "accounts" and "mounts". As mentioned, the former will only return info via an SSL connection. The latter could be spoofed, but the NFS server only exports the allowed filesystems for an experiment, so attempts to mount unauthorized FSes from the node will fail.

Boss authenticates clients via their IP address. The unspoofability of this address is assured through the architecture of the Emulab control network. The control network is broken into three segments: one for boss, one for users/fs/ops, one for the nodes. ARP entries for nodes are hardwired into the Emulab gateway between these segments. Thus, even though Node A may claim to be Node B in its IP packet to boss, boss will always respond to the real Node B, and no handshakes will be completed or data returned to the wrong machine.

Note that we have prototyped a stronger node authentication mechanism that uses a TPM where available.

TMCD caching

As will become clear in the following sections, there are a lot of different TMCD calls. To promote scaling, tmcc has a caching mechanism that is implemented in a Perl wrapper library around the C program. Since most TMCD information is static and sent from boss to the nodes, caching is simple. At a very early stage of node configuration, a single fullconfig call is made to download the majority of node configuration information in one transfer. This information is used to populate files in /var/emulab/boot/tmcc; each file being named after the appropriate TMCD call. When subsequent tmcc calls are made via the wrapper library, as all rc.* scripts do, the cache is consulted before any call is made.

TMCD calls that cannot or are not cached are denoted as such in the API documentation. Mostly these are calls that report information from nodes to boss or are deprecated.
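The cache-consulting wrapper idea can be sketched as below: look for a file named after the TMCD call under the cache directory before going to the network. The paths mirror the text; tmcc_net is a stand-in for the real network path:

```shell
#!/bin/sh
# Sketch of the tmcc cache lookup: fullconfig populates one file per
# TMCD call under the cache dir; later calls consult the cache first.
CACHEDIR=${CACHEDIR:-/var/emulab/boot/tmcc}

tmcc_net() {    # stand-in for actually contacting tmcd
    echo "would contact tmcd for: $1"
}

tmcc_cached() {
    cmd="$1"
    if [ -r "$CACHEDIR/$cmd" ]; then
        cat "$CACHEDIR/$cmd"    # cache hit: no server contact
    else
        tmcc_net "$cmd"         # cache miss: ask the server
    fi
}
```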

Configuration Process

Here we look at the configuration scripts run for the three most common cases of local cluster node boots: FreeBSD, Linux and Windows XP (using Cygwin). There are other variations of booting, including for: widearea nodes, wireless nodes, Stargates, MFSes, etc. but those may be discussed some other day.

Configuration Process (FreeBSD)

Configuration Process (Linux)

Configuration Process (Windows/Cygwin)

The "rc" configuration scripts


This is not an actual script, but a function in the common setup code. The most important thing it does is call the TMCD fullconfig command to populate the tmcc cache.


Customize the user account setup on a machine. Uses the TMCD accounts command to obtain UNIX-style password and group file information as well as SSH and (deprecated) SFS public key information. User and group information is merged with the existing system files: Emulab-based accounts are added, removed, or modified as required, while accounts that existed before Emulab started up (e.g., those that were in the OS image) are left alone. If the designated home directory does not exist, it is created and populated; this is typically not the case on cluster nodes, where homedirs are mounted via NFS from the fileserver machine. The passed SSH key info is used to populate the user's .ssh/authorized_keys file.

On Windows, this script creates both a native Windows user account and a Cygwin account. The SSH keys are strictly for the Cygwin environment.

Currently, execution of this script is mandatory as many if not most Emulab operations execute as the swapper of the experiment to which a node belongs (e.g., the program agent). Removing creation of user accounts is doable, but will require some work and a fundamental rethink of what it means to interact with an experiment!


A shell among Perls. A simple shell script worthy of /usr/local/etc/rc.d that starts the Emulab "canary" daemon. This daemon monitors resource usage on a node and reports when it detects dangerous (overload) conditions ("Canary in a coal mine", get it?)

This script is optional.


Handle early post disk-imaging OS-specific actions. In a Windows/Cygwin environment, this script does a lot of magic, mostly glue functions to make sure the OS-independent scripts work correctly. This includes ensuring that network interfaces are correctly identified and in a known state, making sure that tmcc is working correctly, creating an /etc/resolv.conf file, and making sure that sshd is not running. It also fires off the EmulabShutdown service to ensure that node shutdown actions are performed.

This script is crucial for the correct setup of the Emulab environment on Windows.


Configure link shaping characteristics on both dedicated "delay" nodes and on topology nodes which are doing "end-node" shaping.

This script was late to the party and is just a wrapper for an OS-dependent script, delaysetup, that does the actual work. This is a historical artifact: shaping was originally only supported on FreeBSD-based delay nodes, so there was no need for a "generic" script. We later added end-node shaping of FreeBSD and Linux nodes. (See the Wiki for more information on end-node shaping or for mind-numbing detail on the shaping implementation.)

As noted, this script calls delaysetup and then rc.delayagent on end-node shaping nodes.

This script is mandatory but only does anything on delay nodes or end-node shaping nodes. Making this script optional would mean that a user would have to supply their own shaping mechanism. For the most part this would be fine, but linktest and the delay agent (for dynamic modification of link attributes) would have to be modified to work in such an environment (or disabled entirely).


Configure link shaping characteristics on both dedicated "delay" nodes and on topology nodes which are doing "end-node" shaping. This script uses the TMCD delay or linkdelay command to obtain shaping information, depending on whether the node is a delay node or just doing end-node shaping.

Briefly, delay nodes are user-visible (though not in the topology description) nodes which sit on links between topology nodes to "shape" those links. Two interfaces on a delay node are bridged together, with filtering in-between, to provide emulation of network characteristics. Currently delay nodes run FreeBSD (4.10, 6.x or 7.x) and use DummyNet and IPFW to provide link emulation.

As a delay node is dedicated to shaping traffic, the delaysetup script configures the running kernel to provide the best possible service for this purpose. This includes operating the network interfaces in polling mode (as opposed to interrupt driven) and running the kernel at 10,000 HZ (100us clock resolution) to ensure that the clock is running with sufficient granularity to provide accurate emulation. The script then uses the information returned from the delay call to configure bridges, IPFW and DummyNet as appropriate.

End-node shaping does link emulation on the experiment nodes themselves. For this case, we have different versions of delaysetup for FreeBSD and Linux. FreeBSD nodes still use DummyNet and IPFW. Linux nodes use tc and iptables. Both use the information returned by the linkdelay call to configure filtering as appropriate.
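As a rough illustration of what delaysetup arranges on FreeBSD, the DummyNet configuration boils down to IPFW rules of approximately this shape: send matching traffic through a pipe, then give the pipe its bandwidth, delay, and loss parameters. The pipe number, interface name, and values below are illustrative, not the exact Emulab rules; we only print the commands rather than run them (they need root and FreeBSD):

```shell
#!/bin/sh
# Emit the kind of ipfw/DummyNet commands used to shape one direction
# of a link. Rule layout and values are illustrative only.
shape_cmds() {    # shape_cmds pipeno bw delay plr iface
    echo "ipfw add pipe $1 ip from any to any out xmit $5"
    echo "ipfw pipe $1 config bw $2 delay $3 plr $4"
}
```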


An OS-dependent script for firing up the Emulab delay-agent event agent. This script marshals the necessary command line parameters:

  • the event server and keyfile,
  • the experiment identity (project and experiment name),
  • the vnode identity (if any),
  • the linkmap generated by delaysetup

and starts up that agent with those arguments.

This script is only run on nodes involved in end-node shaping.


Configure nodes for use in a firewalled experiment. Uses the TMCD firewallinfo command to obtain information for each node in an experiment. The information is different depending on whether the node is the firewall itself or is behind the firewall. In general, only the firewall node itself requires configuration. Conceptually the firewall node has two interfaces, the "outside" one connecting to the real node control network, and the "inside" one connecting to all the nodes (in the current implementation, these two interfaces are actually switch VLANs carried over the single control net interface).

rc.firewall does very little work itself, instead calling out to OS-specific functions to do the heavy lifting. Currently, only FreeBSD 4.10 (yes, that is 4.10) nodes can be firewalls and only that configuration code exists.

Execution of this script is optional.



Handle early post disk-imaging OS specific actions. Currently, the only function is to do a first-time boot discovery of a suitable partition for swapping (backing store) and enable it. Any discovered device is recorded in /etc/fstab so that no further discovery is necessary during future boots.

This script could be optional.


An obsolete script to start up a hardware "health monitoring" daemon on FreeBSD or Linux nodes. This daemon would monitor hardware voltages and temperatures using the healthd program. healthd only ran on older motherboard chipsets and hence was only used on a subset of Emulab nodes.

This script is no longer installed in images.


Creates the /etc/hosts file reflecting all the hosts in the experimental topology.

The script first tries to use information in the topology description file downloaded by the TMCD topomap command. If the topomap does not exist, it uses the TMCD hostnames command to download the information.

If no topology file is present and hostnames returns no information, the existing hosts file is left alone.

If information does exist, the current hosts file is overwritten with information about all other hosts in the experiment. Every host with an IP address will be represented.

NOTE: the hostnames written to the file are the user selected "virtual" names and are not fully qualified. Fully qualified virtual names refer to a node's control network interface.

NOTE: not every host in the generated hosts file will be reachable. If routing is not enabled or a node is otherwise disconnected from the topology, it will not be reachable ("ping-able").

This script is always run but could easily be optional.
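The transformation this script performs can be sketched as follows, assuming a simplified "name ip" input rather than the real topomap format (which differs):

```shell
#!/bin/sh
# Turn "vname ip" pairs into /etc/hosts lines. The real script derives
# these pairs from the topomap (or the hostnames call); the input
# format here is a simplification for illustration.
mkhosts() {
    while read name ip; do
        printf '%s\t%s\n' "$ip" "$name"
    done
}
```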


Uses the TMCD ifconfig command to return link and IP configuration information for wired Ethernet network interfaces.

For physical interfaces, it configures the speed and duplex of the link. Note that this is independent of the user-specified link shaping characteristics which are implemented in software, and are configured via rc.delays. The speed of the link is set here to the closest hardware supported value that is greater than what is needed for the shaped value of the link (or the aggregation of multiplexed virtual links that might be using this link).

For virtual interface devices the script may need to configure encapsulation characteristics such as a VLAN tag or association with a physical interface.

For both types, it configures the IP address and mask and runs the generated routing script to load routes related to each interface. Note that not all interfaces get IP information; this is not a user-controllable feature (see below): the only interfaces left without IP configuration are those used as "carriers" for multiplexed links or acting as bridges on delay nodes.

The actual interface configuration commands are generated by calls to an OS specific library since the mechanisms for this vary widely.

Once the interfaces are configured, the rc.route generated script is used to enable any static IP routes required for each interface.

This script is mandatory in the sense that, if you have defined a link in your NS file, Emulab will assign it end-point IP addresses and this script will run to configure them. This is easily fixed however. The consequences of not having IP addresses assigned are minimal: 1) linktest won't work, 2) routing won't be set up, 3) no /etc/hosts file will be generated, and 4) traffic generators cannot be run. All but the first would be expected. It would still be desirable to have some form of linktest that can check connectivity, bandwidth, and loss without IP.


Uses the TMCD ipodinfo command to return configuration information for the Emulab-specific ICMP Ping of Death (IPoD) kernel feature. IPoD hooks an unused ICMP packet type to tell a machine to reboot. It is useful in the not uncommon situation where an operating system is still responding and running interrupt-level code (the bottom half) but not processing user-level or non-interrupt kernel code (top half). The Emulab boss will send an IPoD when rebooting a node, after it has tried "ssh reboot" and before it power cycles it.

IPoD has been implemented for Linux and FreeBSD and is used in most Emulab images as well as the MFSes. As used in Emulab, IPoD configures the kernel to respond only to packets from a particular host (boss) and requires that a one-time secret key be presented. Boot time configuration enables IPoD and sets the host and key that are required to force a reboot.

This script is optional; if it does not exist, IPoD is not enabled. While most Emulab configuration scripts are written in Perl, this one is typically written as a shell script so that it can be used in minimal MFS environments that might not have Perl installed.


Fetches experiment related keys.

Download experiment-related keys used to authenticate a node to Emulab infrastructure. Uses the TMCD keyhash and eventkey commands to download keys and write them into files in /var/emulab/boot.

The "keyhash" is a per-experiment secret key used to authenticate with the Emulab web server when downloading content. This mechanism is currently used by nodes for downloading tarballs and RPMs. (topo files?) The key is passed as part of the URL to the web server where it is verified. The key is generated on boss when an experiment is created.

The "eventkey" is a per-experiment secret key used to authenticate with the Emulab event system. This key is used in the generation and checking of the HMAC on every event. The key is generated on boss when an experiment is created.

This script could be optional if the keys were not needed; i.e., not using the tarball or RPM mechanism and using unauthenticated (or no) events.


An OS-specific script to handle first-time boot actions related to kernel naming.

On FreeBSD systems before 5.0, it ensures that /kernel refers to the correct kernel image so that "ps" and friends work. This is no longer needed.

On Linux systems that used LILO as their boot loader, it ensures that the installed LILO boot info is in sync with /etc/lilo.conf, running the lilo command if they are not. This is needed when a different kernel or kernel command line is to be used.

All functions of this script on all OSes are obsolete and this script should be removed. If a future use is discovered, it should be put in rc.osname instead.


Runs the Emulab wireless link control agent. Uses the interface information returned by the TMCD ifconfig command to determine if there are any wireless interfaces. If so, it invokes link-agent to manage them.

NOTE: this script does not detach itself before running the link agent. Whether this is by design or accident is not clear.

This script could be optional.


Runs the Emulab linktest agent. Uses the already fetched project, experiment, swapper and event keyfile information to execute linktest on the node.

NOTE: this script does not detach itself before running the linktest agent. Whether this is by design or accident is not clear.

NOTE: this does not actually test any links, just starts the agent. Link testing is triggered by events.

This script is always run, but could be optional.


Handle early post disk-imaging OS specific actions. Currently, the only function is to do a first-time boot discovery of a suitable partition for swapping (backing store) and enable it. Any discovered device is recorded in /etc/fstab so that no further discovery is necessary during future boots.

This script could be optional.


Perform "site localization" functions. Uses the TMCD localize command to return information necessary to customize a node to the local Emulab. Currently the only information returned is the root pubkey for the boss node, which this script installs in root's authorized_keys file.


Perform miscellaneous OS-independent actions.

Currently all this script does is use the TMCD nodeid and creator commands to create files in /var/emulab/boot with those names. This is a one-off form of caching that pre-dates tmcc caching.

This script is required right now, but uses of the files it creates should be eliminated and this file removed.


Returns information necessary for logging the console of mote nodes. Uses the TMCD motelog command to return stuff.

That's all I got, sorry...


Mount remote filesystems. Uses the TMCD mounts command to obtain UNIX-style mount information. It handles both NFS and SFS style mounts, though only NFS is currently used. As with accounts, this script keeps track (via a BDB file in /var/emulab/db) of what it was previously told to mount, so that it can add, remove or modify members of the current set of mounts. The script creates local mount points as necessary. This script is not run on remote or PlanetLab nodes, nor in BSD jail or OpenVZ vnodes.

For Windows, this script just creates Cygwin mount points for SMB mounts. The actual network mounts happen as part of the regular Windows startup (I think); i.e., it mounts everything that we export via Samba.

Currently, execution of this script is mandatory as the existence of user home directories and the shared /proj space is assumed in a number of places. In the future, we plan to reduce the dependencies on these directories; at least removing the assumption that they are shared among nodes. The future version of this script might populate local versions of these directories as necessary, possibly handling begin/end time synchronization with the "users" node.


Starts the Emulab program agent running on a node. Uses the TMCD programs command to return a list of program instances to run, and userenv to get a list of user-supplied environment variables. The command lines and environment variables are placed in the /var/emulab/boot/progagents file. The script detaches itself and then runs a single instance of the program agent which manages all the program activities described in the configuration file (using the supplied environment).

NOTE: the program agent is run even if there are no pre-scripted (NS file) program agent activities. In this case, the program agent runs with the UID of the experiment swapper. Why is this? Can we dynamically cons up a program invocation?


Enables routing on the experiment topology. Uses the TMCD routing command to determine the type of routing to use, and then "makes it so" by calculating routes, creating /var/emulab/boot/rc.route containing those and other pre-configured routes, and starting a routing daemon as necessary. See the description of routing for details on the required actions.

Currently, if dynamic routing is desired, the script creates a configuration for OSPF in the gated program, and starts it up. Since gated is no longer maintained (or even available), this option currently only works on older OSes. We need to move to quagga or zebra.

For static route calculation, the script uses our implementation of a distributed Dijkstra shortest-path algorithm, with each experiment node calculating its own routes.

This script must be run as root, and works for FreeBSD, Linux, and Windows/Cygwin (modulo the whole gated fiasco).

Note that this script runs before the corresponding interfaces are configured. Thus for static and manual routes, this script generates the /var/emulab/boot/rc.route script and uses it only to enable IP forwarding and to start up any routing daemon. The Emulab interface configuration script rc.ifconfig will later call this generated script to enable routes per-interface.

This script is optional and only runs if the user specifies that they want routing in their NS file.
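The generated /var/emulab/boot/rc.route script then amounts to enabling forwarding plus a list of static routes. A FreeBSD-flavored sketch; the addresses are made up and the real generated script differs in detail, so we only print the commands rather than run them (they need root and FreeBSD):

```shell
#!/bin/sh
# Illustrative shape of a generated static-route script: turn on IP
# forwarding, then add one route per destination subnet.
emit_route_script() {
    echo "sysctl net.inet.ip.forwarding=1"
    echo "route add -net 10.1.2.0 -netmask 255.255.255.0 10.1.1.3"
}
```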


Downloads and installs user-specified RedHat RPM packages. Uses the TMCD rpms command to get a list of RPMs that should be downloaded and installed. This script invokes the Emulab install-rpm utility to handle the actual downloading and installation.

install-rpm keeps a local database (/var/db/testbed.rpms) of all RPMs installed along with their last-modification timestamp and an MD5 hash of the contents. The timestamp and hash are checked to ensure that the same RPM is not installed more than once unless it has changed.

For local nodes (except Windows nodes), RPMs are fetched across NFS. Otherwise they are fetched using wget from the Emulab web server. The node authenticates itself to boss with the private key downloaded by rc.keys.

NOTE: rc.rpms, along with rc.tarfiles, are notable exceptions to the general case where configuration scripts perform their actions on every reboot. RPMs are only installed on first boot or whenever the source RPM changes.

This script could be optional. No critical information is ever installed via RPMs.
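The "install once unless changed" check can be sketched like this, using a flat file in place of the real database; GNU stat and md5sum are assumed, and the db format is invented for illustration:

```shell
#!/bin/sh
# Return success (install needed) when the file's mtime+MD5 signature
# differs from the recorded one, and record the new signature.
# Flat-file db and GNU stat/md5sum are assumptions of this sketch.
needs_install() {    # needs_install file dbfile
    file="$1"; db="$2"
    sig="$(stat -c %Y "$file") $(md5sum "$file" | cut -d' ' -f1)"
    old=$(grep "^$file " "$db" 2>/dev/null | cut -d' ' -f2-)
    [ "$sig" != "$old" ] || return 1     # unchanged: skip install
    { grep -v "^$file " "$db" 2>/dev/null; echo "$file $sig"; } > "$db.new"
    mv "$db.new" "$db"
}
```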


Runs one or more instances of the NSE simulator on a node. Uses the TMCD nseconfigs command to return information about the instances.

NOTE: NSE support is deprecated.

This script is not needed.


Runs the Emulab slothd idle detection sensor. Uses the TMCD sdparams command to get site-specific configuration information and starts up the daemon. slothd measures indications of activity (load average, TTY and network activity) and periodically reports an aggregated summary to boss (via UDP port 8509). Boss uses these indications to determine if experiments are idle and can be swapped out. It is only run on physical experiment nodes and not on virtual or remote nodes.

While currently always run, slothd is not a required feature.


Uses the TMCD startupcmd command to get a command line to execute via the runstartup script.

This script is always run, but only does something if a startup command is defined.


Uses the TMCD syncserver command to determine a node's role in the experiment-wide emulab-sync configuration. It writes the fully qualified (i.e., control-network) name of the experiment's designated emulab-sync server node into the file /var/emulab/boot/syncserver. If the current node is the sync-server, then it starts up emulab-syncd.

This script could be optional if the Emulab sync-server mechanism is not needed. Currently however, there is no option to say this.


Downloads and installs user-specified "tarballs". Uses the TMCD tarballs command to get a list of tarballs that should be downloaded and installed. This script invokes the Emulab install-tarfile utility to handle the actual downloading and installation.

install-tarfile keeps a local database (/var/db/testbed.tarfiles) of all tarballs installed along with their last-modification timestamp and an MD5 hash of the contents. The timestamp and hash are checked to ensure that the same tarball is not installed more than once unless it has changed.

For local nodes (except Windows nodes), tarballs are fetched across NFS. Otherwise they are fetched using wget from the Emulab web server. The node authenticates itself to boss with the private key downloaded by rc.keys.

NOTE: rc.tarfiles, along with rc.rpms, are notable exceptions to the general case where configuration scripts perform their actions on every reboot. Tarballs are only installed on first boot or whenever the source tarball changes.

This script could be optional. No critical information is ever installed via tarballs.
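The hash-based duplicate check can be sketched as follows. The one-"path md5"-pair-per-line database format is an assumption (the real file also records the modification timestamp):

```shell
#!/bin/sh
# Sketch of install-tarfile's duplicate check. The "path md5" line format
# is hypothetical; the real database also stores timestamps.
DB=${DB:-/var/db/testbed.tarfiles}

hash_of() {
    md5sum "$1" | cut -d' ' -f1      # FreeBSD would use: md5 -q "$1"
}

already_installed() {                # $1 = tarball path
    grep -q "^$1 $(hash_of "$1")\$" "$DB" 2>/dev/null
}

record_install() {
    echo "$1 $(hash_of "$1")" >> "$DB"
}
```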


Uses the TMCD tiptunnels command to get the magic necessary to create a portal back to the time when serial consoles ruled the earth. Okay, it actually creates an endpoint that can be used for connecting to the serial console for one or more nodes. Only used for "motes" in the Utah Emulab, and motes went the way of...well, serial consoles.

NOTE: "TIP" stands for "Old Fart's Way of Talking About an RS232 Serial Line." Yeah, I know. Just roll with it.

This script is deprecated for now.


Downloads information about an experiment's network topology. Three types of information are fetched, each written to its own local file:

  • Basic connectivity information, used to generate routes and the hosts file; goes into /var/emulab/boot/topomap.

  • Logical characteristics of nodes and links, used by linktest; goes into /var/emulab/boot/ltmap.

  • Physical characteristics of nodes and links, used by linktest; goes into /var/emulab/boot/ltpmap.

By default, this script "downloads" the files by copying compressed versions of them across NFS. If NFS is not being used or the copy fails, it uses the TMCD topomap, ltmap, and ltpmap commands to obtain the compressed data.

This script is re-run before every invocation of linktest since users may dynamically change the characteristics of their links after swapin.

This script is run as root, except on Windows where it is run as the user. This script is not run in MFSes or on remote nodes where there is no topology to speak of.

This script could easily be made optional. However, without it you would not be able to: 1) have all nodes compute their own routes at boot time, 2) have experiment node entries in /etc/hosts, and 3) run linktest.
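The NFS-then-TMCD fallback described above follows a simple pattern, sketched generically here (the paths and the "tmcc topomap" spelling in the usage comment are illustrative):

```shell
#!/bin/sh
# Generic sketch of rc.topomap's fetch strategy: try an NFS copy first,
# fall back to running a TMCD command and capturing its output.
fetch_map() {          # $1 = NFS source, $2 = destination, $3.. = fallback cmd
    src=$1; dst=$2; shift 2
    cp "$src" "$dst" 2>/dev/null && return 0
    "$@" > "$dst"
}

# e.g.: fetch_map /nfs/somewhere/topomap.gz /var/emulab/boot/topomap.gz tmcc topomap
```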


Downloads TPM-attested identity information for a node. This is an experimental feature that is not in production yet. Uses the TMCD tpmblob and tpmpubkey commands to obtain node-specific TPM information used for TLS authentication of a node for TMCD. The values from both calls are placed in files on the local node where they will be used in subsequent TMCD calls.

This script is optional. As discussed above, there are other ways to establish node identity for TMCD.


Configures link tracing on a node. Uses the TMCD traceinfo command to determine which links should be monitored and in what way. This information is used to construct the correct command line parameters for one or more instances of the tracing agent. The script detaches itself, starts the tracer instances in the background, and cleans them up when they die or when an Emulab shutdown occurs.

The actual tracing agent is a local tcpdump-mutation known as pcapper which uses libpcap to capture or summarize traffic on a link and makes the resulting information available via a TCP port. Moreover, it can be controlled via the Emulab event system. No doubt, with the correct parameters, it will get your laundry sparkly-clean as well!

This script is only run if tracing is enabled on some link on the node.


Starts up one or more traffic generation agents. Uses the TMCD trafgens command to determine which traffic generators to start on this node and how. This information is used to construct the correct command line parameters for one or more instances of the traffic agent. The script detaches itself, starts the traffic generator instances in the background, and cleans them up when they die or when an Emulab shutdown occurs.

The traffic generator is a wrapped version of the tg traffic generator. The wrapping allows it to be controlled via the Emulab event system.

NOTE: tg can support traffic streams other than constant bit rate (CBR), but Emulab's current mechanism supports only CBR traffic.

NOTE: we used to support traffic generation out of NSE, but that support has been deprecated.

This script is only run if a traffic generator is located on this node.


Sets up IP tunnels between Emulab nodes. Uses the TMCD tunnels command to get information about tunnel endpoints that should be configured on this node. We currently support the setup of two types of tunnels, GRE and "vtun"-based, on Linux and possibly FreeBSD. Someone who knows this better should fix this documentation.

This script is optional.
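On Linux, the GRE flavor boils down to iproute2 commands of roughly this shape. All names and addresses below are made up (the real values come from the tmcc tunnels reply), and the commands require root:

```shell
# Hypothetical GRE endpoint of the sort rc.tunnels configures on Linux.
# Interface name, outer addresses, and inner subnet are all made up.
ip tunnel add gre1 mode gre local 192.0.2.1 remote 192.0.2.2 ttl 64
ip link set gre1 up
ip addr add 10.0.0.1/24 dev gre1

# Teardown at shutdown:
# ip tunnel del gre1
```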