
OpenEPC Tutorial - using the profile driven PhantomNet portal

Gain a working knowledge of OpenEPC in PhantomNet by following through this tutorial.

Foreword and Prerequisites

We now have two ways for experimenters to interact with PhantomNet. The Classic PhantomNet interface uses the original Emulab interface and makes use of NS files to specify experiments. The (new) PhantomNet interface makes use of profiles, developed for the Apt testbed, to specify experiments. Under the hood, both interfaces provide access to the same resources. Note, however, that over time new features will be developed primarily for the new profile driven interface.

This version of the tutorial shows how to interact with PhantomNet via the profile driven PhantomNet interface. A version of this tutorial using the classic interface is available here.

PhantomNet sits atop Emulab, and relies on the latter for much of its core testbed functionality. We reuse much of the Emulab terminology as well (such as "experiment", "swap-in" and "swap-out"). PhantomNet diverges as it gets into the particulars of mobile components and functionality, which is the primary focus of this document (as it relates to OpenEPC).

The new PhantomNet interface uses profiles to specify experiment configurations. Profiles make it very easy to share your research artifacts. A typical research workflow might include: (i) selecting an existing profile which is close to what you need, (ii) modifying that profile according to your own research objectives and (iii) creating a new profile based on these modifications for your own use, or to share with others. In the remainder of this tutorial we will make use of a previously created profile to explore OpenEPC functionality in PhantomNet. We will cover how to modify existing profiles to create your own in other tutorials. (If you can't wait, you can check out the Creating Profiles section of the Apt Manual.)

You might also want to familiarize yourself with OpenEPC by reading through the OpenEPC documentation.

Basic OpenEPC Setup

This guide will walk you through the detailed steps necessary to create a basic 3GPP LTE/EPC network.  Such a setup consists of user equipment (UE), an EnodeB, Serving Gateway (SGW), Packet Gateway (PGW), Home Subscriber Server (HSS), Mobility Management Entity (MME), and some other supporting services. The goal of this portion of the tutorial is to have the UE connect to the Internet via the PhantomNet-provisioned LTE/EPC network.





Figure 1. LTE/EPC experiment topology



Figure 1 shows the topology of our basic LTE/EPC experiment. There are five EPC nodes, whose names appear (in blue) beneath the rectangular boxes. EPC nodes are Emulab physical or virtual machines connected to each other using LANs. For example, the enb1 node and the sgw node are connected by the "net d" LAN. Each Emulab node can host one or more EPC logical instances, where an EPC logical instance is a process running on a physical or virtual Emulab machine. For example, the first Emulab node hosts only the UE (client1), while the third node (sgw) hosts the SGW and MME logical instances (and other mobility elements we will not cover in this tutorial).

Even a basic LTE/EPC experiment needs at least five nodes: UE, eNodeB, SGW, PGW, and EPC-enabler.


Create a new PhantomNet experiment by logging in to the PhantomNet web UI. If you do not have any experiments currently running, you should land on the instantiate page by default. (Otherwise you can click on "Actions" and select "Start Experiment" from the drop down menu.) Then:

1. Click on the "Change Profile" button. To find the profile we will use for this tutorial, type "OpenEPC" into the search box. Select "Basic-OpenEPC" from the resulting list by clicking on it. This will show a description of the selected profile.
2. Click on the "Select Profile" button, which will take you back to the "1. Select a Profile" page. Click "Next" to reach the "2. Parameterize" page.
3. For this tutorial we will stay with the default options, so simply select "Next" to reach the "3. Finalize" page. This page will show a diagram of the topology that will be created for your experiment.
4. On this page, select the "Project" in which the experiment should be created (in case you have access to more than one project). You may optionally also give your experiment a name at this point by typing into the "Name" field. Click "Finish".

PhantomNet will now go through the process of creating and setting up your experiment. This will take a couple of minutes, so please be patient. When your experiment goes beyond the "created" state, the web interface will show more information about the resources allocated for the experiment and the current state of each node. For example, the "Topology View" tab shows the topology of your experiment, and hovering over a node shows its current state.

Note that you have to wait for your experiment as a whole to be in the "Ready" state before you can proceed with the tutorial. (Note: For this profile, the status updates correctly in the "Actions->My Experiments" view, while the status in the experiment-specific view shows "State: booted (startup services are still running)". When the My Experiments page shows the status as "ready" you are good to go.)

Validating - Connectivity

To make sure the experiment was set up correctly, we need to check connectivity among the nodes using ping. The eNodeB, SGW, PGW, and EPC-enabler nodes should be able to ping each other. This requires you to log into (ssh into) each of the nodes.

The easiest way to ssh into the nodes in your experiment is via the "shell tabs" on the experiment page on the portal. (This seems to work best with the Chrome browser. See "Miscellaneous Notes" below for alternative ways to access your nodes.) On an experiment's page on the PhantomNet portal, click on the "List View" tab towards the bottom of the page. For each node, click on the corresponding icon in the "Actions" column and select "Shell". This will open another tab which will allow you to ssh into your node. (It might take a couple of seconds for the tab to appear and for the ssh session to be established.) Do this for each of the nodes in your experiment.

Once you have ssh'ed into each node, determine its fully qualified domain name (FQDN). E.g., on the enb1 node run:

hostname
Use the listed domain names for your experiment to verify connectivity to the other nodes in your experiment. E.g., running ping on the enb1 node (with <target-FQDN> standing in for the FQDN of another node):

ping -c 1 <target-FQDN>

Should produce output similar to this:

PING <target-FQDN> (<target-IP>) 56(84) bytes of data.
64 bytes from <target-FQDN> (<target-IP>): icmp_req=1 ttl=64 time=0.825 ms

--- <target-FQDN> ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.825/0.825/0.825/0.000 ms
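These per-node checks can be scripted from any one node. Below is a minimal sketch in shell; the node FQDNs and helper names are hypothetical placeholders, not part of the PhantomNet tooling -- substitute the FQDNs from your own experiment (found with "hostname" on each node, or in the experiment manifest).

```shell
#!/bin/sh
# Hypothetical FQDNs -- substitute those of your own experiment's nodes.
NODES="enb1.myexp.myproj.example sgw.myexp.myproj.example pgw.myexp.myproj.example"

# ping_ok: succeeds iff the ping output read from stdin reports 0% packet loss.
ping_ok() {
  grep -q ' 0% packet loss'
}

# check_all: ping each peer node once and report whether it is reachable.
check_all() {
  for n in $NODES; do
    if ping -c 1 -W 2 "$n" 2>/dev/null | ping_ok; then
      echo "$n reachable"
    else
      echo "$n UNREACHABLE"
    fi
  done
}

# Run check_all from each node in turn; every peer should report reachable.
```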

Note that the UE is not yet connected to the network or the Internet, as it has not attached to the EPC network. After the attach procedure (described below), the UE will connect to the eNodeB and be able to ping the other nodes and the Internet.

Validating - Setup

Making sure connectivity exists among the nodes is not enough; the EPC software also has to be running on the nodes for the whole thing to work. To make sure the correct EPC software is running on the corresponding node, you can log into the node and check for running OpenEPC wharf services, e.g.:

ps aux | grep -i wharf
Running the above command on each Emulab node should return something like what is shown in Table 1.

Emulab node    EPC logical instances
client1        mm
enb1           enodeb
sgw            hnbgw, mme, sgsn, sgw
pgw            pgw
epc            aaa, andsf, bf, cdf, cgf, hss, icscf, pcrf, pcscf, pcscf.pcc, scscf, squid rx client

Table 1. OpenEPC instances on each Emulab node

OpenEPC instances can be started and stopped by running the appropriate start, stop, or kill scripts in /opt/OpenEPC/bin on the node hosting the instance. (The script names are element-specific; list /opt/OpenEPC/bin on the node to see the exact names.)

To start an instance, run the element's start script:

sudo /opt/OpenEPC/bin/[element_start_script]

To gracefully stop an instance, run the element's stop script:

sudo /opt/OpenEPC/bin/[element_stop_script]

To (ungracefully) stop an instance, run the element's kill script:

sudo /opt/OpenEPC/bin/[element_kill_script]
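Since the exact script names vary per element, a quick way to discover them is to list the bin directory. A hedged sketch follows; the helper name is ours (not part of OpenEPC), and the directory is passed as an argument so the helper can be pointed anywhere:

```shell
#!/bin/sh
# list_element_scripts ELEMENT DIR
# Lists the management scripts for a given element (e.g. "pgw") found in DIR.
# On an OpenEPC node, DIR would be /opt/OpenEPC/bin.
list_element_scripts() {
  ls "$2" 2>/dev/null | grep -i "$1"
}

# Example (on the pgw node):
#   list_element_scripts pgw /opt/OpenEPC/bin
```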

Validating - Functionality

Now that all the EPC nodes are set up, we'll try to attach the UE to the network and then access a website on the Internet from the UE. Note that this profile uses an emulated Radio Access Network (RAN) between the UE and the eNodeB. You can invoke the UE's attachment procedure either with terminal commands or via a GUI. You'll need an X session up and running on your local machine if you want to use the GUI attachment method. (See below.)

Attaching the UE to the network using the command line

You can control the Mobility Manager (mm) OpenEPC component running on the UE from its console (see "OpenEPC logical instance (service) consoles" under Miscellaneous Notes below for how to attach to a component's console).

Once connected to the mm component's console, you can list the available networks (type "help" in the console to see the exact command).
The returned information should show that the UE is currently "layer-2 connected". It will show a "layer-3 connected" status when the UE is connected to the Internet. Let's try to attach ("layer-3 connect") the UE to the LTE/EPC network:

mm.connect_l3 LTE

To disconnect from the network, use the command below.

mm.disconnect_l3 LTE

You can see more options by typing "help".
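After a successful mm.connect_l3, the UE's kernel routing table should contain a default route via the EPC. A small hedged helper for checking this from a regular shell on client1 (the helper name is ours, not part of OpenEPC):

```shell
#!/bin/sh
# has_default_route: succeeds iff the "ip route" output read from stdin
# contains a default route (which the UE gains once layer-3 attached).
has_default_route() {
  grep -q '^default '
}

# Usage on client1, after attaching via the mm console:
#   ip route | has_default_route && echo "UE has a default route"
```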

Attaching the UE to the network using the GUI

You can also use a simple GUI to attach the UE to the network, as an alternative to the text-based console.

To attach the "client1" UE to the EPC, you'll first need to connect to the Emulab node hosting this UE logical instance. To do this, first ssh to the PhantomNet "users" control node and then ssh to your "client1" UE node:

yourhost> ssh -Y <username>@<users-node-FQDN>
... MOTD ...
you@users> ssh -Y client1.<your_experiment>.<your_project>

See the notes below for the reasons behind this indirect access. The "-Y" flag is necessary to forward along the connection to your X session. Another way to ensure X is forwarded along from the host to the UE is to add the following lines to your ~/.ssh/config file on the "users" node (just create the file if it isn't there):

ForwardAgent yes
ForwardX11 yes
ForwardX11Trusted yes
StrictHostKeyChecking no
Protocol 2

(Strictly speaking, only the "X11" entries are needed for X session forwarding, but the others are good to have as well.)

Once logged into "client1", start up the OpenEPC mobility manager GUI by running the script in "/opt/OpenEPC/mm_gui/". This will bring up the mobility manager window.
Click on the "LTE" button with the red icon in the left hand column. You should see status messages go by on the text console, and the red icon should turn green. The line at the top should now read "mm-state-attached".



  • If you click the LTE button again, it should disconnect client1 (and the icon should turn red once more). You can either close the mobility manager GUI client, or minimize it.  Closing the GUI will not disconnect the UE from the EPC.
  • Sometimes the GUI does not properly show the MM's state after clicking on the toggle button (e.g., it still shows red (detached)).  There is a refresh button in the lower left-hand corner of the GUI window that you can click to update the display.

Test Internet Access

Once the UE has successfully connected to the LTE network via either of the above approaches, you should be able to access the Internet from your UE node.

From your ssh shell on the "client1" node, try pinging a host on the Internet, fetching something with wget, browsing (after installing a browser), etc. traceroute will show that the connection hops through some internal links in the EPC network.

When the "client1" node is connected, you should be able to see the associated session from the OpenEPC web interface by navigating as follows:

"E-UTRAN" top tab -> "EnodeB" sub-tab -> "Sessions" left-hand menu item -> Click "Search" button on form (no params) -> click the only listed session's IMSI (should be "001011234567890").

Miscellaneous Notes


  • Accessing the nodes in your experiment

In addition to accessing the nodes in your experiment via the shell tabs as explained above, you can ssh to them from your local machine.

For the following nodes you can ssh into the node directly from the PhantomNet web UI: epc, sgw, enb1. Simply click on the "List View" tab in your experiment view. This will show a list of the nodes in your experiment, including a "SSH command" column. For these three nodes, clicking on the corresponding "SSH command" link will open up a terminal window and ssh into the node. (This assumes that you correctly set up your public/private key pair when you first created your PhantomNet account.) Alternatively, you can manually open a terminal window and copy and paste the corresponding "SSH command" to log into the node.

Because of the network setup between the OpenEPC nodes you cannot directly access the UE (node client1) or the PGW (node pgw) via the PhantomNet/Emulab control network. To reach these two nodes, first ssh to the PhantomNet "users" control node and then ssh to the client1/pgw nodes. In order to do this you will also need to find the fully qualified domain name (FQDN) for each of these nodes. You can do that as described above by ssh'ing into one of the other nodes, e.g., using the shell tabs or the "SSH command" information, and running "hostname" to determine the FQDN. Alternatively, this information is available from the "Manifest" for the experiment: click on the "Manifest" tab in the experiment view and search for lines like (assuming you named your experiment "myfirstexperiment"):

host name=""
host name=""

Use these FQDNs to ssh into the client1 and pgw nodes via the "users" control node.
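Alternatively, the two-hop login can be captured once in the ~/.ssh/config on your local machine using ssh's ProxyJump option. The host aliases and placeholder names below are illustrative, not part of PhantomNet:

```
# Placeholder names -- substitute your username, the users node's FQDN,
# and your experiment/project names.
Host phantomnet-users
    HostName <users-node-FQDN>
    User <your-username>

Host client1-ue
    HostName client1.<your_experiment>.<your_project>
    User <your-username>
    ProxyJump phantomnet-users
    ForwardX11 yes
```

With this in place, "ssh client1-ue" from your local machine performs both hops. (ProxyJump requires OpenSSH 7.3 or later; on older clients, a ProxyCommand with "ssh -W" achieves the same.)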





  • OpenEPC logical instance (service) consoles

You can have a look at the individual OpenEPC component consoles by going to the respective node hosting the component of interest (e.g., the MME) and typing "/opt/OpenEPC/bin/<component>". E.g., running this for the pgw component on the pgw node connects to the console/CLI of the PGW element.
To disconnect from a console/CLI session:

Ctrl-a d

Use the command shown earlier on this page (under "Validating - Setup") to see the list of active EPC services. These consoles are particularly useful for debugging.

OpenEPC console - mme


  • Useful console/CLI commands


In any console session type "help" to see the CLI options. Type "help [command]" to see more information about a particular command.

The CLI of most elements has a "bindings.print" type command which is useful for inspecting the current state of the network element.

E.g., typing such a command in the pgw console/CLI shows current GTP tunnel information. (After a successful UE connect/attach the command should show details of the GTP tunnel terminating on the PGW. A subsequent disconnect/detach should result in the tunnel information being removed.)

  • Rebooting OpenEPC nodes

The PhantomNet harness that configures and starts the appropriate OpenEPC services based on node role (see above) is tied into the end of the boot process. So, rebooting any node should ultimately trigger OpenEPC service startup. You can reboot individual nodes by clicking on the node in the "Topology View" tab, or by clicking on the "Actions" icon for the node in the "List View" tab. Note that the OpenEPC components are quite tolerant of service restarts and temporary outages (reconnecting when possible).


  • Indirect connectivity to mobile clients and the P-GW


You have to "hop" through the "users" PhantomNet infrastructure host to get to some of the OpenEPC nodes (all mobile client nodes and the "pgw"). This is because these nodes do not have a default route out to the Internet. However, once clients are connected to the OpenEPC instance (via an eNodeB gateway), they should be able to get out via the EPC egress proxy (the "epc" node).  The PGW will remain directly inaccessible, as required for it to function and route traffic between the rest of the EPC Core network and the Internet gateway.


  • OpenEPC configuration

Note that this functionality is only available when OpenEPC is instantiated on physical Emulab nodes.

You can view and modify much of the EPC configuration in OpenEPC by pointing a web browser at your "epc" node, which should be named something like: epc.<your_experiment>.<your_project>


(Use username: "admin" and password: "epc").

Each OpenEPC logical instance (service) also has its own XML configuration file, which is located under /opt/OpenEPC/etc.
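To skim a service's XML configuration from the shell, plain grep goes a long way. A hedged sketch follows; the helper name is ours, and the example file path is illustrative (use an actual file from /opt/OpenEPC/etc on your node):

```shell
#!/bin/sh
# xml_attr_values ATTR FILE
# Prints every value of the given XML attribute found in FILE.
xml_attr_values() {
  grep -o "$1=\"[^\"]*\"" "$2" | sed "s/^$1=\"//; s/\"\$//"
}

# Example (illustrative path -- substitute a real file from /opt/OpenEPC/etc):
#   xml_attr_values name /opt/OpenEPC/etc/<service>.xml
```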