Customizing OpenEPC configuration

Make OpenEPC dance for you!

Introduction

PhantomNet provides some canned OpenEPC configurations, with minor configuration options accessible at the NS input file level.  In addition to this, you can configure OpenEPC functions directly using your own XML configuration files.  PhantomNet provides a handful of utility scripts (described below) to:

  • Validate and install your custom OpenEPC XML configuration files.
  • Start an OpenEPC "wharf" instance, targeting your custom configuration.
  • Stop/kill a custom OpenEPC "wharf" instance.
  • Attach to a running OpenEPC "wharf" instance's console.

You might wonder: Why the fuss?  Why can't you just start wharf yourself using a custom config directly?  The indirection is necessary due to the proprietary nature of the OpenEPC software, and our license agreement with Fraunhofer.  Since most wharf components need root access to function, PhantomNet must vet and manipulate the running instances on your behalf.  This is part of our due diligence to comply with the OpenEPC licensing terms.

What is all this about "wharf"?

Wharf is the name given to the binary EPC environment provided by OpenEPC.  It is also the name of the top-level wrapper binary that assembles the various modules together to form an EPC function.  This binary provides core services such as memory management, IPC, process tracking, etc.  Wharf loads the set of binary modules specified in the XML configuration file, passing each one its individual configuration fragment.

XML Configuration Files

You can compose pretty much any OpenEPC wharf configuration you want and use it with PhantomNet.  Only a handful of modules and configuration directives either can't be used or have restricted values (see "Configuration Restrictions" below).  A sample XML file for the MME function under OpenEPC can be found here:

Note the set of modules assembled and configured in this configuration file:

  • core parameters - These are parameters that apply to the top-level wharf binary.
  • console.so - Virtually all wharf configurations will include the console.  This module provides the console user interface (and other connection points).
  • cdp.so - Most wharf assemblies also contain this module, which provides Diameter protocol services (auth).
  • cdp_avp.so - Auxiliary module for encoding/decoding Diameter AVPs (present if cdp.so is present).
  • mysql.so - Usually present; provides MySQL client services.
  • Client_S6ad.so - S6a/S6d signalling module
  • addressing.so - Generic multi-point addressing module (e.g., to support multiple S-GW functions).
  • sctp.so - SCTP protocol wrapper.  Present whenever signalling protocols are used.
  • s1ap.so - S1AP signalling protocol module.
  • nas.so - Non-Access Stratum (NAS) functionality module.
  • mme.so - Core MME functionality.
  • gtp.so - GTP tunnel support.
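
Taken together, a custom wharf configuration is one monolithic XML file: top-level core parameters, plus one entry per module with that module's configuration fragment written inline.  The sketch below is illustrative only; the element and attribute names are assumptions, so copy the real structure from the sample MME file and the OpenEPC documentation:

```xml
<!-- Illustrative sketch only: element/attribute names here are assumptions;
     take the real schema from the OpenEPC sample configuration files. -->
<WharfConfiguration>

    <!-- "core parameters" for the top-level wharf binary go here -->

    <!-- one entry per module; its config fragment appears inline -->
    <Module binaryFile="modules/console.so">
        <!-- console configuration fragment -->
    </Module>

    <Module binaryFile="modules/mme.so">
        <!-- core MME configuration fragment -->
    </Module>

</WharfConfiguration>
```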

We provide an example scenario involving custom configuration files here (XXX: link in).  The OpenEPC documentation can give you an idea of what components are available, and what you can do with them.  That documentation also contains additional configuration examples.

The utility scripts

This section describes the scripts that perform each of the utility functions mentioned in the Introduction.  All of these scripts can be found in /opt/OpenEPC/bin on any node running an OpenEPC-capable image (e.g., UBUNTU12-64-BINOEPC).  Most of these scripts must be run via "sudo".

Script Name           Description                                                                     Use Sudo?
--------------------  ------------------------------------------------------------------------------  ---------
check_oepc_config.pl  Validate and optionally install a custom configuration file.                    Yes
run_wharf.pl          Start up OpenEPC against an installed/validated custom config file.             Yes
kill_wharf.pl         Kill (optionally with prejudice) a running custom OpenEPC service.              Yes
attach_wharf.pl       Attach to the console of a custom OpenEPC service, or list available consoles.  No

Each of the scripts listed above is described in more detail in the subsections that follow.

check_oepc_config.pl

This script is the gateway for installing a custom configuration.  Its primary job is to enforce the security policy we are beholden to as licensees of Fraunhofer FOKUS for OpenEPC.  Secondarily, it does some light sanity checking of the input configuration.

Synopsis: 

check_oepc_config.pl [-i] <input_config_file> [<output_config_file>]

 E.g.: /opt/OpenEPC/etc/emulab/check_oepc_config.pl -i my_mme_config.xml mme.xml

 -i: Install the config file. Must supply <output_config_file>
 <output_config_file> is a file name (not a path) relative to /opt/OpenEPC/etc

check_oepc_config.pl makes sure certain aspects of user-supplied configuration can't provide elevated (root) access. Recall that attempting to do so is a violation of the OpenEPC sublicense agreement that all users of PhantomNet who use OpenEPC are required to sign. A key restriction is that individual module configuration must appear within the top-level configuration file.  The checker will not allow a configuration to "include" another file to configure a module.  In other words, all configuration directives must be inline in the same monolithic configuration file.
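
Concretely, the checker rejects configurations that pull a module's settings in from a separate file; the fragment has to be written inline.  Element and attribute names in this sketch are assumptions, not the actual OpenEPC schema:

```xml
<!-- REJECTED (illustrative): module configuration referenced from another file -->
<Module binaryFile="modules/mme.so" config="my_mme_fragment.xml"/>

<!-- ACCEPTED: the same configuration written inline in the monolithic file -->
<Module binaryFile="modules/mme.so">
    <!-- MME configuration directives go here, inline -->
</Module>
```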

When the "-i" (install) option is supplied, check_oepc_config.pl installs the validated configuration file as the filename indicated by <output_config_file> in the /opt/OpenEPC/etc directory.  Once there it can be referenced by the run_wharf.pl script (see below).  If the configuration file does not pass validation, it will not be installed.

Note: Run this script with "sudo"

run_wharf.pl

This script will start wharf against a custom configuration file.  It only looks for these configuration files in the /opt/OpenEPC/etc directory, which can only be written to by the check_oepc_config.pl script.

Synopsis:

Usage: /opt/OpenEPC/etc/emulab/run_wharf.pl <config_file_name>
        E.g.: /opt/OpenEPC/etc/emulab/run_wharf.pl someconf
        Script will search /opt/OpenEPC/etc for the config file, and add the suffix '.xml' to the name supplied.

As shown in the synopsis, only the base name of the configuration file needs to be supplied (without the .xml suffix).
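
The name resolution described above amounts to the following (a shell sketch of the documented behavior, not the script's actual Perl source):

```shell
# Sketch of how run_wharf.pl maps the supplied name to a config path,
# per the usage text above (not the actual implementation).
CONF_DIR="/opt/OpenEPC/etc"          # only directory that is searched
name="someconf"                      # what you pass on the command line
config_path="$CONF_DIR/$name.xml"    # the '.xml' suffix is added for you
echo "$config_path"                  # → /opt/OpenEPC/etc/someconf.xml
```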

Note: Run this script with "sudo"

kill_wharf.pl

The kill_wharf.pl script exists to stop (optionally with prejudice) a running custom wharf service.

Synopsis:

Usage:
        /opt/OpenEPC/etc/emulab/kill_wharf.pl [-f] <screen_name>
        /opt/OpenEPC/etc/emulab/kill_wharf.pl -l

        E.g.: /opt/OpenEPC/etc/emulab/kill_wharf.pl my-mme
        -f: Force mode - Immediately destroy with SIGKILL.
        -l: List mode - print list of valid wharf session names.

Without the "-f" option, kill_wharf.pl will try a few times to stop the top-level wharf process with SIGTERM before resorting to SIGKILL. When the "-l" option is given, kill_wharf.pl doesn't actually terminate any running wharf session, but rather lists all those it can find.
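
The escalation behavior can be sketched as follows; this is illustrative shell, not the actual kill_wharf.pl source, and the real script targets wharf screen sessions rather than bare PIDs:

```shell
# Illustrative sketch of "try SIGTERM a few times, then resort to SIGKILL".
stop_gracefully() {
    pid="$1"
    for attempt in 1 2 3; do
        kill -TERM "$pid" 2>/dev/null || return 0   # process already gone
        sleep 1
        kill -0 "$pid" 2>/dev/null || return 0      # exited after SIGTERM
    done
    kill -KILL "$pid" 2>/dev/null                   # "with prejudice"
}

sleep 100 &                  # stand-in for a running wharf process
stop_gracefully "$!"
```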

Note: Run this script with "sudo"

attach_wharf.pl

This script is the custom counterpart to the *.attach.sh scripts for standard OpenEPC services.  It will connect you to the console of a running custom wharf service instance.

Synopsis:

Usage:
        /opt/OpenEPC/etc/emulab/attach_wharf.pl [-f] <screen_name>
        /opt/OpenEPC/etc/emulab/attach_wharf.pl -l

        E.g.: /opt/OpenEPC/etc/emulab/attach_wharf.pl my-mme
        -f: Force mode - first detach the screen if currently attached.
        -l: List mode - print list of valid screen session names.

The attach_wharf.pl script interprets its arguments similarly to the kill_wharf.pl script. It also has "list" and "force" modes.

Pre- and Post-hook Scripts

PhantomNet provides a way for you to tie in custom startup scripts before and after the main body of the startup script executes.  These are called pre-hook and post-hook scripts, respectively.  They can be any valid script/executable.  You might use these to, e.g., install your custom configuration files and start up a wharf instance against them.  A couple of notes about these scripts:

  • pre- and post-hook scripts are run every time the node boots

Therefore, if you want run-once semantics, you'll have to use a flag file, or some other state-tracking mechanism, in your script/command's logic.

  • pre- and post-hook scripts are run as the swapper user.

That is, as the user that swapped in the experiment.  Not as the root user.
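
A minimal run-once guard along these lines can provide that behavior.  The flag-file location and the commented-out install step are hypothetical placeholders; a real hook would keep the flag somewhere that persists across reboots (mktemp is used here only to keep the sketch self-contained):

```shell
#!/bin/sh
# Run-once guard sketch for a pre/post-hook script.
STATE_DIR="$(mktemp -d)"             # stand-in for a persistent location
FLAG="$STATE_DIR/prehook-done"

run_hook() {
    if [ -e "$FLAG" ]; then
        echo "skipped"
        return 0
    fi
    # real work goes here, e.g. (hypothetical):
    # sudo /opt/OpenEPC/bin/check_oepc_config.pl -i my_mme_config.xml mme.xml
    touch "$FLAG"
    echo "ran"
}

first="$(run_hook)"      # first boot: does the work
second="$(run_hook)"     # later boots: skipped
```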

You tie in your pre- and post-hook scripts via the NS file for your experiment.  In the reference PhantomNet NS files, the "epcnode" function takes up to four arguments.  Synopsis:

epcnode <node_name> [<epc_role>] [<hostname>] [<pre_hook_script>] [<post_hook_script>]

Interpretation of each argument:

  • node_name - this will become the hostname of the node, and is what you will see when you look at the list of nodes in your experiment.
  • epc_role - This argument should be one from the list of predefined roles available in PhantomNet/OpenEPC:
    • epc-enablers - where common services such as HSS, AAA, PCRF run.  Also acts as a NAT gateway and web proxy for your topology.
    • sgw-mme-sgsn - as the name implies, a node with this role runs these three services: S-GW, MME, and SGSN.
    • pgw - Runs the PDN-GW service.
    • enodeb - Runs an emulated eNodeB service instance.  The canned PhantomNet scripts allow you to add up to three of these.
    • epc-client - Runs an emulated UE.  Currently PhantomNet allows you to ask for one or two UEs (via a count variable in the template NS files).  The hostname argument is required.
    • any - This is a pseudo-role.  The PhantomNet startup scripts will prepare the node (e.g., set up the DB schema) such that any service can run on it.
  • hostname - the epc-role specific hostname.  Most roles don't require this argument.  It can be specified for the "enodeb" role and MUST be specified for "epc-client" nodes.
  • pre_hook_script - If present, this command will be run before the main body of PhantomNet's OpenEPC startup scripts.
  • post_hook_script - If present, this command will be run after the main body of PhantomNet's OpenEPC startup scripts.

Note that any argument, aside from node_name, can be '{}', which is "NULL" in Tcl.  If the epc_role argument is NULL, then the NS file will NOT set up a startup command automatically for you.  This is useful if you just want to define the node to load the OS image with the OpenEPC bits (as is done for other nodes with actual roles), but plan to control service startup and such yourself.  A NULL hostname will result in the startup scripts using a default name (when needed, and where possible).  A NULL pre- or post-hook script argument means no pre- or post-hook script.

The "epcnode" function in the NS file essentially passes the last four arguments to the PhantomNet startup wrapper (/opt/OpenEPC/bin/start_epc.sh), which in turn sets them up as parameters for the real startup script (/opt/OpenEPC/etc/emulab/epc_svc_control.pl).
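
As a hypothetical illustration (node names and hook paths here are examples, not PhantomNet defaults), NS file lines using these arguments might look like:

```tcl
# Wire pre- and post-hook scripts into an sgw-mme-sgsn node; the hostname
# argument is NULL ({}) since that role does not require one.
epcnode mme1 sgw-mme-sgsn {} /proj/myproj/myuser/hooks/pre.sh /proj/myproj/myuser/hooks/post.sh

# NULL role: load the OpenEPC image, but control service startup yourself.
epcnode custom1 {} {} {} {}
```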

Putting It All Together

You have a great idea, one (or more) custom wharf XML configuration files to set things up as you want them, but now what?  How do you actually use the above tools to instantiate your modified EPC?  Here are the high level steps:

  • Stage your custom configuration file(s).
  • Write a pre-hook script to install your custom XML configuration file.
  • (Optional) Write a post-hook script to start your custom OpenEPC (wharf) service.
  • Tie your pre/post-hook scripts into your PhantomNet NS file
  • Profit.

Stage your custom configuration file(s)

Place your customized wharf XML configuration files somewhere they can be accessed/referenced by your pre-hook script.  A good choice would be:

  • /proj/<your_project_name>/<your_uid>/somedir

Create directories in the above path as needed (if they don't exist).  Your configuration files should have a ".xml" suffix.

Write a pre-hook script to install your custom XML configuration file.

The PhantomNet startup script will call into user-supplied pre/post-hook scripts, if supplied. You can do whatever you like in this script, but a key reason to use this facility is to install your custom wharf configuration file(s).  Inside your script, use the check_oepc_config.pl script (via sudo) to validate and install your config.  See the usage instructions elsewhere in this document for more info on using the check_oepc_config.pl script.

Notes: You need to stage your pre-hook script in a location accessible to your node.  As with your custom XML config files, this could be in "/proj/<your_project>/<your_username>/somedir/". Note that the hook scripts are run every time a node boots! This means that if you want the hook script to have run-once semantics, you'll have to make use of a flag file or other stateful check to accomplish such behavior. Also, be aware that the hook scripts run as the swapper user; that is, as the user that swapped in the experiment.

An example pre-hook script can be found here.

(Optional) Write a post-hook script to start your custom OpenEPC service.

OK, so the post-hook script can do whatever you like, but it is probably most useful for starting up OpenEPC wharf against your custom configuration.  A minimal script might simply call "/opt/OpenEPC/bin/run_wharf.pl <your_config_name>" (via sudo).

The pre-hook script notes apply here as well!

An example post-hook script can be found here.

Tie your pre/post-hook scripts into your PhantomNet NS file.

The prior subsections on pre/post hooks already mentioned staging these scripts in an accessible location.  Simply refer to them by absolute path in the parameters to 'epcnode', as discussed in the "Pre- and Post-hook Scripts" section of this document.

Profit!

Swap in your now-staged and prepared experiment and (hopefully) watch the magic happen!  If something goes wrong, take a look at the following log file for clues:

/opt/OpenEPC/log/oepc-services.log

Note that this is where any output your pre/post hook scripts produce will go.  We also recommend setting up the logfile directive in the "core" section of the XML configuration file if you are having trouble with your custom OpenEPC configuration.  Setting the level to "debug" produces a lot of output, but can be useful in determining what went wrong inside the wharf environment.
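
As a purely hypothetical sketch (the element and attribute names are assumptions; take the real directive names from the OpenEPC sample configuration files), the idea is a core-section entry along these lines:

```xml
<!-- Hypothetical sketch: real names come from the OpenEPC sample configs. -->
<Core>
    <!-- dedicated logfile plus verbose "debug" level for troubleshooting -->
    <LogFile path="/opt/OpenEPC/log/my_wharf.log" level="debug"/>
</Core>
```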

Configuration Restrictions

There are a few restrictions on modules and allowed configuration directives that can appear in your custom wharf XML config files.  These are enumerated here.

  • No external includes

Wharf allows you to reference external configuration files for individual components.  We disallow this because there is no easy way for us to police the contents of these files.  In other words, all of your configuration directives - for all modules - must appear inline in one big monolithic configuration file.

  • Blacklisted wharf modules

We blacklist the following modules because their configuration directives are sufficiently loose so as to allow trivial root privilege escalation:

mm_3gpp
mm_wlan
cdr_file
ANDSF
CGF_FTP
SEQN_LOG

If you find that you critically need any of these components, feel free to make a case for enhancing the checker to allow them on the phantomnet-users Google Group forum.

  • Individual module restrictions
    • mm.so - XPath: MM/NetworkList/@file - Can only reference the PhantomNet provided/generated NetworkList file at /opt/OpenEPC/etc/mm_network.xml.
    • flowmon.so - XPath: WharfFlowMon/tcpdump/@node_id_append - Allowed characters restricted to alphanumeric, plus underscore, dash, and dot.
    • ofs.so - XPath: OpenFlowSwitch/Config/@dumpfile - The dumpfile directory must be /var/tmp, and the filename is restricted to alphanumeric characters, plus underscore, dash, and dot.
    • routing.so - XPath: WharfROUTING/Extension/@mod_name - Routing submodule names restricted to those beginning with 'routing_'.
    • routing_ofp.so - XPath: WharfROUTING/Extension/@mod_name - Same restrictions as routing.so.

Note that "XPath" in the above descriptions refers to the XPath search string that would find the corresponding configuration attribute/element in the module's configuration.  These should be fairly obvious if you are delving into this part of the configuration.