Containernet is a fork of the famous Mininet network emulator that allows you to use Docker containers as hosts in emulated network topologies. This enables interesting functionality for building networking/cloud emulators and testbeds. One example is the NFV multi-PoP infrastructure emulator created by the SONATA project and later adopted by the OpenSource MANO (OSM) project.
## Containernet in action
## Cite this work
If you use Containernet for your work, please cite the following publication:
- M. Peuster, H. Karl, and S. v. Rossem: MeDICINE: Rapid Prototyping of Production-Ready Network Services in Multi-PoP Environments. IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), Palo Alto, CA, USA, pp. 148-153. doi: 10.1109/NFV-SDN.2016.7919490. (2016)
Using Containernet is very similar to using Mininet with custom topologies.
## Create a custom topology
To start, a Python-based network topology description has to be created as shown in the following example:
""" Example topology with two containers (d1, d2), two switches, and one controller: - (c)- | | (d1) - (s1) - (s2) - (d2) """ from mininet.net import Containernet from mininet.node import Controller from mininet.cli import CLI from mininet.link import TCLink from mininet.log import info, setLogLevel setLogLevel('info') net = Containernet(controller=Controller) info('*** Adding controller\n') net.addController('c0') info('*** Adding docker containers using ubuntu:trusty images\n') d1 = net.addDocker('d1', ip='10.0.0.251', dimage="ubuntu:trusty") d2 = net.addDocker('d2', ip='10.0.0.252', dimage="ubuntu:trusty") info('*** Adding switches\n') s1 = net.addSwitch('s1') s2 = net.addSwitch('s2') info('*** Creating links\n') net.addLink(d1, s1) net.addLink(s1, s2, cls=TCLink, delay='100ms', bw=1) net.addLink(s2, d2) info('*** Starting network\n') net.start() info('*** Testing connectivity\n') net.ping([d1, d2]) info('*** Running CLI\n') CLI(net) info('*** Stopping network') net.stop()
You can find this topology in
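Beyond `ip` and `dimage`, `addDocker()` accepts further keyword arguments that are passed through to Docker, such as `environment` and `volumes`; the exact set depends on your Containernet version, so check them against the API. As a sketch, a container with an environment variable and a bind-mounted host directory could be described like this (the paths, values, and container name are hypothetical examples):

```python
# Sketch: extra Docker options for a container host. The keyword names
# (environment, volumes) follow Containernet's addDocker() API; the paths
# and values below are hypothetical.
docker_opts = dict(
    ip="10.0.0.253",
    dimage="ubuntu:trusty",
    environment={"SERVICE_PORT": "8080"},  # env vars inside the container
    volumes=["/srv/data:/data:rw"],        # host_path:container_path:mode
)

# In a topology script this would be used as:
#   d3 = net.addDocker('d3', **docker_opts)
print(docker_opts["volumes"][0])  # → /srv/data:/data:rw
```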
## Run emulation and interact with containers
Containernet requires root access to configure the emulated network described by the topology script:
```
sudo python containernet_example.py
```
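If a previous run crashed and left stale interfaces or network state behind, the next start may fail. Mininet's standard cleanup command, which Containernet inherits, removes this leftover state (shown here as a usage hint; run it on the emulation host):

```
sudo mn -c
```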
After launching the emulated network, you can interact with the involved containers through Mininet's interactive CLI, as shown with the `ping` command in the following example:
```
containernet> d1 ping -c3 d2
PING 10.0.0.252 (10.0.0.252) 56(84) bytes of data.
64 bytes from 10.0.0.252: icmp_seq=1 ttl=64 time=200 ms
64 bytes from 10.0.0.252: icmp_seq=2 ttl=64 time=200 ms
64 bytes from 10.0.0.252: icmp_seq=3 ttl=64 time=200 ms

--- 10.0.0.252 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 200.162/200.316/200.621/0.424 ms
containernet>
```
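The ~200 ms round-trip time follows directly from the topology script: the s1-s2 link was created with `delay='100ms'`, and a ping crosses that link twice (echo request and reply). A quick back-of-the-envelope check in plain Python:

```python
# The s1-s2 link is configured with delay='100ms' (see the topology above).
# An ICMP echo request and its reply each traverse the link once, so the
# expected round-trip time is twice the one-way link delay.
link_delay_ms = 100
expected_rtt_ms = 2 * link_delay_ms
print(expected_rtt_ms)  # → 200, matching the ~200 ms reported by ping
```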
To stop the emulation, exit the CLI:

```
containernet> exit
```
## Installation

Containernet comes with three installation and deployment options.
### Option 1: Bare-metal installation
Automatic installation is provided through an Ansible playbook. Requires: Ubuntu 16.04 LTS.
```
sudo apt-get install ansible git aptitude
git clone https://github.com/containernet/containernet.git
cd containernet/ansible
sudo ansible-playbook -i "localhost," -c local install.yml
```
### Option 2: Nested Docker deployment
Containernet can be executed within a privileged Docker container (nested container deployment). A pre-built Docker image is also available on Docker Hub.
```
# build the container locally
docker build -t containernet .

# or pull the latest pre-built container
docker pull containernet/containernet

# run the container
docker run --name containernet -it --rm --privileged --pid='host' \
    -v /var/run/docker.sock:/var/run/docker.sock containernet /bin/bash
```
### Option 3: Vagrant-based VM creation
Using the provided Vagrantfile is another way to run and test Containernet:
```
vagrant up
vagrant ssh
```
## Publications

Containernet has been used for a variety of research tasks and networking projects. If you use Containernet, let us know.
- M. Peuster, H. Karl, and S. v. Rossem: MeDICINE: Rapid Prototyping of Production-Ready Network Services in Multi-PoP Environments. IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), Palo Alto, CA, USA, pp. 148-153. doi: 10.1109/NFV-SDN.2016.7919490. (2016)
- S. v. Rossem, W. Tavernier, M. Peuster, D. Colle, M. Pickavet and P. Demeester: Monitoring and debugging using an SDK for NFV-powered telecom applications. IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), Palo Alto, CA, USA, Demo Session. (2016)
- M. Peuster and H. Karl: Understand Your Chains: Towards Performance Profile-based Network Service Management. Fifth European Workshop on Software Defined Networks (EWSDN). IEEE. (2016)
- Y. Qiao et al.: Doopnet: An Emulator for Network Performance Analysis of Hadoop Clusters Using Docker and Mininet. IEEE Symposium on Computers and Communication (ISCC). (2016)
- M. Peuster, S. Dräxler, H. Razzaghi, S. v. Rossem, W. Tavernier and H. Karl: A Flexible Multi-PoP Infrastructure Emulator for Carrier-grade MANO Systems. IEEE 3rd Conference on Network Softwarization (NetSoft), Demo Track. (2017) **Best demo award!**
- M. Peuster and H. Karl: Profile Your Chains, Not Functions: Automated Network Service Profiling in DevOps Environments. IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), Berlin, Germany. (2017)
If you have any questions, please use GitHub’s issue system or Containernet’s Gitter channel to get in touch.
Your contributions are very welcome! Please fork the GitHub repository and create a pull request. We use Travis-CI to automatically test new commits.