Docker Networking sets sail with MidoNet

by Tim Fall (tim@midokura.com) and Antoni Puimedon

Introduction

So we can all agree that Docker is cool. Containers are cool, repeatability is cool, portability is cool, we’re all cool. Everything is unicorns and rainbows. But something is missing from this fairy-tale land, and it’s something we all like to forget about. With a wide world of other containers and services out there (“world wide web,” anyone?), we want to make use of those things and connect ourselves in.

That’s it! We forgot networking!

Current State of Affairs

Well, that’s not quite fair; we didn’t forget it per se, we just all got caught up in building cool containers and plumb left it for later.

Docker Networking

Out of the box, Docker gives you four networking modes that cover many of the container use cases that have propelled Docker to popularity. However, while they offer different degrees of isolation, they all lack multi-host and advanced networking capabilities. They are:

NAT Bridge

This is the default Docker networking option, and in this case the default is almost invariably the setting everybody uses.

It provides network namespace isolation and communication between containers on the same host, and, by leveraging iptables, it can publish container ports in the address space of the host.
[Figure: NAT bridge networking, showing how veths and bridges provide connectivity]
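
As a concrete sketch (the nginx image and port numbers here are just examples), publishing a port maps it from the host’s address space into the container through an iptables DNAT rule:

    # Run a web server on the default NAT bridge, publishing
    # container port 80 as host port 8080
    docker run -d --name web -p 8080:80 nginx

    # Docker implements the mapping as a DNAT rule in the nat
    # table's DOCKER chain; the target is the container's bridge IP
    sudo iptables -t nat -L DOCKER -n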

Host

Under this networking setting, containers are spawned in the same networking namespace in which the Docker daemon is running. This allows containers to see exactly the same networking as the host.

It is easy to see that using this mode means you must trust the containers you run, because they are capable of negatively impacting your networking configuration. It is important to note, though, that host networking would be really useful if one were building containers for network plumbing: one could package a daemon and its dependencies into an image and then let it work on the base network namespace.
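
For illustration (nginx and busybox are just example images), a host-mode container binds ports directly on the host and sees the host’s interfaces:

    # No veth, no NAT: nginx binds port 80 directly on the host
    docker run -d --net=host nginx

    # The interfaces listed inside the container are the host's own
    docker run --rm --net=host busybox ip addr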

Container

Container networking allows multiple containers to communicate with each other over the loopback device, since they share the same networking namespace.

[Figure: container-based configuration, similar to Kubernetes Pods]

If you are familiar with Kubernetes, you will notice that when two containers share a networking namespace, the picture is very similar to that of a Kubernetes Pod hosting containers.
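
For example (the container name “web” is illustrative), a second container can join the first one’s namespace and reach it over loopback, just as containers in a Pod do:

    # Start a container that owns the networking namespace
    docker run -d --name web nginx

    # Join that namespace; both containers now share loopback,
    # so the web server is reachable at 127.0.0.1
    docker run --rm --net=container:web busybox wget -qO- http://127.0.0.1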

None

A container with nothing in its networking namespace except the loopback device. This can prove useful for non-networking container tasks that one does not want to build a namespace and network infrastructure for.
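
A quick check (busybox as an example image) confirms that only loopback is present:

    # No external connectivity; "ip addr" lists only the lo device
    docker run --rm --net=none busybox ip addr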

Flannel and Weave

Overlay solutions that build on top of the NAT bridge model, such as Flannel and Weave, are starting to emerge. Flannel, for instance, sets up a /24 subnet for each host and makes those subnets routable to one another. Packets travel between containers on different hosts thanks to the tunneling that Flannel sets up.

[Figure: a typical Flannel configuration]

To make things more convenient, Flannel prepares a configuration file that the Docker daemon picks up at startup. It tells Docker which address space the Docker Linux bridge should use, so that the veths of newly launched containers are placed on that bridge.
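
The handoff looks roughly like this (paths and variable names as used in typical Flannel setups of the time; details vary by version):

    # flanneld records the subnet it leased for this host, e.g.
    #   FLANNEL_SUBNET=10.1.42.1/24
    #   FLANNEL_MTU=1450
    source /run/flannel/subnet.env

    # The Docker daemon is then started with a matching bridge IP
    # and MTU, so new containers' veths land on the Flannel subnet
    docker daemon --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}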

This was a big step, as it finally allowed cross-host communication in an effective and simple way. While this covers many cross-host use cases in Dev & Test, solutions leveraging Open vSwitch will be insufficient for production deployment because they suffer from design limitations inherent in the technology.

libnetwork (“*vendor training not required”)

libnetwork is a new project from Docker designed to bring full-fledged networking to containers and make it a first-class citizen. With the acquisition of SocketPlane.io, an experienced team of networking experts at Docker has been hard at work making this a reality.

libnetwork’s integration into Docker, expected in the coming weeks, is backwards compatible and, at the same time, adds new actions to the Docker API/CLI, such as docker network create. Then, when running a container, it will be possible to specify which network it should use.
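
While the exact UX was still settling at the time of writing, the new actions were expected to look roughly like this:

    # Create a named network backed by a particular driver
    docker network create --driver bridge frontend

    # Attach a container to that network at run time
    docker run -d --net=frontend nginx

    # List and inspect the networks the daemon knows about
    docker network ls
    docker network inspect frontend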

With the emergence of libnetwork, vendors no longer have to rely on wrapping the Docker API/CLI the way Weave and others do. The libnetwork project is designed to create a framework that will live alongside the other core frameworks in Docker (libcontainer, Compose, Machine, Registry, and Kitematic) and provide support for networking options. This will primarily take the form of a set of APIs against which people can create container-based options for a range of networking solutions. libnetwork will ensure much better integration, simpler deployment, and more responsive network performance. You can read more about how libnetwork works in the blog post and in the repository.

MidoNet and the libnetwork Future

MidoNet

MidoNet is open source, distributed network virtualization software. The MidoNet project has already been integrated with a number of other open source projects, including OpenStack and OPNFV, so it seemed like a natural fit for it to be one of the first projects to leverage libnetwork.

MidoNet uses an agent to manage connections between containers (both intra- and inter-host) and creates point-to-point tunnels to carry traffic between them. The network topology and all system data are stored in a clustered database built on open source technologies like Cassandra and ZooKeeper, and they are directly accessible via a number of API endpoints.

[Figure: layout of overlay networks in MidoNet]

For more detailed information on MidoNet see midonet.org and midonet.github.com.

Container Networking

Prior to libnetwork

Before the introduction of libnetwork, MidoNet relied on the Docker event interface to gather state information about running containers and to watch for new events. This approach worked, but it did not provide the truly native support of a first-class citizen network.

[Figure: the evented design, in which actions originate in dockerd and midockerd reacts to them]
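
A rough sketch of that evented approach: a watcher tails the event stream and reacts to container lifecycle changes (handle_container_start is a hypothetical hook, and the exact line format varies across Docker versions):

    # Watch the Docker event stream and trigger network plumbing
    # whenever a container starts
    docker events | while read -r line; do
        case "$line" in
            *" start"*) handle_container_start "$line" ;;
        esac
    done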

Pros:

  • Lightweight. No need for complex listeners or loops
  • Simple. Uses the same tools and CLI as normal Docker
  • Functional. Enabled complex networking without touching the Docker core

Cons:

  • Reactive. An event-driven mechanism can only respond after the fact
  • One-way street. No native mechanism for container awareness of network conditions
  • Additional tooling. Complex network changes required using the MidoNet CLI to edit the network directly

With libnetwork

libnetwork allows a mechanism driver to provide networking functions that the core Docker functions are aware of. A “plugin” framework allows direct support of networking between containers and across hosts, including functions like:

  • Tunneling
  • Load balancing
  • Cross host networking
  • And many more

There is also work being done on supporting cross-engine networking.

MidoNet will leverage this new model to support feature-rich, highly scalable, fast networking. Rather than relying on “wrapping” a new set of tools around the existing Docker API and CLI, MidoNet will now use libnetwork to provide a network “driver” service that advertises functions directly to the other Docker components. This will expand the networking capabilities of Docker while also substantially simplifying interaction with networking drivers.
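
In practice this should feel like any other libnetwork driver; the driver name below is purely illustrative, not a released plugin:

    # Hypothetical: create a MidoNet-backed network via a libnetwork
    # driver, then attach containers to it from any host
    docker network create --driver midonet mido-net
    docker run -d --net=mido-net nginx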

Get Started

The current working version of MidoNet with Docker is part of a broader project to make MidoNet compatible with Docker, Swarm, Kubernetes, Mesosphere, and other distributed systems. You can find the project and instructions for running it in the project’s repository.

[Figure: MidoNet Bees, MidoNet for Swarm, Kubernetes, and Mesosphere]

Contribute!

Both libnetwork and MidoNet are open source. You can get the source code from the official repositories.

Join the conversation. Talk with developers, get help, and contribute.

  • #docker-dev and #docker-networking IRC channels on Freenode
  • MidoNet on Slack
 
