Introducing MidoNet Cluster services

In this post, we’ll introduce the new management services included in the next version of MidoNet (5.0.0), which will be released in late September 2015.

Those acquainted with the MidoNet architecture will already be familiar with the MidoNet Agent.  This daemon runs on hypervisors and gateways, managing a set of datapath ports that reside physically on the same host as the daemon and are bound to virtual ports in the overlay topology.  A packet first ingressing these ports (typically from VMs) will be handed off by the kernel to the MidoNet Agent so it can simulate the packet’s path through the overlay topology and install the appropriate kernel flow to direct any further matching packets to their destination. MidoNet Agents collaborate to manage some distributed state (e.g. MAC and ARP tables in virtual devices, routing tables, connection tracking or NAT mappings), but in practice most of the complexity involved in distributed state management is delegated to well-established data stores such as Apache ZooKeeper.
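
To make the simulate-then-install-flow idea concrete, here is a minimal sketch of that handling loop. This is purely illustrative: the interfaces and names below (Packet, Simulator, Datapath, and so on) are hypothetical stand-ins, not MidoNet’s actual classes.

```java
// Illustrative sketch only: hypothetical types, not MidoNet's real code.
interface FlowMatch {}
interface FlowActions {}
interface Packet { FlowMatch flowMatch(); }
interface Simulator { FlowActions simulate(Packet packet); }
interface Datapath {
    void installFlow(FlowMatch match, FlowActions actions);
    void executePacket(Packet packet, FlowActions actions);
}

final class UpcallHandler {
    private final Simulator simulator;  // walks the virtual topology
    private final Datapath datapath;    // talks to the kernel datapath

    UpcallHandler(Simulator simulator, Datapath datapath) {
        this.simulator = simulator;
        this.datapath = datapath;
    }

    // Called when a packet misses in the kernel and is handed up to the Agent.
    void onPacketIn(Packet packet) {
        // Simulate the packet's path through the overlay topology.
        FlowActions actions = simulator.simulate(packet);
        // Install a kernel flow so further matching packets bypass user space.
        datapath.installFlow(packet.flowMatch(), actions);
        // Release the triggering packet with the computed actions.
        datapath.executePacket(packet, actions);
    }
}
```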

Certain management functions aren’t bound to specific physical elements co-located with the MidoNet Agent. An example appears in our implementation of VxLAN Gateways, where we need to synchronize state between MidoNet’s Network State DataBase (NSDB) and multiple VTEP-capable hardware switches.  Since VTEPs store state in local Open vSwitch DataBase (OVSDB) instances, a primary task of our VxLAN Gateway controller is to keep several data stores in sync.  This will hopefully give a glimpse into the rabbit hole of constraints and complexity involved, and why it makes sense to move such control functions into a different type of node: the MidoNet Cluster.
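
The reconciliation idea can be sketched roughly as below. Again, the interfaces are hypothetical and deliberately simplified; the real controller reacts to change notifications, handles removals, conflicts and failures, and speaks the actual OVSDB protocol.

```java
// Illustrative sketch of synchronizing MAC-to-VTEP bindings between two
// stores (hypothetical interfaces, not MidoNet's actual code).
import java.util.Map;

interface MacTable {
    Map<String, String> snapshot();          // MAC -> tunnel endpoint IP
    void put(String mac, String tunnelIp);
}

final class VtepSynchronizer {
    private final MacTable nsdbSide;   // view of the table stored in the NSDB
    private final MacTable ovsdbSide;  // view of the table in the VTEP's OVSDB

    VtepSynchronizer(MacTable nsdbSide, MacTable ovsdbSide) {
        this.nsdbSide = nsdbSide;
        this.ovsdbSide = ovsdbSide;
    }

    // One reconciliation pass in both directions.
    void reconcile() {
        copyMissing(nsdbSide, ovsdbSide);
        copyMissing(ovsdbSide, nsdbSide);
    }

    private void copyMissing(MacTable from, MacTable to) {
        Map<String, String> target = to.snapshot();
        from.snapshot().forEach((mac, ip) -> {
            if (!ip.equals(target.get(mac)))
                to.put(mac, ip);   // add or update the stale binding
        });
    }
}
```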

Other good candidates for future Cluster services include controllers for underlay health checks or routing.  The MidoNet REST API is already implemented as a Cluster service, and we’ll soon provide ZooKeeper as an embedded service.  Both will simplify the MidoNet installation and bootstrapping experience by removing the need for users to install ZooKeeper and Tomcat as external dependencies.

In order to support this heterogeneity, the MidoNet Cluster includes a very thin execution framework to manage arbitrary services.  Developers are free to implement services as highly autonomous pieces of logic, each defining its own strategies for redundancy, failover, etc.
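
As a rough idea of what such an autonomous service could look like, here is a minimal sketch assuming a Guava-style service lifecycle (com.google.common.util.concurrent). The actual base class and registration mechanism exposed by the MidoNet Cluster may differ; the point is only that the service owns its own logic and schedule, while the Cluster merely starts and stops it.

```java
// Minimal sketch of a Cluster-style service, assuming a Guava service
// lifecycle. Names and behaviour are illustrative, not MidoNet's real API.
import com.google.common.util.concurrent.AbstractService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HealthCheckService extends AbstractService {
    private final ScheduledExecutorService executor =
        Executors.newSingleThreadScheduledExecutor();

    @Override
    protected void doStart() {
        // The service decides its own schedule, coordination and fail-over
        // strategy; the hosting daemon only drives the lifecycle.
        executor.scheduleAtFixedRate(this::checkPeers, 0, 30, TimeUnit.SECONDS);
        notifyStarted();
    }

    @Override
    protected void doStop() {
        executor.shutdownNow();
        notifyStopped();
    }

    private void checkPeers() {
        // Hypothetical work: probe underlay reachability and report results.
    }
}
```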

However heterogeneous, most services will also share basic infrastructure.  They need reliable access to the overlay configuration using the NSDB API (ZOOM), the ability to interact with other components and services, or the ability to expose their own metrics through the JMX endpoint of their host Cluster daemon.  All this plumbing is provided by the MidoNet Cluster and is available to all services with no added development effort.
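
For instance, if the Cluster hands each service a shared metric registry (shown here with the Dropwizard Metrics library purely as an assumption about the plumbing), exposing a metric over the daemon’s JMX endpoint amounts to a single registration call in the service:

```java
// Illustrative only: assumes Dropwizard Metrics; the registry actually
// injected by the MidoNet Cluster may be different.
import com.codahale.metrics.Gauge;
import com.codahale.metrics.MetricRegistry;
import java.util.concurrent.atomic.AtomicLong;

class VtepSyncMetrics {
    private final AtomicLong syncedBindings = new AtomicLong();

    VtepSyncMetrics(MetricRegistry registry) {
        // Once registered, the metric shows up under the Cluster daemon's
        // JMX endpoint with no extra work in the service itself.
        registry.register(MetricRegistry.name("vxgw", "syncedBindings"),
                          (Gauge<Long>) syncedBindings::get);
    }

    void onBindingSynced() {
        syncedBindings.incrementAndGet();
    }
}
```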

In a MidoNet deployment, operators install the midonet-cluster package on the machines designated to run as Cluster nodes, and every service included with the OSS packages is immediately available on each of them.  The operator can then selectively enable or disable any of them.  As noted above, instances of the same service running on different Cluster nodes use their own coordination strategies to carry out their designated tasks, so the operational cost for operators is minimal.

An additional feature of the MidoNet Cluster is the ability to write and deploy services as add-ons.  This enables OSS contributors and proprietary organisations to develop new Cluster services and make them available in existing MidoNet deployments.

We will follow up with a new post demonstrating how developers can write a simple, yet fully functional MidoNet Cluster service.
