Introduction to MN’s Overlay Network Models (Part 1 – Provider Router)

In this series of articles we’ll discuss how the overlay network is modeled in MidoNet. Although the concepts also apply to non-OpenStack setups, we’ll focus on OpenStack and point out how Neutron concepts relate to MidoNet concepts. In this post we focus on MidoNet’s Provider Router; in Part 2 we’ll discuss Tenant Routers.

Note that all of Neutron’s models are native to MidoNet’s API, but MidoNet’s API also includes some lower-level models. MidoNet’s agents understand all of the lower-level models but only some of the Neutron models, so MidoNet’s API translates some Neutron models into low-level models. Both the Neutron models and the low-level models are stored in Apache ZooKeeper and propagated from there to the MidoNet agents as needed.
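To make the translate-and-store flow concrete, here is a minimal, hypothetical sketch of the pattern (the ZooKeeper paths, object fields, and the “network becomes a bridge” mapping are illustrative assumptions, not MidoNet’s actual code or data layout): the API layer converts a Neutron object into a lower-level object, writes both to ZooKeeper, and an agent watches only the low-level path it needs.

    # Sketch only: hypothetical paths and fields, using the kazoo ZooKeeper client.
    import json
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()

    neutron_network = {"id": "net-1", "name": "tenant-a-net"}

    # Assumed translation for illustration: a Neutron "network" becomes a
    # low-level "bridge" object that the agents know how to simulate.
    low_level_bridge = {"id": neutron_network["id"], "type": "bridge"}

    # The API stores both the Neutron model and the translated low-level model.
    zk.create("/neutron/networks/net-1",
              json.dumps(neutron_network).encode(), makepath=True)
    zk.create("/midonet/bridges/net-1",
              json.dumps(low_level_bridge).encode(), makepath=True)

    # An agent subscribes only to the low-level objects it needs.
    @zk.DataWatch("/midonet/bridges/net-1")
    def on_bridge_update(data, stat):
        if data is not None:
            print("agent sees bridge config:", json.loads(data.decode()))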

Finally, as we discuss the overlay models, remember that the concepts don’t necessarily map 1-1 to physical concepts.

MidoNet’s Provider Router

A typical MidoNet deployment (certainly any MidoNet/OpenStack deployment) will have a single router which we call the Provider Router. Don’t confuse this with Neutron’s “provider network” concept. MidoNet’s Provider Router is an overlay (read: virtual/logical) router, owned by the cloud operator, that provides L3 connectivity between Tenants or between Tenants and the Internet.

In a typical deployment, the Provider Router has 3 uplinks. The Provider Router may have 3 ECMP default static routes, one for each uplink. Alternatively, BGP may be set up so that this router can dynamically learn uplink routes (and advertise its own).
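As a rough illustration, creating those three ECMP default routes through a REST API might look like the sketch below. The base URL, route fields, and IDs are placeholders chosen for readability, not a copy of MidoNet’s documented API.

    # Hedged sketch: assumed endpoint and field names, placeholder UUIDs.
    import requests

    API = "http://midonet-api.example:8080/midonet-api"          # assumed
    PROVIDER_ROUTER_ID = "00000000-0000-0000-0000-000000000001"  # placeholder

    uplink_gateways = [
        ("uplink1-port-id", "198.51.100.1"),
        ("uplink2-port-id", "198.51.100.5"),
        ("uplink3-port-id", "198.51.100.9"),
    ]

    for port_id, next_hop in uplink_gateways:
        route = {
            "type": "Normal",
            "dstNetworkAddr": "0.0.0.0",   # default route
            "dstNetworkLength": 0,
            "nextHopPort": port_id,        # each route names its egress port
            "nextHopGateway": next_hop,
            "weight": 100,                 # equal weights -> ECMP across uplinks
        }
        requests.post(f"{API}/routers/{PROVIDER_ROUTER_ID}/routes", json=route)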

The diagrams below show the difference between inter-networking Tenant Routers with MidoNet’s Provider Router vs. Neutron’s External Network. The External Network requires all Tenant Routers to be connected to the same L2 network and doesn’t support dynamic route learning and advertisement. In contrast, with MidoNet’s Provider Router:

  1. A flow from VM1 in Tenant A’s network to VM2 in Tenant B’s network doesn’t leave the overlay. Therefore MidoNet (and some other SDNs) can tunnel the flow directly from VM1’s host to VM2’s host.
  2. All external as well as inter-tenant traffic passes through the Provider Router’s uplink ports, providing a well-defined set of points at which to apply traffic policy and learn or advertise routes. Note that the Provider Router can have any number of uplinks.
Inter-tenant connectivity via Neutron External Network

Inter-tenant connectivity via MN Provider Router

In a typical MidoNet deployment, the Provider Router is the first logical device created (via API) after the software has been installed. The deployer/admin chooses 3 commodity servers, one for each Provider Router uplink. Each of these servers is referred to as an L3 Gateway Node. In production deployments L3 Gateway Nodes are entirely dedicated to processing the North-South traffic for one Provider Router uplink, while in test deployments an L3 Gateway Node may also be an OpenStack Compute host. Each L3 Gateway Node should have a NIC dedicated to the uplink traffic.
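Binding an uplink port to that dedicated NIC is an API (or CLI) operation, described in more detail below. A hypothetical sketch of such a binding call (the endpoint path and field names are assumptions for illustration):

    # Hedged sketch: assumed endpoint and field names, placeholder IDs.
    import requests

    API = "http://midonet-api.example:8080/midonet-api"  # assumed
    GATEWAY_HOST_ID = "host10-uuid"                      # placeholder host ID
    UPLINK1_PORT_ID = "uplink1-port-id"                  # placeholder port ID

    # Bind the Provider Router's uplink port to the gateway node's dedicated NIC.
    binding = {"portId": UPLINK1_PORT_ID, "interfaceName": "eth0"}
    requests.post(f"{API}/hosts/{GATEWAY_HOST_ID}/ports", json=binding)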

The diagram below shows how the Provider Router uplinks are mapped to physical NICs on commodity hosts that act as L3 Gateway Nodes. For best throughput and to minimize fate-sharing, the Gateway Nodes should be placed in different racks. Not every rack needs a Gateway Node. The number of Gateway Nodes depends on the required North-South bandwidth for the entire cloud.

The dashed red line shows how the Provider Router Uplinks (in the Overlay/Logical layer in the bottom half) map to physical NICs on the commodity X86 servers in each rack that act as L3 Gateway Nodes.

So how does the Provider Router know about the operational state of its uplinks? Each uplink is explicitly bound (via an API call or CLI command) to a specific network interface on a host running the MN Agent. Assume Uplink1 is bound to eth0 on Host10. When the MN Agent on Host10 learns about the binding, it issues a call to the local datapath (e.g. a netlink call to the Open vSwitch kernel module datapath) to add eth0 as a netdev port (the host IP network stack can no longer use eth0). Once the new datapath port has been added and its operational state is UP, the MN Agent publishes to Apache ZooKeeper that the virtual router port Uplink1 is UP and located on Host10. As a result of Uplink1 being UP, any route via Uplink1 (MN Route objects explicitly specify both their next-hop gateway and their virtual router egress port) is added to the Provider Router’s forwarding table. These routes are automatically removed if Uplink1 goes down or if the MN Agent on Host10 fails.
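The “routes vanish if the agent fails” behavior is the sort of thing ZooKeeper’s ephemeral nodes make straightforward. The sketch below (assumed paths and fields, not MidoNet’s actual layout) shows the idea: the agent advertises the port’s UP state with an ephemeral node, and anything watching that node treats its disappearance as the signal to withdraw routes via that port.

    # Sketch only: hypothetical ZooKeeper path and payload, using kazoo.
    import json
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()

    port_status_path = "/midonet/port_status/uplink1"  # assumed path

    @zk.DataWatch(port_status_path)
    def on_port_status(data, stat):
        # Something like this runs wherever the router's forwarding table is
        # maintained: install routes via Uplink1 when it is UP, withdraw otherwise.
        if data is None:
            print("Uplink1 down or its agent gone: withdraw routes via Uplink1")
        else:
            print("Uplink1 up:", json.loads(data.decode()), "- install its routes")

    def publish_port_up(host_id):
        # Ephemeral: ZooKeeper deletes the node when this agent's session ends,
        # which is what makes route withdrawal on agent failure automatic.
        zk.create(port_status_path,
                  json.dumps({"state": "UP", "host": host_id}).encode(),
                  ephemeral=True, makepath=True)

    publish_port_up("Host10")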

In Part 2 of this series we’ll discuss Tenant Routers.

 
