Since v1.5, the MidoNet Java Agent has exposed many counters and metrics via JMX. Initially these related to the Agent’s workload and performance, but since v1.7 MidoNet also supports overlay and underlay traffic meters. There are four kinds of meters:
- device, to track packets/bytes arriving at a bridge or router
- port, to track packets/bytes transmitted-from/received-at a device port
- tunnel, to track packets/bytes tunneled from a source host to a destination host
- user defined, to track packets/bytes of flows matching conditions defined by the user
The first three don’t need to be configured: the MN Agent tracks them by default on all corresponding objects. User-defined meters need to be configured in rules (we’ll leave that for a future post).
Each Agent exposes its locally tracked meters via JMX. Remember that an Agent’s local meters only count packets/bytes that ingress locally, so usually it’s necessary to sum across all Agents in a deployment to get a meaningful count. That’s the case for Tenant Routers, for example.
mm-meter is a tool installed by the midolman package that lets you query a single Agent’s meters. The following is example output from its list command. Notice that this Agent only tracks meters for two ports: the two VM-facing ports on the same overlay bridge in my test deployment. This Agent only sees packets transmitted from c040e61d and received at 64594d49 because the former is bound at a remote Agent and the latter is bound locally. Also notice that the tunnel source and destination IP addresses are printed as integers. This should probably be fixed in MidoNet itself, but for this post I fixed it in the MN Agent scraper.
```
# mm-meter -h 18.104.22.168 -p 7200 list
meters:device:4fc20423-5e56-4cea-a4f3-cdf02917e7a3
meters:port:tx:c040e61d-8329-49b6-8a3a-a11b834fb2c5
meters:device:405fedd8-f03e-418d-8aff-36c843eb481f
meters:device:64594d49-cae0-4e21-8472-7d5c1fc30dc5
meters:tunnel:-1062731735:-1062731745
meters:device:0e561fb2-6130-4443-8a69-a6270c89727e
meters:tunnel:-1062731735:-1062731737
meters:device:c040e61d-8329-49b6-8a3a-a11b834fb2c5
meters:device:c5590f1f-2ce0-4d60-a4cc-d8e27d01dff7
meters:device:daa86f7c-8701-49ee-ad1f-b52f8fd6af36
meters:device:7303fdb8-03ad-4da6-8e10-3274ead3c018
meters:port:rx:64594d49-cae0-4e21-8472-7d5c1fc30dc5
meters:user:null
meters:device:c6cd63c0-aab4-491e-8f7a-45b3637e0cdc
meters:device:a11abdd3-8d0b-478f-8911-5259531c1c16
```
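Incidentally, those tunnel integers are just signed 32-bit representations of the IPv4 addresses, so they can be decoded with a little arithmetic. A quick bash sketch (the helper name to_ip is mine, not part of mm-meter):

```shell
#!/bin/bash
# Convert a signed 32-bit integer (as printed in meters:tunnel:<src>:<dst>)
# back to a dotted-quad IPv4 address.
to_ip() {
  # Map negative values into the unsigned 32-bit range.
  local n=$(( ($1 + 4294967296) % 4294967296 ))
  echo "$(( (n >> 24) & 255 )).$(( (n >> 16) & 255 )).$(( (n >> 8) & 255 )).$(( n & 255 ))"
}

to_ip -1062731735   # the tunnel source above: 192.168.0.41
to_ip -1062731745   # one of the destinations: 192.168.0.31
```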
You can query the packet and byte counters for a single meter. The following is example output from the get command. The last two arguments specify 4 updates at 60-second intervals. The first line of stats shows the total counters; afterwards only deltas are printed.
```
# mm-meter -h 22.214.171.124 -p 7200 get -n meters:port:tx:c040e61d-8329-49b6-8a3a-a11b834fb2c5 60 4
packets    bytes
   6390   268380
     10      420
     31     1126
      3      126
```
Prometheus is a recent Time Series Database by SoundCloud – it’s simply fantastic. In this post I’m going to help you set up Prometheus to collect, aggregate, and store MidoNet meters.
Start by installing Prometheus. You can follow their Installing and Getting Started documentation (kudos to SoundCloud for providing excellent documentation), but I provide short instructions here.
- First you need to install Docker. The Ubuntu install docs suggest: wget -qO- https://get.docker.com/ | sh
- Now launch the Prometheus container: docker run -p 9090:9090 prom/prometheus
- The Prometheus container is configured to monitor itself. Verify that Prometheus is running correctly by pointing your browser to http://<your-host-ip>:9090/. Scroll down to the Targets section; you should see one healthy target: http://localhost:9090/metrics.
Prometheus pulls metrics over HTTP. Since MN Agents only serve metrics via JMX, we’re going to need the JMX to Prometheus bridge. However, I’ve had to hack it a bit so that:
- it can read the MN Agent’s MeteringMXBean. This bean has 2 methods: String listMeters() and FlowStats getMeter(String name). In contrast, Prometheus’ JMX Exporter expects one bean per metric.
- it can scrape multiple MN Agents.
You can look at the patch for these hacks in my fork of the JMX Exporter.
Running the JMX Scraper
- Download the tar-file of the modified JMX Exporter here. It contains the following:
- JMX Exporter jar
- MidoNet jars containing the MeteringMXBean and FlowStats classes
- Shell script for running the JMX Exporter (run_mn_scraper.sh)
- JMX Exporter configuration file (mn_scraper_config.json) with MN Agent IP addresses, and rules matching only MidoNet’s device, port and tunnel metrics. Later we can add rules to expose the MN Agent’s other metrics.
- Prometheus configuration file that defines a job for pulling metrics from the JMX Exporter.
- Prometheus rules file with rule definitions to pre-aggregate device and port meters across all MN Agents.
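For reference, the bundled prometheus.conf should look roughly like the sketch below. It uses the protobuf-text configuration format of Prometheus releases current at the time of writing; the job name mn_meters is a placeholder of mine, and the target assumes the JMX Exporter is reachable at the docker0 address 172.17.42.1 (adjust for your host):

```
global: {
  scrape_interval: "15s"
  evaluation_interval: "15s"
  rule_file: "/prometheus.rules"
}

job: {
  name: "mn_meters"
  target_group: {
    target: "http://172.17.42.1:7201/metrics"
  }
}
```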
- tar -zxvf prom_mn_jmx_scraper.tar.gz
- cd prom_mn_jmx_scraper
- edit mn_scraper_config.json to provide the comma-separated list of IP addresses of all the MN Agents you want to scrape. For example: “hostPort”: “126.96.36.199:7200,188.8.131.52:7200”
- Launch run_mn_scraper.sh
- The scraper runs an httpserver on port 7201. Edit the script if you want to change the port.
- The JMX Exporter (when run as an http server as we’re doing) scrapes its target each time its URL is queried. Verify the Exporter is working correctly by pointing your browser to http://<your-host-ip>:7201/metrics. You should at least see the following at the end of the response:
```
# HELP jmx_scrape_duration_seconds Time this JMX scrape took, in seconds.
# TYPE jmx_scrape_duration_seconds gauge
jmx_scrape_duration_seconds 3.414254924
# HELP jmx_scrape_error Non-zero if this scrape failed.
# TYPE jmx_scrape_error gauge
jmx_scrape_error 0.0
```
- Now verify that your MN Agents are being scraped. The response should include lines like the following, where “mn_agent” shows the IP of your Agent. You should be able to find all the IPs you listed in mn_scraper_config.json:
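For example (the metric names here come from the exporter rules bundled in the tar-file; the label values and counter values are illustrative, not from a real deployment):

```
mndev_pkts{mn_agent="126.96.36.199",mn_id="7303fdb8-03ad-4da6-8e10-3274ead3c018"} 6390.0
mndev_bytes{mn_agent="126.96.36.199",mn_id="7303fdb8-03ad-4da6-8e10-3274ead3c018"} 268380.0
```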
Now we’re ready to reconfigure Prometheus to pull metrics from our JMX Exporter and to run the aggregation rules specified in our file.
- If necessary, change the target address specified in prometheus.conf. Since Prometheus is running in a container, while the JMX Exporter runs on the host, Prometheus must send queries to the host’s address on the docker0 bridge (the container’s “NetworkMode” = “bridge”). On my host, ifconfig shows that docker0 has address 172.17.42.1, and that’s why the file prometheus.conf uses that address.
- docker run -d --name prom -p 9090:9090 -v /full/host/path/to/prometheus.conf:/prometheus.conf -v /full/host/path/to/prometheus.rules:/prometheus.rules prom/prometheus
- Reload Prometheus’ main page: http://<your-host-ip>:9090/.
- Verify that the “Targets” includes the JMX Exporter’s address. In my case it’s http://172.17.42.1:7201/metrics
- Verify that the section “Rules” contains the rules specified in the file prometheus.rules
```
mndev_bytes_1m = sum(delta(mndev_bytes[1m])) by (mn_id)
mndev_pkts_1m = sum(delta(mndev_pkts[1m])) by (mn_id)
mnport_tx_bytes_1m = sum(delta(mnport_tx_bytes[1m])) by (mn_id)
mnport_rx_bytes_1m = sum(delta(mnport_rx_bytes[1m])) by (mn_id)
mnport_tx_pkts_1m = sum(delta(mnport_tx_pkts[1m])) by (mn_id)
mnport_rx_pkts_1m = sum(delta(mnport_rx_pkts[1m])) by (mn_id)
mntun_bytes_1m = sum(delta(mntun_bytes[1m])) by (src)
mntun_pkts_1m = sum(delta(mntun_pkts[1m])) by (src)
```
Now you can go to the “Graph” section of Prometheus’ dashboard at http://184.108.40.206:9090/graph and graph the pre-aggregated metrics created by the rules. For example, the following identifies the metric containing the bytes arriving in a given minute at the device with UUID 7303fdb8…
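With the full device UUID from the mm-meter listing earlier, the query should look something like:

```
mndev_bytes_1m{mn_id="7303fdb8-03ad-4da6-8e10-3274ead3c018"}
```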