If you have not used Swarm before, skim the non-service-discovery tutorial to get a feel for how it works:
https://blog.vpetkov.net/2015/12/07/docker-swarm-tutorial-and-examples. It's very easy, and it should only take a couple of minutes.

Using Swarm with pre-generated static tokens is useful, but there are many benefits to using a service discovery backend. For example, you can utilize network overlays and have common "bridges" that span multiple hosts (https://docs.docker.com/engine/userguide/networking/get-started-overlay/). It also provides service registration and discovery for the Docker containers launched into the Swarm. Now let's get into how to use Swarm with service discovery, which is what you would use in a scaled-out/production environment.

Again, assuming you have a bunch of servers running docker:
vm01 (10.0.0.101), vm02 (10.0.0.102), vm03 (10.0.0.103), vm04 (10.0.0.104)

Normally, you would run "docker ps" on each host individually, for example:
ssh vm01 'docker ps'
ssh vm04 'docker ps'

If you enable the API for remote bind on each host, you can manage them all from a central place:
docker -H tcp://vm01:2375 ps
docker -H tcp://vm04:2375 ps
(note: the port can be omitted when using the default, 2375)

But if you want to use all of these docker engines as a cluster, you need Swarm.
Here we will go one step further and use a common service discovery backend (Consul).

Docker Swarm Tutorial with Consul and How-To/Examples

A swarm contains only two components: agents (the workers in the cluster) and manager(s).
We are also going to add consul (the service discovery backend).

First, grab the swarm and the consul images on each docker host:
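For example (the consul image name here is an assumption; at the time, progrium/consul was the common choice, though the official consul image also works), on each of vm01 through vm04:

docker pull swarm
docker pull progrium/consul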

Then, make sure the API is enabled for remote bind on each host (NOTE: see below if using a systemd-based OS):
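On a non-systemd host this usually means adding something like the following to DOCKER_OPTS in /etc/default/docker (a sketch; the exact file and mechanism vary by distro and Docker version), substituting the consul host's IP for consulIp and each host's own IP for thisHostIp, then restarting the docker service:

DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=consul://consulIp:8500 --cluster-advertise=thisHostIp:2375"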

Don’t panic here! It looks complicated, but it’s actually incredibly easy.

The consulIp in "--cluster-store=consul://consulIp:8500" is the docker host that will run the consul service (much like the swarm manager). Since you will map the port to the docker host itself, that's simply the IP of the docker host (in our case, vm01).

The address in "--cluster-advertise=hostIp:2375" is what each Docker engine advertises to the cluster store: that host's own IP and the port its remote API is listening on (so 10.0.0.101:2375 on vm01, 10.0.0.102:2375 on vm02, and so on).

To get everything started, go to whatever docker host you pick as the manager (in our case vm01), and create the consul server:
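A minimal consul server along these lines works (a sketch, assuming the progrium/consul image pulled above; adjust the image and arguments if you use the official consul image instead):

docker run -d -p 8500:8500 -h consul --name consul progrium/consul -server -bootstrap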

Now, on *each* AGENT (including the manager if you want to use it as a worker) run the following, substituting that agent's own IP for agentIp:
docker run -d swarm join --addr=agentIp:2375 consul://consulIp:8500/swarm

You would do this for *each* agent and in our case vm01 is also an agent.
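For example, driving everything from one place over the remote API we enabled earlier (a sketch; substitute your own IPs), the joins for our four hosts would look roughly like this:

docker -H tcp://vm01:2375 run -d swarm join --addr=10.0.0.101:2375 consul://10.0.0.101:8500/swarm
docker -H tcp://vm02:2375 run -d swarm join --addr=10.0.0.102:2375 consul://10.0.0.101:8500/swarm
docker -H tcp://vm03:2375 run -d swarm join --addr=10.0.0.103:2375 consul://10.0.0.101:8500/swarm
docker -H tcp://vm04:2375 run -d swarm join --addr=10.0.0.104:2375 consul://10.0.0.101:8500/swarm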

Finally, you need to run a manager service on your chosen manager host (in our case, vm01) to manage the swarm:
docker run -d -p 2376:2375 swarm manage consul://consulIp:8500/swarm

The idea is that the swarm manager serves its API on port 2375 inside the container, and we bind that to port 2376 on the docker host. If your manager is NOT an agent, you can simply bind it on 2375 by doing a "run -d -p 2375:2375 swarm manage consul://…". In that case, you would NOT run the "swarm join" command on your manager. However, in our case we want all of the hosts to be agents, including the manager.

The last step is to query the cluster:
docker -H tcp://managerIP:2376 info

In our case, we use vm01:
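docker -H tcp://vm01:2376 info

This should list all four hosts as nodes in the cluster.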

Again, if your manager is NOT an agent, you would simply run:
"docker -H tcp://managerIp:2375 info" or even "docker -H tcp://managerIp"
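Once the manager responds, you can also take advantage of the consul backend mentioned at the top and create an overlay network that spans the hosts. Roughly (a sketch, assuming the --cluster-store/--cluster-advertise flags from earlier are in place on every engine):

docker -H tcp://vm01:2376 network create -d overlay my_overlay

Containers started through the swarm manager with --net=my_overlay can then reach each other across hosts.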

Don't forget to start the manager again on reboot, and to re-run the join on each agent on reboot.
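One simple option (an assumption on my part, not the only way) is to let Docker restart the containers for you with a restart policy, for example on the manager:

docker run -d --restart=always -p 2376:2375 swarm manage consul://consulIp:8500/swarm

and the equivalent --restart=always on the consul container and on each agent's swarm join.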

9 Thoughts on "Docker Swarm Tutorial with Consul (Service Discovery) and Examples"

  1. ibolcina on June 8, 2016 at 4:07 am said:

    Hi. Thanks for the tutorial.

    I'm trying to ping machines by name.

    I create a network:

    docker -H tcp://10.0.1.101:2376 network create my_swarm_net

    then I start "u1" and "u2" containers:
    docker -H tcp://10.0.1.101:2376 run -ti --name u1 --net my_swarm_net ubuntu bash

    docker -H tcp://10.0.1.101:2376 run -ti --name u2 --net my_swarm_net ubuntu bash

    they get addresses like "10.0.0.2", "10.0.0.3".
    Host names are correctly resolved,

    but ping (from u2->u1) returns

    root@2ef627851fd7:/# ping u1
    PING u1 (10.0.0.2) 56(84) bytes of data.
    From 2ef627851fd7 (10.0.0.3) icmp_seq=1 Destination Host Unreachable
    From 2ef627851fd7 (10.0.0.3) icmp_seq=2 Destination Host Unreachable

  2. Ventz on June 9, 2016 at 4:00 am said:

    You would have to first expose the DNS ports in Consul (-p 8600:53/udp) and then point your docker container's DNS server to the consul server's IP (better yet, add "--dns-search service.consul" to your DOCKER_OPTS, since it seems Consul has recursion enabled and is using Google's servers).

    At this point each docker container you start will register its "--name" into consul. This will be accessible via:
    $name.service.consul. If you add that to the DNS search domain, you can simply use the $name at that point.

    I haven't tried this myself yet because for me Weave (mentioned earlier) is simply the perfect solution.
    You might need a service like "registrator" (another docker container), which monitors the unix socket and then registers container names/IPs/data into consul. Ex:
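    Roughly something like this (a sketch; gliderlabs/registrator is the usual image, pointed at the local docker socket and at your consul server):

    docker run -d --name=registrator --net=host -v /var/run/docker.sock:/tmp/docker.sock gliderlabs/registrator:latest consul://consulIp:8500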

    Give it a try and let me know.

    I still think using something like Weave is the way to go for the time being. It solves the cross-datacenter/cross-cloud problem, and it gives you a bunch of freebies in the process.

  3. ibolcina on June 9, 2016 at 3:36 pm said:

    I agree. Weave is great. I just hope it will be accepted by the community.

  4. Ventz on July 31, 2016 at 1:30 am said:

    Found something interesting which is relevant to what we were talking about:
    https://github.com/docker/docker/blob/master/docs/userguide/networking/work-with-networks.md

    Look at this section: “Network-scoped alias”

    This looks like the exact solution you want.

    The limitation (it doesn't say so, but I am assuming) is multiple hosts and containers across those. However, between the overlay networking and the new macvlan driver which came out in Docker 1.12, you can now fix that easily too: either an overlay on top of swarm over multiple hosts, or having multiple hosts just plug into the same vlan with macvlan and 802.1q.
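    For the macvlan/802.1q case, the rough shape of the command (a sketch; the subnet, gateway, and the eth0.10 sub-interface are placeholders for your own VLAN) is:

    docker network create -d macvlan --subnet=192.168.10.0/24 --gateway=192.168.10.1 -o parent=eth0.10 macvlan10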

  5. Bernardo Corrêa on November 20, 2016 at 10:50 pm said:

    Hi, thanks for the post.

    Maybe it is too late to ask, but running docker -H :2376 info shows detailed info about the swarm manager. I found the output strange because I noticed it has the following info:

    Swarm:
    NodeID:
    Is Manager: false
    Node Address:

    If I run docker -H :2376 node ls, I get: Error response from daemon: 404 page not found.

    Also, automatic load balancing does not work, and neither do regular swarm commands like node ls or service create/ls/tasks.

    What am I missing?

    Regards.

  6. This is for the old “swarm engine”. It was extremely confusing and I kept seeing people wanting “simple tutorials”.
    Now that Docker’s 1.12 “swarm mode” is out, you should use that. Here is their official tutorial: https://docs.docker.com/engine/swarm/swarm-tutorial/

  7. Bernardo Corrêa on November 21, 2016 at 4:13 pm said:

    Hi, thanks for the reply,

    I could not really see any difference from your post, except when using Docker Hub's token mode. The docs are not very clear on how to create the swarm using an external discovery backend like consul.

    Thanks anyway,

  8. The new one is completely different and incredibly simple.
    It has discovery (ex: consul) built in, and you no longer have to worry about scaling discovery services or having redundancy — it “just works”.
    Each “engine” can be a manager/master or a regular node. It can be promoted and demoted. It also comes with a bunch of other nice features like auto load balancing, services, etc.
    The only downside I've noticed so far is that it really doesn't make sense to run just 2 nodes as a manager and a non-manager. You really want an odd number of manager nodes (to maintain quorum), and the practical minimum is 3.
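    For reference, the basic flow in 1.12 swarm mode looks roughly like this (a sketch; IPs and names are placeholders):

    docker swarm init --advertise-addr 10.0.0.101
    docker swarm join --token <token-from-init> 10.0.0.101:2377
    docker node promote vm02
    docker node ls
    docker service create --name web --replicas 3 -p 80:80 nginx

    The join runs on each additional node; promote/demote turns a worker into a manager and back.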

    Here’s an article that compares the two types of docker swarm.
    https://www.infoq.com/news/2016/06/dockercon-docker-swarm

    Good luck.

  9. Bernardo Corrêa on November 21, 2016 at 8:30 pm said:

    Yes, sure, I got that running in no time, it is really easy. But I thought it wasn't ready for production use yet. I had a lot of problems with the old swarm and the overlay network. Lots of containers went stale and there was no way to disconnect them from the network, compose sometimes just wouldn't find the network, it was a real mess.

    I'll give it a shot with swarm mode and leave Consul just for general service discovery and health checks.

    Thanks again.
