Over the past few months, I’ve been working on Bee2. It’s a provisioning system for automating the process of building, running and maintaining my own websites and web applications. In previous tutorials, I went over provisioning servers using APIs, configuring those servers for remote Docker administration via a VPN, and automating Let’s Encrypt and HAProxy in containers. Bee2 has gotten mature enough that I’ve finally migrated many of my production sites and web applications to it, including this website. In this post, I’ll go over some of the additional challenges I faced with IPv6, as well as refactoring containers to allow live HAProxy reloads.

More IPv6 Woes

When enabling IPv6 in Docker, the Docker Engine requires an IPv6 subnet from which it assigns each container a public IPv6 address. Some providers give each server a fully routed /64 IPv6 network. Although Vultr does provide a /64 subnet to each server, these subnets are not fully routed [1]. In this situation, the default gateway sends a neighbor solicitation request for each incoming packet bound for that subnet, and the Docker host needs to run an NDP proxy daemon to automatically respond to those requests [2].

I created the following Ansible role to install and set up the ndppd daemon:

- name: Install ndppd
  apt: name=ndppd state=present

- name: Configure ndppd
  template: src=ndppd.conf.j2 dest=/etc/ndppd.conf owner=root group=root mode=0644

- name: Enable proxy_ndp for public adapter
  sysctl: name=net.ipv6.conf.ens3.proxy_ndp value=1 state=present

- name: Restart ndppd Service
  service: name=ndppd state=restarted

The Ansible template for the ndppd.conf configuration file is fairly straightforward as well, using the same /80 subnet we previously assigned to the Docker daemon:

proxy ens3 {
  timeout 500
  ttl 30000
  {# the subnet variable name below is illustrative #}
  rule {{ ipv6_subnet }}/80 {
    static
  }
}
With this configuration, my containers could receive incoming requests on exposed Docker ports, and they could send/receive ICMP requests over IPv6 (ping6) to and from the outside world. However, they could not establish TCP connections over IPv6 to any public address. This particular issue stumped me for a while. I eventually discovered that my ufw firewall’s default forward policy needed to be set to ACCEPT in order for IPv6 TCP connections to be established in both directions. I added the following task to my firewall role in Ansible to set the default forward policy:

- name: set UFW default forward policy to ACCEPT
  lineinfile:
    dest: /etc/default/ufw
    regexp: "^DEFAULT_FORWARD_POLICY\\="
    line: "DEFAULT_FORWARD_POLICY=\"ACCEPT\""
This brings us to the last IPv6 issue I encountered, the one that required the most refactoring of Bee2. HAProxy has configuration options to inject an X-Forwarded-For header for backend servers, indicating the real IP address a web browser is connecting from. For IPv4 connections, this header was populated with a real public IP address from the Internet. For IPv6 connections, however, it contained a 172.17.x.x address from the Docker bridge adapter. It turns out that for IPv4, exposed ports are mapped to the listening container and translated in such a way that the container does see the actual public IP of whatever is connecting to it. But even though containers are given actual public IPv6 addresses, incoming requests to the host’s IPv6 address pass through a translation layer that converts them to IPv4, discarding the original source address!

The end result is that all IPv6 connections to my websites appear to come from a single private Docker IPv4 address, making any meaningful log analysis impossible. I could have used the public IPv6 address of the container itself, opening firewall rules and adjusting DNS records as appropriate, but my original implementation used Docker’s default bridge for networking, and it’s impossible to assign a container a static IPv6 address on the default bridge.
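To illustrate why this matters, here is a minimal sketch (my own illustration, not Bee2 or HAProxy code) of how a backend typically recovers the client address from X-Forwarded-For. With the bridge translation in play, the header itself only ever carried a private Docker address for IPv6 clients:

```ruby
# Prefer the X-Forwarded-For header injected by HAProxy; fall back to the peer address.
def client_ip(headers, peer_addr)
  xff = headers['X-Forwarded-For']
  xff.nil? ? peer_addr : xff.split(',').first.strip
end

# IPv4 client: HAProxy saw the real public address
puts client_ip({ 'X-Forwarded-For' => '203.0.113.7' }, '172.17.0.5')  # 203.0.113.7

# IPv6 client before the fix: HAProxy itself only saw the bridge address
puts client_ip({ 'X-Forwarded-For' => '172.17.0.1' }, '172.17.0.5')   # 172.17.0.1
```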

The default bridge network, as well as container linking, are not recommended for production. I had avoided user defined networks in the past, but with this limitation I finally had to bite the bullet and set up my networking correctly. First, I split my /80 subnet into two /96 ranges: one for the default bridge network (which goes unused, but must exist because it cannot be deleted in Docker), and the other for my user defined network. I added a section in settings for three additional Docker/IPv6 settings: a suffix for the bridge network, a suffix for the user defined network, and the trailing part of the static IP to use for the HAProxy/web load balancer.

    plan: 202 # 2048 MB RAM,40 GB SSD,2.00 TB BW
    os: 241 # Ubuntu 17.04 x64
    ipv6:
      docker:
        suffix_bridge: 1:0:0/96
        suffix_net: 2:0:0/96
        static_web: 2:0:a
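To make the suffix scheme concrete, here’s a small standalone Ruby sketch, using a made-up documentation prefix rather than my real subnet, of how the /80 prefix and these suffixes combine:

```ruby
require 'ipaddr'

# Hypothetical /80 prefix, in the style stored in the Bee2 state file
subnet = '2001:db8:0:1:100:'

bridge_net = IPAddr.new(subnet + '1:0:0/96')  # default bridge network (unused)
user_net   = IPAddr.new(subnet + '2:0:0/96')  # user defined network
static_web = IPAddr.new(subnet + '2:0:a')     # HAProxy's static address

# The static web address falls inside the user defined /96, not the bridge /96
puts user_net.include?(static_web)    # true
puts bridge_net.include?(static_web)  # false
```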

The Vultr provisioner uses these settings to establish a static web IPv6 address which is maintained in the Bee2 state file:

def web_ipv6
  # servers with IPv6 settings (selection condition reconstructed)
  @config['servers'].select { |name, s| s.has_key?('ipv6') }.each { |name, cfg|
    if not @state['servers'][name]['ipv6'].has_key?('static_web')
      ipv6 = @state['servers'][name]['ipv6']['subnet'] + cfg['ipv6']['docker']['static_web']
      @log.info("Creating IPv6 Web IP #{ipv6} for #{name}")
      @state['servers'][name]['ipv6']['static_web'] = ipv6
    end
  }
end

I adjusted Bee2 to check if a user defined network exists and, if not, create it using the second /96 defined by suffix_net, before running any additional Docker commands.

def establish_network(server)
  ipv6_subnet = @state['servers'][server]['ipv6']['subnet']
  ipv6_suffix = @config['servers'][server]['ipv6']['docker']['suffix_net']
  ipv6 = "#{ipv6_subnet}#{ipv6_suffix}"

  if Docker::Network.all.select { |n| n.info['Name'] == @network }.empty?
    @log.info("Creating network #{@network} with IPv6 Subnet #{ipv6}")
    Docker::Network.create(@network, {"EnableIPv6" => true,
      "IPAM" => {"Config" => [
        {"Subnet" => ipv6}
      ]}})
  end
end

Previously, the following code in the create_containers function in docker.rb was used to link containers defined by the links in the configuration file:

'HostConfig' => {
  'Links' => (link.map { |l| "#{@prefix}-#{cprefix}-#{l}" } if not link.nil?),
  'Binds' => (volumes if not volumes.nil?),
  'PortBindings' => (ports.map { |port| {
    "#{port}/tcp" => [{ 'HostPort' => "#{port}" }]}
  }.inject(:merge) if not ports.nil?)
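To make the PortBindings expression concrete, here’s a standalone sketch of what it produces for a container exposing ports 80 and 443:

```ruby
ports = ['80', '443']

# Each port maps to a single-entry hash; inject(:merge) folds them into one
bindings = ports.map { |port|
  { "#{port}/tcp" => [{ 'HostPort' => "#{port}" }] }
}.inject(:merge)

puts bindings.keys.join(', ')  # 80/tcp, 443/tcp
```

Each exposed container port ends up published on the same host port number.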

Linking containers in Docker has been deprecated, so I removed the configuration option for defining links and instead connected each container to the user defined network mentioned previously. The following code also allows for setting a static IPv6 address for a container if it has static_ipv6 defined:

'NetworkingConfig' => {
  'EndpointsConfig' => {
    @network => {
      'IPAMConfig' => {'IPv6Address' => static_ipv6}.reject { |k, v| v.nil? }
    }
  }
},
'ExposedPorts' => (ports.map { |port| {"#{port}/tcp" => {}} }.inject(:merge) if not ports.nil?),
'HostConfig' => {
  'Binds' => (volumes if not volumes.nil?),
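The reject on IPAMConfig is worth a note: it drops the IPv6Address key entirely when no static address is configured, so Docker auto-assigns one from the network’s subnet. A tiny standalone sketch:

```ruby
def ipam_config(static_ipv6)
  # reject drops the key entirely when no static address is configured
  { 'IPv6Address' => static_ipv6 }.reject { |k, v| v.nil? }
end

puts ipam_config('2001:db8::2:0:a').key?('IPv6Address')  # true
puts ipam_config(nil).empty?                             # true
```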

Changes were also made in vultr.rb to use the static web address for all AAAA (IPv6) DNS records, and in the Ansible firewall rules to allow incoming requests on ports 80 and 443 to that IPv6 address. This completes the loop, allowing both native IPv6 and translated IPv4 connections to reach the HAProxy container, which passes the original client IP on to the backend web services. This is necessary for tools such as the log analyzer AWStats.
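For illustration, the firewall side can look something like the following Ansible task; the variable name and exact rule shape here are illustrative, not Bee2’s actual role:

```yaml
- name: Allow web traffic to the HAProxy static IPv6 address
  ufw: rule=allow to_ip={{ static_web_ipv6 }} port={{ item }} proto=tcp
  with_items:
    - "80"
    - "443"
```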

HAProxy Refreshing and Job Containers

Previously, I had extended the official HAProxy container with a script that generated the HAProxy configuration file. With that setup, I had to rebuild the HAProxy container for any configuration change. The point of HAProxy is to be highly available, hence its name. So instead I created the concept of job containers and moved the HAProxy configuration generation into a job. The directory containing the configuration file is shared between the hasetup job container and the haproxy application container. The Docker socket is also shared with the job container, so hasetup can send a kill/reload signal to HAProxy to force it to re-read the updated configuration file.

jobs:
  hasetup:
    server: web1
    build_dir: HAProxySetup
    env:
      haproxy_container: $haproxy
      certbot_container: $certbot
      awstats_container: $awstats
      domains: all
    volumes:
      - letsencrypt:/etc/letsencrypt:rw
      - haproxycfg:/etc/haproxy:rw
      - /var/run/docker.sock:/var/run/docker.sock

Job containers can be used for several other tasks, such as regenerating static websites or configuring database containers:

jobs:
  # job names here are illustrative
  webbuild:
    server: web1
    volumes:
      - rvm-web:/www/build:rw
  dbsetup:
    server: web1
    build_dir: DBSetup
    env:
      database_json: _dbmap
      mysql_host: $mysql
      postgres_host: $postgres

Job containers can be run using the run subcommand on a Docker server like so:

./bee2 -c conf/settings.yml -d web1:run:hasetup

The container is not deleted once the task has completed, and can be run again with docker start. However, running the job via ./bee2 will cause the container to be deleted and recreated before it is run. So long as a job container has been run once and exists in a stopped state on the Docker host, it can be scheduled to run at regular intervals using the JobScheduler container. The + symbol references a job container, and scheduling uses cron syntax.

jobs:
  jobscheduler:
    server: melissa
    build_dir: JobScheduler
    env:
      run_logrotate: +logrotate
      when_logrotate: 1 2 */2 * *
      run_awstats: +awstats-generate
      when_awstats: 10 */12 * * *
      run_mastodoncleaner: +mastodon-remote-media-cleanup
      when_mastodoncleaner: 5 3 * * *
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
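As a sketch of how such paired variables can be interpreted (my illustration, not the JobScheduler’s actual code), each run_*/when_* pair maps a job container to a cron schedule:

```ruby
env = {
  'run_logrotate' => '+logrotate',
  'when_logrotate' => '1 2 */2 * *',
  'run_awstats' => '+awstats-generate',
  'when_awstats' => '10 */12 * * *'
}

# Pair each run_<job> entry (stripping the + container marker) with its when_<job> schedule
schedule = env.keys.select { |k| k.start_with?('run_') }.map { |k|
  job = k.sub(/\Arun_/, '')
  [env[k].sub(/\A\+/, ''), env["when_#{job}"]]
}.to_h

schedule.each { |container, cron| puts "#{container}: #{cron}" }
```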

Closing Remarks

Dealing with IPv6 has certainly been one of the more challenging aspects of working with Docker, Bee2 and containers. It wasn’t until I finished the implementation described here that I discovered an IPv6 NAT daemon for Docker, which would have given me the same flexibility with IPv6 that I had with IPv4. I am glad I implemented clean configuration generation and reloads for HAProxy as well. The next installment in this series will cover automated database setup and configuration for some basic Docker applications.

  1. Tianon Gravi. “Docker on VULTR + IPv6.” Tianon’s Ramblings. 3 March 2016.

  2. “ndppd on Vultr to enable fully routed /64 for IPv6.” IOPSL’s. 12 September 2014.