Docker – the Dockerfile

In the last post I talked about Docker very briefly and set up a series of containers to play around with it. In this post we will take a look at the Dockerfile and Docker Hub, set up RabbitMQ (an AMQP message broker) with a Dockerfile, and push the resulting image to a public repository for the world to use.


Docker can act as a builder and read instructions from a plain-text file called a Dockerfile to automate the steps you would otherwise take manually to create an image. Basically, we will create a Dockerfile and execute docker build to create an image, from which we will then create a container. The Dockerfile will look like this:

FROM ubuntu:trusty

MAINTAINER Marc Lopez Rubio <>

RUN apt-get -qq update && apt-get install -qq -y rabbitmq-server

RUN rabbitmq-plugins enable rabbitmq_management

ADD rabbitmq.config /etc/rabbitmq/rabbitmq.config

EXPOSE 5672 15672

ENTRYPOINT /usr/sbin/rabbitmq-server

I want to highlight that its usage is very similar to what you would execute on the shell, with an INSTRUCTION in front of each command. There are various instructions we can use in a Dockerfile:

  • FROM: Tells Docker which base image to use; in this case we used the Ubuntu trusty image (14.04), in IMAGE:TAG format
  • MAINTAINER: Sets the author of the Dockerfile
  • RUN: Runs commands with the Linux shell (/bin/sh -c)
  • ADD: Adds a file from a URL or path to the specified path inside the image
  • EXPOSE: Declares the ports the container listens on; they still have to be published to the host with -p (or -P) at run time
  • ENTRYPOINT: Makes the container behave as an executable; there can only be one effective ENTRYPOINT in a Dockerfile, and if there is more than one, only the last one is used
  • ENV: Sets environment variables
  • USER: Sets the user that subsequent RUN instructions and the ENTRYPOINT run as
  • WORKDIR: Sets the working directory for the RUN, ENTRYPOINT and CMD instructions
  • #: Marks a comment, just as in Bash scripts
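Our RabbitMQ Dockerfile doesn't use ENV, USER or WORKDIR, so here is a minimal sketch of how those three fit together. The directory, variable and user here are hypothetical, purely for illustration:

```dockerfile
FROM ubuntu:trusty

# ENV makes the variable available to every later instruction
# and to the running container.
ENV APP_HOME /opt/app

# WORKDIR is the directory that subsequent RUN/CMD/ENTRYPOINT
# instructions execute in (it is created if it does not exist).
WORKDIR /opt/app

# USER switches the user for the instructions that follow.
USER nobody

# This RUN executes as "nobody", inside /opt/app.
RUN echo "building in $APP_HOME as $(whoami)"
```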

Hands on

First you'll have to create a Docker Hub account (if you haven't already). Then we will create a folder to hold our Docker projects (in my case /home/marc/docker/rabbitmq), create a file called “Dockerfile”, and then create the RabbitMQ configuration file:

vi Dockerfile
vi rabbitmq.config

The Dockerfile will contain the exact instructions shown at the beginning of this post.

The rabbitmq.config will contain:

[{rabbit, [
  {default_user, <<"admin">>},
  {default_pass, <<"adminpass">>}
]}].

Now we have our working directory with all the necessary elements to work with. The above Dockerfile will:

  • Use Ubuntu 14.04 LTS (trusty) as the base image
  • Install the RabbitMQ server
  • Enable the management UI
  • Copy our RabbitMQ configuration
  • Expose ports 5672 (AMQP) and 15672 (management UI)
  • Set the container ENTRYPOINT to the rabbitmq-server executable, which will start the RabbitMQ server.

Now we build the image, run it and test it 🙂

marc@ubuntu-docker:~/docker/rabbitmq$ docker build -t marclop/rabbitmq .
Sending build context to Docker daemon 3.584 kB
Sending build context to Docker daemon

Removing intermediate container ea2fee4f0701
Successfully built aa7ad24e132c
marc@ubuntu-docker:~/docker/rabbitmq$ docker images
REPOSITORY               TAG                 IMAGE ID                 CREATED                 VIRTUAL SIZE
marclop/rabbitmq    latest              aa7ad24e132c        57 seconds ago      318.3 MB

marc@ubuntu-docker:~/docker/rabbitmq$ docker run -d -p 5672:5672 -p 15672:15672 marclop/rabbitmq
marc@ubuntu-docker:~/docker/rabbitmq$ docker ps
CONTAINER ID        IMAGE                     COMMAND                CREATED             STATUS              PORTS                                            NAMES
b947c82ac58f        marclop/rabbitmq:latest   /bin/sh -c /usr/sbin   2 minutes ago       Up 2 minutes>5672/tcp,>15672/tcp   compassionate_goodall

We used the -d flag with the docker run command to tell Docker to run this container in the background (detached). We will check the logs to see if everything went well.

marc@ubuntu-docker:~/docker/rabbitmq$ docker logs b947c82ac58f

              RabbitMQ 3.2.4. Copyright (C) 2007-2013 GoPivotal, Inc.
  ##  ##      Licensed under the MPL.  See
  ##  ##
  ##########  Logs: /var/log/rabbitmq/rabbit@b947c82ac58f.log
  ######  ##        /var/log/rabbitmq/rabbit@b947c82ac58f-sasl.log
  ##########
              Starting broker... completed with 6 plugins.

Seems to be working all right. Let's point a browser at the web UI to confirm that it is in fact running.

[Image: the RabbitMQ management web UI]


That did work well. Now let's publish our shiny RabbitMQ Ubuntu image to the Docker Hub for everyone to use.

Publishing the image

First we’ll have to log in to our previously created Docker Hub account with the docker login command and then push the image to our repo.

marc@ubuntu-docker:~/docker/rabbitmq$ docker login
[sudo] password for marc:
Username (marclop):
Login Succeeded
marc@ubuntu-docker:~/docker/rabbitmq$ docker push marclop/rabbitmq
The push refers to a repository [marclop/rabbitmq] (len: 1)
Sending image list
Pushing repository marclop/rabbitmq (1 tags)
Pushing tag for rev [aa7ad24e132c] on {}

That's it! We have our image in a public repository, available to the entire world. You can also write a description for the repository, link an existing GitHub repository to a Docker Hub one, and create automated triggers that rebuild the Docker image whenever a commit is pushed to the GitHub repository, which is much cooler.

Some more bits and pieces

I wanted to include some more nice stuff about Docker. Here's the output of the process hierarchy:

marc@ubuntu-docker:~$ ps fax

12047 ?        Ssl    2:37 /usr/bin/docker -d
16003 ?        Ss     0:00  \_ /bin/sh -c /usr/sbin/rabbitmq-server
16019 ?        S      0:00      \_ /bin/sh /usr/sbin/rabbitmq-server
16027 ?        S      0:00      |   \_ su rabbitmq -s /bin/sh -c /usr/lib/rabbitmq/bin/rabbitmq-server
16028 ?        Ss     0:00      |       \_ sh -c /usr/lib/rabbitmq/bin/rabbitmq-server
16029 ?        Sl     0:05      |           \_ /usr/lib/erlang/erts-5.10.4/bin/beam -W w -K true -A30 -P 1048576 -- -root /usr/lib/erlang -progname erl -- -home /var/lib/rabbitmq -- -pa /usr/lib/rabbitmq/lib/rab
16101 ?        Ss     0:00      |               \_ inet_gethost 4
16113 ?        S      0:00      |                   \_ inet_gethost 4
16045 ?        S      0:00      \_ /usr/lib/erlang/erts-5.10.4/bin/epmd -daemon
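Notice the /bin/sh -c wrapper around rabbitmq-server in the tree above: that extra shell process comes from the shell form of ENTRYPOINT we used in the Dockerfile. The exec form (a JSON array) launches the binary directly, without the wrapper:

```dockerfile
# Exec form of ENTRYPOINT: no intermediate /bin/sh -c process
# in the container's process tree.
ENTRYPOINT ["/usr/sbin/rabbitmq-server"]
```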

Also, because I am very curious, I wanted to see how long it takes to spin up a new container from our image:

marc@ubuntu-docker:~$ time docker run -d -p 5672:5672 -p 15672:15672 marclop/rabbitmq

real 0.063s

It took just 0.06s! There is no way you can match that with a VM 🙂 Docker is blazing fast.


I hope you liked this post; keep reading our blog, it is much appreciated.


Docker Series Index:

  1. Docker Introduction
  2. Docker – the Dockerfile

Introduction to Docker

In recent days the Docker project has been gaining momentum, and at the time I'm writing this article its popularity seems to be growing exponentially. In this article I'll try to be as brief as possible without leaving out any important details about Docker and the Docker project.

The Docker project is based on the idea that shipping code and applications to servers shouldn't be hard; it should be easier than it is today. Usually you have many environments, they aren't homogeneous, and that can create a lot of problems. Instead, the whole idea is that everything is carried inside containers, so the environment doesn't matter because everything is self-contained (OS, app, software versions, dependencies, everything).


The Docker guys always show us a “matrix of hell” in which none of the products are carried in containers, so every application has to be made to work in every environment separately.

So Docker is used to standardize the shipping of your app and code to the end servers.

Docker provides a set of tools (Docker Engine and Docker Hub) to create and manage LXC (Linux Containers). Linux containers have been used for years and work very well; the only “problem” with LXC is that it is not very administrator friendly, so Docker gives us a nicer interface and API to interact with it. Docker is not reinventing the wheel or creating any revolutionary technology; what the Docker guys did is give us LXC in a nicer package, with a very nice community behind it 🙂. Enough chattering, let's cut to the chase.

Docker is a lightweight and powerful open-source container virtualization technology based on LXC, combined with a workflow for building and containerizing your applications. Linux containers have been around for some years; the whole idea is to create an environment as close as possible to a standard Linux installation, but without the need for a separate kernel. LXC is often considered something in between a chroot on steroids and a full-fledged virtual machine. But enough about LXC, this post is here to talk about Docker.

Docker is made up of:

  • Docker Engine (the actual software)
  • Docker Hub (the community and image index SaaS platform used to create and share containers, which is very cool).

I'll quote some bits and pieces from here, honestly because the Docker guys did a very good job on their documentation, so I don't see the point in re-writing what's already written (see what I did there? 🙂).

Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. Both the Docker client and the daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and service communicate via sockets or through a RESTful API.

[Image: Docker architecture diagram]

So, after looking at the diagram, we can see that the host may just run a Linux OS with the Docker daemon and a set of containers. The Docker daemon doesn't do anything by itself; it needs a Docker client sending operations to it. The daemon pulls existing images from the Docker index, so you can re-use what's already made and go straight to the problem you're trying to solve, instead of spending hours installing an OS and then applications inside the OS. Also, Docker is FAST: it spins up containers almost instantly (once you have built them, of course).
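As an illustration of that client/daemon split, you can talk to the daemon's REST API directly over its Unix socket, with no docker client at all. This is a sketch assuming a reasonably modern curl (7.40+, which added --unix-socket) and the default socket path:

```shell
# Query the daemon's /version endpoint over its local Unix socket.
# Falls back to a message when no Docker daemon socket is present.
if [ -S /var/run/docker.sock ]; then
  curl -s --unix-socket /var/run/docker.sock http://localhost/version
else
  echo "no Docker daemon socket at /var/run/docker.sock"
fi
```

The JSON it returns is the same information docker version shows, which makes the point nicely: the CLI is just one client of the API.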

Inside Docker there are some parts we need to know before diving in:

  1. Docker images
  2. Docker registries
  3. Docker containers

A Docker image is template-like, in the sense that we only use it as a read-only reference for building our containers. Images are used to create Docker containers; for example, we can use an existing Ubuntu image which contains the Ubuntu OS and an Apache web server inside. Docker provides a very easy CLI for creating (build), downloading (pull) and uploading (push) images to Docker registries.

Registries hold images and can be public or private. The public Docker registry is called Docker Hub, and it already contains a HUGE amount of images created by contributors, free for our use (isn't that cool?).

Last but not least, Docker containers are similar to VMs in the sense that they run isolated from the host, and they are controlled by the Docker daemon. Containers are created from images, and images are uploaded to or downloaded from a registry, either public or private. That's it.

How does Docker actually work?

The basic workflow is: build an image which holds your application, create a container from that image, and publish the image to the Docker Hub or to your own private registry.

If we look at the design:

[Image: containers vs. virtual machines design diagram]

So if we run 4 Ubuntu containers which contain similar libraries and binaries, Docker will do a better job than a hypervisor, because it will share the common libs/bins and use fewer resources.

I won't dig any deeper into the Docker design, because I don't see any point in doing that at this stage; there's A LOT of information about Docker and LXC on the internet, and I am not an expert in either of them.

OK, how do I install it?

Installing Docker is very easy. I wanted the latest version, so I chose this method:

marc@ubuntu-docker:~$ curl -s | sudo sh

which will install Docker on Ubuntu 14.04 LTS:

marc@ubuntu-docker:~$ sudo docker version
[sudo] password for marc:
Client version: 1.0.1
Client API version: 1.12
Go version (client): go1.2.1
Git commit (client): 990021a
Server version: 1.0.1
Server API version: 1.12
Go version (server): go1.2.1
Git commit (server): 990021a

We’ll run an existing Ubuntu container:

marc@ubuntu-docker:~$ sudo docker run -i -t ubuntu:latest /bin/bash
root@c79dd9737655:/# ps
  PID TTY          TIME CMD
    1 ?        00:00:00 bash
   10 ?        00:00:00 ps

As we can see there are just 2 processes running, bash and ps. We're using the run command to tell the Docker daemon:

  • to run the latest ubuntu image
  • to run the bash command
  • -i for interactive mode
  • -t to allocate a pseudo-TTY.

If we open another terminal and type: sudo docker ps:

marc@ubuntu-docker:~$ sudo docker ps
CONTAINER ID         IMAGE                  COMMAND      CREATED                STATUS                PORTS      NAMES
f96145b294f1           ubuntu:14.04     /bin/bash       12 minutes ago      Up 12 minutes                       pensive_pike

we see the container with its properties. If I remove the /var directory INSIDE the container and run sudo docker diff f96145b294f1:

marc@ubuntu-docker:~$ sudo docker diff f96145b294f1
D /var

We can see the differences from the base image; in this case the /var directory has been deleted. Let's say we want to commit the changes we made in our running container, list the images, and create a new container from the image we created:

marc@ubuntu-docker:~$ sudo docker commit f96145b294f1 marc/varless-ubuntu

marc@ubuntu-docker:~$ sudo docker images
REPOSITORY                    TAG      IMAGE ID              CREATED                        VIRTUAL SIZE
marc/varless-ubuntu    latest    2663d7b29f53   About a minute ago    275.5 MB

marc@ubuntu-docker:~$ sudo docker run -i -t marc/varless-ubuntu bash


Since specifying sudo is quite frankly a pain in the *ss, let's add an alias to our bash profile 🙂

echo "alias docker='sudo docker'" >> .bashrc

source .bashrc

marc@ubuntu-docker:~$ docker
[sudo] password for marc:
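As a side note, the plain echo above appends a new copy of the alias every time you run it. A slightly safer, idempotent variant of the same idea, guarded with grep (same alias, nothing else changed):

```shell
# Append the alias to ~/.bashrc only if it is not already there.
grep -qxF "alias docker='sudo docker'" "$HOME/.bashrc" 2>/dev/null || \
  echo "alias docker='sudo docker'" >> "$HOME/.bashrc"
# Then reload with "source ~/.bashrc" or open a new shell.
```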


Now that we have everything in place, let's create an Ubuntu container with netcat listening on port 8080:

marc@ubuntu-docker:~$ docker run -p :8080 -i -t ubuntu bash
root@e0b691d06473:/# apt-get install netcat -y

Setting up netcat (1.10-40) …
root@e0b691d06473:/# netcat -l 8080


marc@ubuntu-docker:~$ docker port e0b691d06473 8080
marc@ubuntu-docker:~$ curl localhost:49153


root@e0b691d06473:/# netcat -l 8080
GET / HTTP/1.1
User-Agent: curl/7.35.0
Host: localhost:49153
Accept: */*


We receive the curl request in the netcat listening process. If we had created the container with the -p option as 8080:8080, the container's port 8080 would have been mapped to host port 8080 instead of a random port. We'll also use the -rm option (--rm nowadays) to automatically delete the container when it exits.

marc@ubuntu-docker:~$ docker run -rm -p 8080:8080 -i -t ubuntu bash
Warning: '-rm' is deprecated, it will be replaced by '--rm' soon. See usage.

marc@ubuntu-docker:~$ docker port 06b064df2b6b 8080

To see the stopped containers and their exit status we can use docker ps -a, and since we'll end up with a lot of them and we did not do anything of relevance, let's remove them all:

marc@ubuntu-docker:~$ docker rm $(docker ps -aq)


I'll do a series of posts on Docker, honestly because it's very cool and there is a lot to play around with 🙂. In the next post I'll introduce some more concepts, as well as examples of how to use Dockerfiles.

You can read more about it on


Docker Series Index:

  1. Docker Introduction
  2. Docker – the Dockerfile