Introduction to Docker

Lately the Docker project has been gaining momentum, and at the time I’m writing this article its popularity seems to be growing exponentially. In this article I’ll try to be as brief as possible without leaving out any important details about Docker and the Docker project.

The Docker project is based on the idea that shipping code and applications to servers shouldn’t be as hard as it is today. It usually is hard because you have so many environments, and they usually aren’t homogeneous, which can create a lot of problems. Docker’s idea instead is that everything is shipped inside containers, so the environment doesn’t matter because everything is self-contained (OS, app, software versions, dependencies, everything).

 

The Docker guys always show us a “matrix of hell” diagram, in which all the products are not carried inside containers.

So Docker is used to standardize the shipping of your app and code to the end servers.

Docker provides a set of tools (Docker Engine and Docker Hub) to create and manage LXC (Linux Containers). LXC has been used for years in Linux and works very well; the only “problem” with LXC is that it is not very “administrator friendly”, so Docker gives us a nicer interface and API to interact with it. Docker is not reinventing the wheel or creating any revolutionary technology; what the Docker guys did is give us LXC in a nicer package with a very nice community behind it 🙂. Enough chattering, let’s cut to the chase.

Docker is a lightweight and powerful open source container virtualization technology, combined with a workflow for building and containerizing your applications, based on LXC. Linux containers have been around for some years; the whole idea of Linux containers is to create an environment as close as possible to a standard Linux installation but without the need for a separate kernel. LXC is often considered something in between a chroot on steroids and a full-fledged virtual machine. But enough about LXC, this post is about Docker.

Docker is made from:

  • Docker Engine (The actual software)
  • Docker Hub (The community and index SaaS platform to create and share containers which is very very cool).

I’ll quote some bits and pieces from here, honestly because the Docker guys did a very good job on their documentation, so I don’t see the point in re-writing what’s already written (See what I did there? 🙂 ).

Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. Both the Docker client and the daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and service communicate via sockets or through a RESTful API.
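By default the Docker client talks to the local daemon through the Unix socket /var/run/docker.sock. As a rough illustration of the client-server split (the IP and port below are just placeholders, and the daemon would have to be started listening on that TCP socket), the same client can also target a remote daemon with the -H flag:

marc@ubuntu-docker:~$ sudo docker -H tcp://192.168.1.10:2375 ps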

Docker Architecture Diagram

So, after looking at the diagram, we can see that the host may just run a Linux OS with the Docker daemon and a set of containers. The Docker daemon doesn’t do anything by itself; it needs a Docker client sending operations to it. The Docker daemon pulls existing images from the Docker index so you can re-use what’s already made and go straight to the problem you’re trying to solve, instead of spending hours and hours installing the OS, applications inside the OS, etc. Also, Docker is FAST: it spins up containers almost instantly (once you have built them, of course).
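If you want an informal feel for that speed once Docker is installed (the install is covered below), you can simply time a throwaway container; the numbers will vary per machine, so I’m not quoting any here:

marc@ubuntu-docker:~$ time sudo docker run ubuntu /bin/echo hello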

Inside Docker there are some parts we need to know before diving in:

  1. Docker images
  2. Docker registries
  3. Docker containers

A Docker image is like a template, in the sense that we only use it as a read-only reference for building our containers. Images are used to create Docker containers; for example, we can use an existing Docker image of Ubuntu which contains the Ubuntu OS with an Apache web server inside. Docker provides a very easy CLI for creating (BUILD), downloading (PULL), uploading (PUSH)… images to Docker registries.

These registries hold images and can be public or private. The public Docker registry is called Docker Hub, and it already contains a HUGE amount of images created by contributors, free for our use (isn’t that cool?).

Last but not least, Docker containers are similar to a VM in the sense that they run isolated from the host and are controlled by the Docker daemon. Docker containers are created from images, and images are uploaded to or downloaded from a Docker registry, either public or private. That’s it.
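For instance, searching Docker Hub for Ubuntu images and pulling one down locally looks roughly like this (the exact results depend on what contributors have published at the time you run it):

marc@ubuntu-docker:~$ sudo docker search ubuntu
marc@ubuntu-docker:~$ sudo docker pull ubuntu:14.04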

How does Docker actually work?

The basic workflow would be: build an image which holds your application, create a container from that image, and publish it to the Docker Hub or your own private registry.
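In command form, and leaving the Dockerfile details for the next post, that workflow is roughly the following sketch (marc/myapp is just a placeholder repository name, and pushing to the Docker Hub requires a docker login first):

marc@ubuntu-docker:~$ sudo docker build -t marc/myapp .
marc@ubuntu-docker:~$ sudo docker run -i -t marc/myapp
marc@ubuntu-docker:~$ sudo docker push marc/myapp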

If we look at the design:

So if we run 4 Ubuntu containers which contain similar libraries and binaries, Docker will do a better job because it will share the common libs/bins and use fewer resources.
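You can get a feel for this sharing by listing the layers of the image those containers were created from; every container started from the same image reuses these read-only layers instead of copying them:

marc@ubuntu-docker:~$ sudo docker history ubuntu:14.04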

I won’t dig any deeper into the Docker design, because I don’t see any point in doing that at this stage, there’s A LOT of information about Docker and LXC on the internet, and I am not an expert in either of those.

OK, how do I install it?

Installing Docker is very, very easy. I wanted to install the latest version, so I chose the install.sh method:

marc@ubuntu-docker:~$ curl -s https://get.docker.io/ubuntu/ | sudo sh

which will install Docker on Ubuntu 14.04 LTS:

marc@ubuntu-docker:~$ sudo docker version
[sudo] password for marc:
Client version: 1.0.1
Client API version: 1.12
Go version (client): go1.2.1
Git commit (client): 990021a
Server version: 1.0.1
Server API version: 1.12
Go version (server): go1.2.1
Git commit (server): 990021a

We’ll run an existing Ubuntu container:

marc@ubuntu-docker:~$ sudo docker run -i -t ubuntu:latest /bin/bash
root@c79dd9737655:/# ps
PID TTY TIME CMD
1 ? 00:00:00 bash
10 ? 00:00:00 ps

As we can see there are just 2 processes running, bash and ps. We’re using the run command to tell the Docker daemon:

  • to run the latest ubuntu image
  • to run the bash command inside it
  • -i to keep the session interactive (STDIN open)
  • -t to allocate a tty-like terminal

If we open another terminal and type: sudo docker ps:

marc@ubuntu-docker:~$ sudo docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
f96145b294f1        ubuntu:14.04        /bin/bash           12 minutes ago      Up 12 minutes                           pensive_pike

we see the container with its properties. If I remove the /var directory INSIDE the container and run sudo docker diff f96145b294f1:

marc@ubuntu-docker:~$ sudo docker diff f96145b294f1
D /var

We can see the differences from the base image; in this case, the /var directory has been deleted. Let’s say we want to commit the changes we made in our running container, list the images, and create a new container from the image we created:

marc@ubuntu-docker:~$ sudo docker commit f96145b294f1 marc/varless-ubuntu
2663d7b29f530ae6f69fca7e96b7503bbd3413a39ff568fbb7db0ce3c773bb74

marc@ubuntu-docker:~$ sudo docker images
REPOSITORY             TAG       IMAGE ID        CREATED               VIRTUAL SIZE
marc/varless-ubuntu    latest    2663d7b29f53    About a minute ago    275.5 MB

marc@ubuntu-docker:~$ sudo docker run -i -t marc/varless-ubuntu bash
root@6a714e6b6d01:/#

 

Since specifying sudo every time is quite frankly a pain in the *ss, let’s add an alias to our bash profile 🙂

echo "alias docker='sudo docker'" >> .bashrc

source .bashrc

marc@ubuntu-docker:~$ docker
[sudo] password for marc:

 

Now that we have everything in place, let’s create an Ubuntu container with netcat listening on port 8080:

marc@ubuntu-docker:~$ docker run -p :8080 -i -t ubuntu bash
root@e0b691d06473:/# apt-get install netcat -y

Setting up netcat (1.10-40) …
root@e0b691d06473:/# netcat -l 8080

ON A NEW TERMINAL

marc@ubuntu-docker:~$ docker port e0b691d06473 8080
0.0.0.0:49153
marc@ubuntu-docker:~$ curl localhost:49153

ON THE CONTAINER

root@e0b691d06473:/# netcat -l 8080
GET / HTTP/1.1
User-Agent: curl/7.35.0
Host: localhost:49153
Accept: */*

 

We receive the curl request in the netcat listening process. If we had created the container with the -p option set to 8080:8080, container port 8080 would have been mapped to host port 8080 instead of a random port. We’ll also use the -rm option (--rm now) to automatically delete the container when it exits.

marc@ubuntu-docker:~$ docker run -rm -p 8080:8080 -i -t ubuntu bash
Warning: '-rm' is deprecated, it will be replaced by '--rm' soon. See usage.

marc@ubuntu-docker:~$ docker port 06b064df2b6b 8080
0.0.0.0:8080

To see the stopped containers and their exit status we can use docker ps -a, and since we’ll accumulate a lot of them and we did nothing of relevance in these, let’s remove them:

marc@ubuntu-docker:~$ docker rm $(docker ps -aq)
e0b691d06473
993ebd7f46a0

 

I’ll do a series of posts on Docker, honestly because it’s very, very cool and it invites a lot of playing around :) . In the next post I’ll introduce some more concepts as well as examples of how to use Dockerfiles.

You can read more about it on http://docs.docker.com/

 

Docker Series Index:

  1. Docker Introduction
  2. Docker – the DockerFile
