Using Docker Compose for a Multi-App Portfolio
Maintaining a portfolio showcase of different apps can be a daunting task. Applications, particularly ones that are not developed in the same environment, often make contradictory assumptions about what they can do inside that environment. For example:
- They assume they have exclusive use of a port number.
- They make assumptions about where their root URL lives.
  - 0.0.0.0/my_home_page vs. 0.0.0.0/app_A/my_home_page
- They assume they have full control of their dependencies.
  - Application A requires version 1 of package P, but Application B requires version 2.
- They assume other parts of a service ecosystem are available and that they have free rein over them.
  - Both A and B assume they have full control of a PostgreSQL DB (or another resource).
The common problem amongst all of these difficulties is that Applications A and B are not properly isolated from each other.
Introducing Docker
Docker containers are lightweight application environments that abstract away the specifics of the OS on which the application is running. From the application's point of view, it has complete control over how the OS is configured (e.g. where files are stored, what volumes are mounted, what packages are installed, etc.). Containers are not, however, guest OSes; they still run on the same kernel as the Host OS!
Containers allow application developers to isolate their apps with minimal overhead: they do not need to run their application on its own physical machine or on a VM. Containers are lightweight by design so that they can be booted and torn down quickly, and so that many containers can run on the same Host OS.
The lightweight isolation provided by Docker containers is a solution to the exact problem we identified with our multi-app architecture. Let's try using Docker to set up our multi-app portfolio.
Setting up Docker
Take a moment to go through the Docker installation guide for your OS: https://docs.docker.com/engine/installation/
NOTE: If you are using Mac OS X you should probably take a look at: https://github.com/adlogix/docker-machine-nfs (or something similar). It allows you to configure your Docker Machine to use NFS. Without NFS (or another file system sharing mechanism) you might see some discrepancies between the data that a Docker container 'sees' in a mounted volume and the actual data you are trying to mount.
For the rest of this post I'm going to refer to the host running the Docker Engine as DOCKER_MACHINE_HOST. For Windows and OS X this will be the IP of the Docker Machine, and for Linux (usually) it is 0.0.0.0.
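If you are on Windows or OS X and aren't sure what that IP is, docker-machine can tell you. This assumes your machine is named default, which is the name the standard installer uses:
$ docker-machine ip default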
Example: Setting up a web server with a Docker container
Later in this post I am going to be using nginx as a reverse proxy, so let's use nginx to set up our web server. nginx provides an official image on Docker Hub that we can use to get started quickly. The image is relatively barebones; it has nginx installed and that's about it.
It can be run by itself without much configuration:
$ docker pull nginx
$ docker run -p 80:80 nginx
Notice the -p 80:80 flag. This flag links port 80 on the Docker container to port 80 on the Docker host. Without this, the Docker container would be completely isolated from the host machine.
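The two port numbers don't have to match. For example, you could publish the container's port 80 on the host's port 8080 instead:
$ docker run -p 8080:80 nginx
The server would then be reachable at http://DOCKER_MACHINE_HOST:8080 rather than on port 80.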
With the 80:80 mapping above, navigating to http://DOCKER_MACHINE_HOST shows the default "Welcome to nginx!" page, since the server is still unconfigured.
In order to start serving static content from this container, however, we will need to mount a 'volume' containing the content we'd like to serve. Let's create some simple static content and store it in html/index.html:
<html>
<h1>Hello World!</h1>
</html>
Now we will 'mount' this directory into the Docker container in a place where nginx can find it. Docker uses a special syntax for mounting volumes to a container: -v local_dir:container_dir, where local_dir must be an absolute path. Running the following command will start the nginx container like before, but now serving your custom content.
$ docker run -p 80:80 -v "$(pwd)/html":/usr/share/nginx/html:ro nginx
NOTE: On OS X and Windows, the full path to the html folder must be one your Docker Machine can see (since your Docker Machine does not use the same file system as your Host OS). By default, you can only mount directories that are subdirectories of /Users on OS X and C:\Users on Windows.
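To verify that the container is serving your content, request the page from the host (substituting your actual DOCKER_MACHINE_HOST):
$ curl http://DOCKER_MACHINE_HOST/
You should get back the contents of html/index.html instead of the default nginx page.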
Baking data into containers
The static content could also be stored inside the container image, instead of just mounted as a volume. In order to include the static content in the container image, you could use the following Dockerfile:
FROM nginx
COPY html /usr/share/nginx/html
The image can then be built and a container run from it to produce the same result:
$ docker build -t static-nginx .
$ docker run -p 80:80 static-nginx
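If you want to confirm that the content really is baked into the image rather than mounted at runtime, docker history lists the image's layers; the COPY instruction from the Dockerfile shows up as its own layer on top of the nginx base image:
$ docker history static-nginx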
Docker Compose
So far we've created a single isolated application that can serve static content, but we haven't solved the complete problem: getting isolated applications to work together. To integrate our isolated applications into a single portfolio we will use Docker Compose.
Compose is a tool for defining and running multi-container Docker applications.
Docker Compose uses a YAML configuration file to define how to load an application composed of multiple Docker containers working together.
static:
image: nginx
volumes:
- ./html:/usr/share/nginx/html
ports:
- "8080:80"
This file will start the same nginx container we set up above (though published on host port 8080 this time), now managed by Docker Compose. To use it, run the following command in the same directory as the docker-compose.yaml file:
$ docker-compose up
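A few other standard Compose subcommands are useful while iterating:
$ docker-compose up -d   # start the services in the background
$ docker-compose ps      # list the running containers
$ docker-compose logs    # view the container output
$ docker-compose stop    # stop the services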
Multiple Services
Of course, Compose doesn’t just recreate existing Docker functionality for a single container. It lets you define how these containers interact with each other.
Suppose we wanted to move our Hello World application onto a different endpoint. Instead of using http://my.site.me to access the content, we actually want to use http://my.site.me/hello. We could just change the nginx config file in our original container … or, we could leave the original container alone and just create a new container that dispatches connections to the right container. Doing it this way lets us keep the application logic in the original container exactly the same as when it is a standalone application. The only thing that has changed is the root endpoint.
CAVEAT: If your application relies on knowing where it is rooted, then you will have to make this configurable in your application. Most web frameworks support this kind of behavior, but you will need to make sure that you don't bake the root assumption into code that you write (e.g. JavaScript that always requests /my_endpoint without taking the root into account).
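As a hypothetical illustration (reusing the /my_endpoint placeholder from above), the difference is between hard-coding an absolute path and letting the browser resolve a relative one:
<script>
  // Breaks once the app is mounted under a prefix like /appA/,
  // because it always requests the site root:
  // fetch("/my_endpoint");

  // Resolves relative to the current page, so it works under any root:
  fetch("my_endpoint");
</script>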
Let's create a container that can proxy traffic to other containers. nginx is a natural choice for a reverse proxy, so we'll go ahead and reuse the image we've been working with. Since nginx relies on configuration files to determine where to proxy traffic, let's first create a configuration file that can route requests to different hosts (based on the incoming URL).
server {
server_name www.myportfolio.me;
location /appA/ {
proxy_pass http://appA/;
}
}
Notice how I'm structuring this nginx file: all requests that are prefixed with appA in the URL will be proxied to the domain name appA. There is a little bit of magic here (I haven't explained how the nginx Docker container will know how to send a request to http://appA/; more on that below), but the point is that this configuration file lets us switch on a URL prefix in order to determine which application to send a request to.
The nginx Docker container we've been using expects to find configuration files mounted at /etc/nginx/conf.d/, so to use this configuration file with Docker Compose you could use the following file:
proxy:
image: nginx
volumes:
- ./conf.d:/etc/nginx/conf.d
ports:
- "8080:80"
You could run this file right away, but it wouldn't produce any interesting results: http://appA won't resolve to anything, so nginx has nowhere to proxy requests to and they would instantly fail. In order to resolve that name we will need to make use of the networking layer in Docker Compose.
Networking with Docker Compose
Networking in Docker Compose consists primarily of 'linking' two services together. Once two services are linked together, they may freely communicate over any port that has a service listening on it. Docker Compose manages these connections (and potential port collisions) by resolving DNS requests. Compose creates DNS records that allow services to access other services using a hostname that matches the name of the service.
For example, services linked using the following Docker Compose YAML file:
db:
image: postgres
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
links:
- db
Would allow the web service to communicate with the db service by accessing http://db or tcp://db:5432 (you know, for postgres).
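If you want to see that resolution in action, you can run a one-off command in the web container. This particular check assumes Python is available in the image, which it is here since the service runs manage.py:
$ docker-compose run web python -c "import socket; print(socket.gethostbyname('db'))"
This prints the private IP that the db hostname resolves to from inside the web container.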
Back to our application
Now that I've explained the magic we used earlier in order to proxy a request to http://appA, let's return to our project of creating a multi-app portfolio. Let's create a docker-compose.yaml file that can proxy requests using Compose's networking capabilities.
proxy:
image: nginx
volumes:
- ./reverse_proxy.conf:/etc/nginx/conf.d/reverse_proxy.conf
ports:
- "80:80"
links:
- appA
appA:
image: nginx
volumes:
- ./html:/usr/share/nginx/html
If you run Docker Compose with this file, you will be able to view the default nginx response (from our first nginx proxy) by going to http://DOCKER_MACHINE_HOST/.
If you then go to http://DOCKER_MACHINE_HOST/appA/ (note the trailing slash, matching the location block above) you will see the Hello World response we created earlier. We've just set up our first application in our multi-app portfolio!
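You can also check from the command line:
$ curl http://DOCKER_MACHINE_HOST/appA/
which should return the Hello World page.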
Full Example
So far we’ve only created a single sub-app using Docker Compose. Let’s finally build an actual multi-app portfolio!
First let's create some static content to serve from each application (remember, each application can be far more complicated than serving static content; I am using static content in order to move quickly):
<!-- html/appA/index.html -->
<html>
<h1>Hello World! I'm application A</h1>
</html>
<!-- html/appB/index.html -->
<html>
<h1>Hello World! I'm application B</h1>
</html>
<!-- html/appC/index.html -->
<html>
<h1>Hello World! I'm application C</h1>
</html>
Then let's create the configuration for the proxy that will switch between all three applications.
server {
server_name www.myportfolio.me;
location /appA/ {
proxy_pass http://appA/;
}
location /appB/ {
proxy_pass http://appB/;
}
location /appC/ {
proxy_pass http://appC/;
}
}
And finally, let’s tie everything together using Compose.
proxy:
image: nginx
volumes:
- ./reverse_proxy.conf:/etc/nginx/conf.d/reverse_proxy.conf
ports:
- "80:80"
links:
- appA
- appB
- appC
appA:
image: nginx
volumes:
- ./html/appA:/usr/share/nginx/html
appB:
image: nginx
volumes:
- ./html/appB:/usr/share/nginx/html
appC:
image: nginx
volumes:
- ./html/appC:/usr/share/nginx/html
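For reference, here is the directory layout this Compose file assumes (the paths come straight from the volume mounts above):
.
├── docker-compose.yaml
├── reverse_proxy.conf
└── html
    ├── appA
    │   └── index.html
    ├── appB
    │   └── index.html
    └── appC
        └── index.html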
Run docker-compose up -d with this file, and you should be able to see each of the three applications' index pages at their respective endpoints.
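A quick smoke test from the command line:
$ curl http://DOCKER_MACHINE_HOST/appA/
$ curl http://DOCKER_MACHINE_HOST/appB/
$ curl http://DOCKER_MACHINE_HOST/appC/
Each request should return the corresponding application's index page.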
NOTE: This is a very simplified example; I don't recommend going through this whole procedure if you only have a small number of static pages to serve. This approach is designed for presenting full applications that have been developed independently of each other.