WordPress from Development to Production using Docker Part II

This post continues where part one left off. Topics include MySQL data migration and staging and production Docker configurations with optional HTTPS.

To perform these steps, you will need to either:

  1. Create the files directly on the server using nano, vim or some other command line editor.
  2. Create the files on your local machine and copy them to the server using scp.

I prefer the latter method. Creating the files locally in a folder structure that mirrors my home directory on the server makes it easier to edit and deploy them to the respective server. Not to mention the local backup this creates. Use scp from the command line or use a GUI application such as FileZilla or WinSCP.
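
For example, a minimal scp sketch, assuming the example user gilfoyle and server IP 172.10.10.10 used later in this post, with the files staged locally under a docker folder:

# create the remote folder structure
ssh gilfoyle@172.10.10.10 "mkdir -p ~/docker/mysite"

# copy the Traefik files and the mysite compose file to the server
scp -r docker/traefik gilfoyle@172.10.10.10:~/docker/
scp docker/mysite/docker-compose.yml gilfoyle@172.10.10.10:~/docker/mysite/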

Reverse Proxy

For staging and production, create a docker-compose.yml file that defines a reverse-proxy service using the official Traefik image. It is a good idea to put the Traefik Docker files in their own folder to make it easier to add more sites later. Create a folder on the server for your Traefik files and save the compose yaml there.

docker-compose.yml
version: "3"

services:
  proxy:
    image: traefik
    restart: unless-stopped
    # command: --docker --logLevel=DEBUG
    command: --docker --logLevel=INFO
    networks:
      - webgateway
    ports:
      - "80:80"
      - "8080:8080"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./certs:/etc/traefik/certs
      # `chmod 600 acme.json`
      - ./acme.json:/etc/traefik/acme.json
      - ./traefik.toml:/etc/traefik/traefik.toml
      # - $PWD/.htpasswd:/etc/traefik/.htpasswd

networks:
  webgateway:
    driver: bridge
  • This Traefik docker-compose.yml uses Let’s Encrypt for free SSL certificate renewal. Create a traefik.toml and acme.json file to set that up. If you already have an SSL certificate for your site, store the certificate files in the certs folder so they can be mounted into the Traefik container.
traefik.toml
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      # comment these lines if not in use
      # non let's encrypt certificates
      [[entryPoints.https.tls.certificates]]
      certFile = "/etc/traefik/certs/myothersite.com.crt"
      keyFile = "/etc/traefik/certs/myothersite.com.key"

[acme]
email = "gilfoyle@piedpiper.com"
storage = "acme.json"
onHostRule = true
# caServer = "http://172.18.0.1:4000/directory"
entryPoint = "https"
  [acme.httpChallenge]
  entryPoint = "http"

The acme.json file is used by Traefik for certificate storage. Create this file and set the permissions to 600 so only the owner can read and write. For example.

# change to the folder where our Traefik docker files are stored
cd docker/traefik

# create an empty acme.json file
touch acme.json

# set permissions to 600
chmod 600 acme.json

Site

Once your docker image is deployed, you are ready to create a staging and production docker-compose.yml file for mysite.

docker-compose.yml
version: '2'

services:
  app:
    image: mysite:4.9.6-1.0
    restart: unless-stopped
    environment:
      - WORDPRESS_DB_HOST=mysql:3306
      - WORDPRESS_DB_USER=wordpress
      - WORDPRESS_DB_PASSWORD=changeme!
      - WORDPRESS_DB_NAME=wordpress
    labels:
      - traefik.frontend.rule=Host:local.vm.mysite.com
      - traefik.docker.network=traefik_webgateway
    volumes:
      - app:/var/www/html
    networks:
      - web
      - backend
    links:
      - mysql

  mysql:
    image: mariadb
    restart: unless-stopped
    labels:
      - "traefik.enable=false"
    environment:
      - MYSQL_ROOT_PASSWORD=changeme!
      - MYSQL_DATABASE=wordpress
      - MYSQL_USER=wordpress
      - MYSQL_PASSWORD=changeme!
    volumes:
      - mysql:/var/lib/mysql
    networks:
      - backend

networks:
  web:
    external:
      name: traefik_webgateway
  backend:
    driver: bridge

volumes:
  app:
  mysql:
  • For production, replace the label traefik.frontend.rule=Host:local.vm.mysite.com with the registered host. You can specify the host both with and without www, for example: traefik.frontend.rule=Host:mysite.com,www.mysite.com.

To recap, here is how we have structured our Docker files in the user’s home folder on the server for mounting them to volumes and running the containers with docker-compose. The docker/traefik/certs folder is only needed for non-Let’s Encrypt certificate storage, for example, certificates purchased from SSLs.com. Note that we did not create an additional site, e.g., myothersite. It is included below only to demonstrate that multiple sites can be added using this setup and they can use either Let’s Encrypt or purchased certs.

  • docker
    • mysite
      • docker-compose.yml
    • myothersite
      • docker-compose.yml
    • traefik
      • docker-compose.yml
      • acme.json
      • traefik.toml
      • certs

Run

Start the containers on the server using docker-compose. Do this by opening a secure shell using ssh. For example,

ssh gilfoyle@172.10.10.10

# change to the directory where you uploaded the
# traefik docker-compose.yml file using scp
cd docker/traefik

# start the traefik container
docker-compose up -d

# change to the directory where you uploaded the
# mysite docker-compose.yml file using scp
cd docker/mysite

# start the app and mysql containers defined in the mysite/docker-compose.yml
docker-compose up -d

To view the site on staging, update your system’s hosts file to direct local.vm.mysite.com requests to the staging server IP address. For example:

172.10.10.10    local.vm.mysite.com

Load the site in a browser and install WordPress, for example, http://local.vm.mysite.com.

Data Migration

Once your WordPress site is installed, you may want to restore the data from another mysql volume. Perhaps your dev server has the data you now want on your staging instance of WordPress.

1. Perform a mysql dump to get the data from the running mysql container on the dev server. For example, using the development environment from part one, our container is named my_wordpress_project_mariadb.

# get the mysql container name by listing all of the containers
docker ps -a

# dump the mysql data into the current directory
# from the running docker container named
# my_wordpress_project_mariadb
docker exec my_wordpress_project_mariadb /usr/bin/mysqldump -u root --password=password wordpress > wordpress.sql

2. Prepare your dev server mysql dump for loading into the staging server. Open the wordpress.sql file in a code editor and replace all instances of the host:port with the staging server host. For example, replace dev.docker.mysite.com:8000 with local.vm.mysite.com.
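
For example, a hedged sed sketch for the search and replace. Keep in mind that WordPress stores some options as serialized PHP, so a plain text replace is only safe when it does not break serialized string lengths; a serialization-aware tool such as wp-cli’s search-replace command, run against the database, is the safer route for complex sites.

# GNU sed (Linux): replace the dev host:port with the staging host
sed -i 's/dev\.docker\.mysite\.com:8000/local.vm.mysite.com/g' wordpress.sql

# BSD sed (OS X) needs an empty backup suffix
sed -i '' 's/dev\.docker\.mysite\.com:8000/local.vm.mysite.com/g' wordpress.sql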

3. Upload the updated wordpress.sql file to the staging server using scp.

4. ssh into the staging server and load the wordpress.sql into the running mysite mysql container.

# get the mysql container name by listing all of the containers
docker ps -a

# load the mysql data from the current directory
# into the running mysite_mysql_1 docker container
cat wordpress.sql | docker exec -i mysite_mysql_1 /usr/bin/mysql -u root --password=changeme! wordpress
  • After loading a new Docker image, you will need to remove the pre-existing app volume before bringing up the app container for the new image. Use docker volume ls to list the volumes and docker volume rm to remove the app volume. For example, docker volume rm mysite_app.
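
For example, a minimal sketch of that sequence, assuming the mysite project and volume names used above:

cd docker/mysite

# stop and remove the containers (named volumes are left in place)
docker-compose down

# list the volumes, then remove the stale app volume
docker volume ls
docker volume rm mysite_app

# recreate the containers from the newly loaded image
docker-compose up -d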

That’s it!

WordPress from Development to Production using Docker

This post will cover how to use Docker for a local WordPress development environment and how to deploy it to an Ubuntu Linux server running docker.

What you will need.

  • Docker
  • VPS with Docker (for prod deployment)
  • WordPress

The first step is to get your local development environment set up for your WordPress site. There are quite a few ways I have set up this environment in the past. For the last year or so, I’ve been using Wodby’s Docker-based WordPress Stack with its many options.

Using git, clone docker4wordpress into a site folder. For example mysite.

git clone https://github.com/wodby/docker4wordpress.git mysite

Update the .env file, for example:

PROJECT_NAME=mysite
PROJECT_BASE_URL=dev.docker.mysite.com

Update your system’s hosts file, for example:

127.0.0.1    dev.docker.mysite.com

Delete docker-compose.override.yml since it is only used to deploy vanilla WordPress.

Download WordPress and uncompress it into the site folder so you end up with a wordpress folder. For example mysite/wordpress.

Optionally, you could download and extract WordPress using the CLI, for example:

cd mysite
wget http://wordpress.org/latest.tar.gz
tar xfz latest.tar.gz

Docker Compose

Update the docker-compose.yml volume paths to mount the codebase to ./wordpress instead of the root of the site folder. This step is needed for the build configuration. For example, for both of these service nodes, nginx and php, replace ./:/var/www/html with ./wordpress:/var/www/html.

...
    volumes:
      - ./wordpress:/var/www/html

Create a named volume definition to persist the mysql data. At the bottom of the docker-compose.yml file, uncomment and update the volumes node. For example, replace #volumes: with the following:

volumes:
  mysql:

Update the mariadb service to use the named mysql volume. Under the mariadb services node, uncomment and update the volumes node. For example:


services:
  mariadb:

    ...

    volumes:
      - mysql:/var/lib/mysql

With Docker running, from the site directory, e.g., mysite, run docker-compose up -d to start containers.

If you’re on Windows and get the following error:

ERROR: for traefik Cannot create container for service traefik: Mount denied: The source path \\\\var\\\\run\\\\docker.sock:/var/run/docker.sock is not a valid Windows path

This worked for me using Cygwin. Before running docker-compose up -d, run export COMPOSE_CONVERT_WINDOWS_PATHS=1. For PowerShell, use $Env:COMPOSE_CONVERT_WINDOWS_PATHS=1. More info: github.com/docker/for-win/issues/1829

Now you’re ready to load the site in a browser, http://dev.docker.mysite.com:8000.

The WordPress install page should appear. After selecting your language, on the following screen, if using the default settings in the .env file, enter wordpress for Database Name, Username and Password. The Database Host value is the service name defined in the docker-compose.yml, e.g., mariadb.

WordPress Install – Database Configuration (docker4wordpress)

Custom Theme

To demonstrate getting a custom theme into the build, let’s make a copy of twentyseventeen and customize it.

cp -r wordpress/wp-content/themes/twentyseventeen wordpress/wp-content/themes/mytheme/

We’re just gonna make a small change to the site title text to show how to make the font size flexible below the default desktop viewport width.

Edit mytheme/style.css. Under Layout, add font-size: 5vw below the existing font-size rules. For example:

style.css
.site-title {
  ...

  font-size: 24px;
  font-size: 1.5rem;
  font-size: 5vw;
}

Log in to the site and activate mytheme to see the site title font size change when you narrow the browser below 768 pixels wide.

Plugins

To demonstrate the inclusion of plugins in the docker build, install a plugin that will be included in the codebase. For example, I like the Yoast SEO plugin. Instead of installing it using the dashboard, download and extract it. Copy the wordpress-seo folder into the wordpress/wp-content/plugins folder. You can verify the installation by logging into the site dashboard and inspecting the plugins page.
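
For example, a hedged CLI sketch that downloads the plugin from wordpress.org and copies it into the codebase; the download URL pattern and the use of unzip are assumptions, so adjust for your plugin and platform:

cd mysite

# download and extract the plugin
wget https://downloads.wordpress.org/plugin/wordpress-seo.zip
unzip wordpress-seo.zip

# copy the plugin folder into the codebase
cp -r wordpress-seo wordpress/wp-content/plugins/

# clean up
rm -rf wordpress-seo wordpress-seo.zip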

Docker Image

The docker image we build for staging and production will be based off the official WordPress image and will only need our themes, plugins and any other changes from the development environment. Create or download this Dockerfile into your site folder. The FROM instruction in this file is set to use the official WordPress base image. You should update this as needed to use the latest version of the image.

For example, to download the Dockerfile using wget

cd mysite
wget https://raw.githubusercontent.com/jimfrenette/docker4wordpress/master/Dockerfile.sh
Dockerfile
## https://github.com/docker-library/wordpress
FROM wordpress:4.9.6-php7.2-apache

## PHP extensions
## update and uncomment this next line as needed
# RUN docker-php-ext-install pdo pdo_mysql

## custom directories and files
## copy them here instead of volume
## /var/www/html/
## wordpress docker-entrypoint.sh runs
## chown -R www-data:www-data /usr/src/wordpress
COPY ./build/ /usr/src/wordpress/

I created build scripts that accept parameters and copy files from the development environment into a build folder for the Dockerfile. Update the script as needed, setting the THEME variable to match the folder name of your custom WordPress theme, e.g., THEME="mysite". Use the appropriate script for your environment:

If you’re in Windows, use the docker-build.ps1 PowerShell script.

If you’re in OS X or Linux, use the docker-build.sh bash script.

Download the build script into your site folder and run it. For example:

cd mysite

# download build script
wget https://raw.githubusercontent.com/jimfrenette/docker4wordpress/master/docker-build.sh

# if initial run,
# set script as executable
chmod +x docker-build.sh

# run
./docker-build.sh
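
In essence, the build script stages the custom theme and plugin files for the Dockerfile COPY instruction, builds the image and saves it to a .tar archive. A rough, hedged sketch of those core steps, using the theme and version from this post (the actual docker-build.sh handles parameters and more cases):

# set to your custom theme folder and image version
THEME="mytheme"
VERSION="4.9.6-1.0"

# stage theme and plugin files for the Dockerfile COPY
mkdir -p build/wp-content/themes build/wp-content/plugins
cp -r wordpress/wp-content/themes/$THEME build/wp-content/themes/
cp -r wordpress/wp-content/plugins/wordpress-seo build/wp-content/plugins/

# build the image and save it for deployment
docker build -t mysite:$VERSION .
mkdir -p docker/stage
docker save -o docker/stage/mysite-$VERSION.tar mysite:$VERSION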

This image shows running the docker-build shell script in the VS Code integrated terminal. Note the file explorer in the left pane contains the docker/stage/mysite-4.9.6-1.0.tar file that was created.

Running docker-build.sh in the VS Code integrated terminal

Deployment

After running the build script, a saved docker image .tar file should exist in the prod or stage folders. One way to deploy the image is to copy the .tar file to the server using scp and load it.

With example Ubuntu user gilfoyle, use scp from the stage folder.

# upload image
cd docker
scp mysite.tar gilfoyle@0.0.0.0:/home/gilfoyle/

Load the Docker image on the server.

# load image
ssh gilfoyle@0.0.0.0
docker load -i mysite.tar

Part two of this post includes stage and prod docker-compose.yml, migrating data, etc.

SSL Certificate Authority for Docker and Traefik

This post documents how to get https working on your local Docker development environment using Traefik as a reverse proxy for multiple services.

Step 1 – Root SSL Certificate

Create a subdirectory in your home folder, for example .ssl, to store generated keys, certificates and related files.

Using OpenSSL, generate the private key file, rootCA.key.

For Windows, use Cygwin, Git Bash, PowerShell or other Unix-like CLI.

mkdir .ssl
cd .ssl

openssl genrsa -des3 -out rootCA.key 2048

This root certificate can be used to sign any number of certificates you may need to generate for individual domains.

Using the key, create a new root SSL certificate file named rootCA.cer. This certificate will be valid for 10 years (3650 days).

openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 3650 -out rootCA.cer

Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:Random
Locality Name (eg, city) []:Random
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Random
Organizational Unit Name (eg, section) []:Random
Common Name (e.g. server FQDN or YOUR name) []:Local Certificate
Email Address []:example@domain.com
  • If you need a .pem file, simply copy and rename the generated certificate file. For example, cp rootCA.cer rootCA.pem

Import The Root Certificate

The host system needs to have the root certificate imported and set as trusted so all individual certificates issued by it are also trusted.

On Windows, press Win + R and enter mmc to open the Microsoft Management Console.

Then select File, Add/Remove Snap-ins.

Select Certificates from the available snap-ins and press the Add button.

From the Certificates Snap-in dialog, select Computer account > Local Account, and press the Finish button to close the window.

Then press the OK button in the Add or Remove Snap-in window.

Select Certificates and right-click Trusted Root Certification Authorities > All Tasks > Import to open the Certificate Import Wizard dialog. Browse for the rootCA.cer digital certificate file.

On macOS, open Keychain Access and go to the Certificates category in your System keychain.

Import the rootCA.pem using File > Import Items.

Double click the imported certificate and change the When using this certificate: dropdown to Always Trust in the Trust section.

Step 2 – Domain SSL Certificate

Create a sub directory in the .ssl folder for the local.docker.whoami.com domain certificate.

cd .ssl
mkdir local.docker.whoami.com

These steps are done in the local.docker.whoami.com folder.

Create an OpenSSL configuration file named server.csr.cnf.

server.csr.cnf
[req]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn

[dn]
C=US
ST=RandomState
L=RandomCity
O=RandomOrganization
OU=RandomOrganizationUnit
emailAddress=noreply@local.docker.whoami.com
CN = local.docker.whoami.com

Create a v3.ext file for the X509 v3 certificate.

v3.ext
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = local.docker.whoami.com

Create a certificate signing request (server.csr) and key (server.key) for the local.docker.whoami.com domain using the configuration settings stored in server.csr.cnf.

openssl req -new -sha256 -nodes -out server.csr -newkey rsa:2048 -keyout server.key -config <( cat server.csr.cnf )

The certificate signing request is then signed with the root SSL certificate to create the local.docker.whoami.com domain certificate. The output is a server.crt certificate file.

openssl x509 -req -in server.csr -CA ../rootCA.pem -CAkey ../rootCA.key -CAcreateserial -out server.crt -days 730 -sha256 -extfile v3.ext
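
Optionally, verify the new certificate against the root CA and confirm the subject alternative name was applied. A couple of hedged openssl checks:

# verify the certificate chain against the root CA
openssl verify -CAfile ../rootCA.pem server.crt

# confirm the subject alternative name
openssl x509 -in server.crt -noout -text | grep -A 1 "Subject Alternative Name"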

Browser - Root Certificate Import

Import the root CA certificate into the web browser trusted root certificate store.

Chrome: Settings (chrome://settings/) > Privacy and security > Manage certificates > Import, then follow the prompts to browse for .ssl/rootCA.cer. In the Certificate Import Wizard, Certificate Store step, choose Automatically select the certificate store based on the type of certificate.

Firefox: Options > Privacy & Security, Certificates | View Certificates, Authorities > Import. Browse for the .ssl/rootCA.cer and select all 3 Trust checkboxes.

Traefik

Create a traefik project folder with a certs sub directory.

Copy the domain certificate and key into the traefik/certs folder and rename them to match the respective domain. For example,

cp server.crt traefik/certs/local.docker.whoami.com.crt
cp server.key traefik/certs/local.docker.whoami.com.key

In the traefik project folder, create this traefik.toml file.

traefik.toml
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
  address = ":80"
  [entryPoints.https]
  address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
      certFile = "/etc/traefik/certs/local.docker.whoami.com.crt"
      keyFile = "/etc/traefik/certs/local.docker.whoami.com.key"

In the traefik project folder, create this docker-compose.yml file to configure Traefik services and networks.

docker-compose.yml
version: "3"

services:
  proxy:
    image: traefik
    restart: unless-stopped
    command: --docker --logLevel=DEBUG
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./certs:/etc/traefik/certs
      - ./traefik.toml:/etc/traefik/traefik.toml

networks:
  default:
    external:
      name: webgateway

Create this docker-compose.yml file in the whoami project folder.

docker-compose.yml
version: "3"

services:
  app:
    image: emilevauge/whoami
    networks:
      - web
    labels:
      - "traefik.backend=whoami"
      - "traefik.frontend.rule=Host:local.docker.whoami.com"

networks:
  web:
    external:
      name: webgateway

Hosts

Add 127.0.0.1 local.docker.whoami.com to your computer’s hosts file.

Docker

Create a network named webgateway.

docker network create webgateway

Bring up the traefik container followed by the whoami container using docker-compose. For example,

cd ~/traefik
docker-compose up -d

cd ~/whoami
docker-compose up -d

Navigate to https://local.docker.whoami.com in a browser that has imported the root CA certificate into its trusted root store.
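
To check the certificate from the command line as well, a hedged curl example that trusts the root CA explicitly (assuming the hosts entry added above and that rootCA.cer is PEM encoded, which is the openssl default here):

curl --cacert ~/.ssl/rootCA.cer -I https://local.docker.whoami.com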

Nginx Reverse Proxy

Examples for using Jason Wilder’s Automated Nginx Reverse Proxy for Docker. This solution uses docker-compose files and Jason’s trusted reverse proxy image that contains a configuration using virtual hosts for routing Docker containers.

To set this up, create these directories in a project folder: nginx-proxy, whoami and an optional third one for a node.js app named nodeapp. For the nodeapp, create a Docker image using these instructions.

  • project
    • nginx-proxy
      • docker-compose.yml
    • nodeapp
      • docker-compose.yml
    • whoami
      • docker-compose.yml

Docker Network

Create a Docker network named nginx-proxy to bridge all of the containers together.

docker network create nginx-proxy

In the nginx-proxy folder, create a new docker-compose.yml file as follows:

docker-compose.yml
version: '3'

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: always
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

networks:
  default:
    external:
      name: nginx-proxy

In the whoami folder, create a new docker-compose.yml file as follows:

docker-compose.yml
version: '3'

services:
  app:
    image: emilevauge/whoami
    environment:
      - VIRTUAL_HOST=local.docker.whoami.com

networks:
  default:
    external:
      name: nginx-proxy

After building the myapp-node image as instructed here, create a new docker-compose.yml file in the nodeapp folder as follows:

docker-compose.yml
version: '3'

services:
  app:
    image: myapp-node
    user: "node"
    working_dir: /home/node/app
    environment:
      - NODE_ENV=production
      - VIRTUAL_HOST=local.docker.nodeapp.com
    expose:
      - "3000"
    command: "node app.js"

networks:
  default:
    external:
      name: nginx-proxy
  • Take note of the expose setting and the VIRTUAL_HOST environment variable in the docker-compose.yml files for the app services. The nginx-proxy service uses these values to route traffic to the respective container.

Hosts

To test out the nginx-proxy virtual host routing locally, update your hosts file as follows:

127.0.0.1    local.docker.nodeapp.com
127.0.0.1    local.docker.whoami.com

Docker Compose

Use docker-compose up to build, (re)create, start, and attach containers for a service. We’re adding the -d option for detached mode to run the container in the background.

First, bring up the nginx-proxy container.

# change to the nginx-proxy directory
cd nginx-proxy

docker-compose up -d

Bring up the whoami container. Verify that it is running as expected in a web browser at http://local.docker.whoami.com

# change to the whoami directory
cd ../whoami

docker-compose up -d

Bring up the nodeapp container. Verify that it is running as expected in a web browser at http://local.docker.nodeapp.com

# change to the nodeapp directory
cd ../nodeapp

docker-compose up -d

Use docker-compose down to stop and remove containers created by docker-compose up. Additionally, docker-compose stop can be used to stop a running container without removing it. Use docker-compose start to restart an existing container.

Add more folders and docker-compose.yml files as needed. The nginx-proxy container is mounted to the host Docker socket using the /var/run/docker.sock:/tmp/docker.sock volume. Every time a container is added, nginx-proxy receives the event from the socket, generates the configuration needed to route traffic and reloads nginx, making the changes available immediately.
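
To see the virtual host configuration nginx-proxy generated, you can print it from the running container. A hedged example, run from the nginx-proxy folder:

# dump the generated nginx configuration
docker-compose exec nginx-proxy cat /etc/nginx/conf.d/default.conf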


Node.js Koa Container

An example of how to create a Docker container application using Koa.js, a next generation web framework for Node.js.

In the project root, initialize using Yarn or npm.

yarn init -y

Install dependencies.

yarn add koa
yarn add koa-body
yarn add koa-logger
yarn add koa-router
yarn add koa-views
yarn add swig

Create an app folder in the project root.

In the app folder, create a folder named lib. Then create this render.js module in the new lib folder.

render.js
/**
 * Module dependencies.
 */

const views = require('koa-views');
const path = require('path');

// setup views mapping .html
// to the swig template engine

module.exports = views(path.join(__dirname, '/../views'), {
  map: { html: 'swig' }
});

In the app folder, create a folder for templates named views. Then create this index.html template in the new views folder.

index.html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title>Document</title>
</head>
<body>
  <h1>{{content}}</h1>
</body>
</html>
  • Using Emmet, which is built into VS Code, you can create the index.html content by entering an exclamation mark on the first line and then pressing the Tab key.

In the app folder, create this app.js application entrypoint file.

app.js
const render = require('./lib/render');
const logger = require('koa-logger');
const router = require('koa-router')();
const koaBody = require('koa-body');

const Koa = require('koa');
const app = module.exports = new Koa();

// middleware

app.use(logger());

app.use(render);

app.use(koaBody());

// route definitions

router.get('/', index);

app.use(router.routes());

async function index(ctx) {
  await ctx.render('index', { content: 'Hello World' });
}

// listen

if (!module.parent) app.listen(3000);

Project structure

  • project
    • package.json
    • app
      • app.js
      • lib
        • render.js
      • views
        • index.html

Run the application and test it locally in a browser at http://localhost:3000. Use Ctrl + C to stop the app after verifying it works.

cd app
node app.js

Docker

To containerize the application, create a docker-compose.yml file in the project root as follows.

docker-compose.yml
version: '3'

services:
  app:
    image: node:alpine
    user: "node"
    working_dir: /home/node/app
    environment:
      - NODE_ENV=production
    ports:
      - "3000:3000"
    volumes:
      - ./app:/home/node/app
      - ./node_modules:/home/node/node_modules
    expose:
      - "3000"
    command: "node app.js"

Build, (re)create and start the container in detached mode. The app folder is attached as a volume and mapped to the working directory /home/node/app in the container. The node app.js command is executed in the container’s working directory.

docker-compose up -d

Test the containerized application locally in a browser at http://localhost:3000.

Stop and remove the container and volumes created by docker-compose up.

docker-compose down

Build a Docker image for better performance and deployment once initial development is completed. Instead of mapping the local app and node_modules folders to the container, copy files and folders into the container, set working directories and run commands as needed.

Create this Dockerfile in the project root

Dockerfile
FROM node:alpine
WORKDIR /home/node

# using wildcard (*) to copy both package.json and package-lock.json
COPY package*.json /home/node/
RUN yarn install --production

# create and set app directory as current dir
WORKDIR /home/node/app
COPY app/ /home/node/app/
EXPOSE 3000
CMD ["node", "app.js"]

Build the image and tag it. In the project root run the following command.

docker build -t myapp-node .

Test the new myapp-node Docker image using docker run. Same URL as before, http://localhost:3000.

docker run -u node -w /home/node/app -e NODE_ENV=production -p 3000:3000 --expose 3000 myapp-node node "app.js"

Stop the container using docker stop followed by the container ID. To get a list of all running containers, use docker ps --filter status=running.
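
For example, a hedged one-liner that finds the container started from the myapp-node image and stops it:

# stop the running container created from the myapp-node image
docker stop $(docker ps -q --filter ancestor=myapp-node)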

That’s it!

VPS Proof of Concept for Docker and Traefik

This is a proof of concept for a VPS that includes ConfigServer Firewall (csf), Docker, Open SSH Server and Traefik as a reverse proxy to host multiple applications on the same Docker host.

The following notes document my experience while creating and configuring the VPS proof of concept local Virtual Machine with Ubuntu Server 16.04 on a Windows 10 host.

Virtual Machine

Since I am on my Windows 10 laptop for this, I used Hyper-V, an optional feature of Windows 10 Enterprise, Professional, or Education versions. Visit Install Hyper-V on Windows 10 | Microsoft Docs for more information on how to enable it. Virtual Machine creation from an iso image is fairly straight forward. More info at Create a Virtual Machine with Hyper-V | Microsoft Docs.

For installation, I downloaded the 64-bit Ubuntu Server 16.04.3 LTS (ubuntu-16.04.3-server-amd64.iso) bootable image from ubuntu.com/download/server.

  • Docker requires a 64-bit installation with version 3.10 or higher of the Linux kernel.

Create a Virtual Switch

Open Hyper-V Manager and select Virtual Switch Manager, then select Create a Virtual Switch. For example:

Name: WiFi Virtual Switch
Connection type: External Network, Killer Wireless n/a/ac 1535 Wireless Network Adapter

  • With Hyper-V, to get external internet/network access from your VM, you need to create an External Virtual Switch. This uses your network’s DHCP server, bridge mode if you will.

SSH Server

Install and configure OpenSSH. Once OpenSSH is installed, the virtual machine can be run headless and administered using secure shell (ssh) just as we would a VPS.

For a VPS, it is recommended that a non-root user with sudo privileges is used instead of root. Therefore, create a new user and add them to the sudo group. Instructions are available in the sudo section on my Linux page. After that’s done, disallow root password login.
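
A minimal sketch of those steps, assuming the example user gilfoyle; the sed edit is one common way to set PermitRootLogin and assumes the directive is present and uncommented in sshd_config:

# create the user and add it to the sudo group
sudo adduser gilfoyle
sudo usermod -aG sudo gilfoyle

# disallow root password login over ssh (adjust if the directive is commented out)
sudo sed -i 's/^PermitRootLogin.*/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config
sudo systemctl restart ssh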

ufw instead of csf

The default firewall configuration tool for Ubuntu is ufw. To use ufw instead of ConfigServer Firewall, configure and enable ufw prior to installing Docker. For example:

# allow incoming port 22
ufw allow ssh

# allow incoming ports
ufw allow 80
ufw allow 443

# turn it on
ufw enable

# verify
ufw status verbose

To                 Action      From
--                 ------      ----
22                 ALLOW IN    Anywhere
80                 ALLOW IN    Anywhere
443                ALLOW IN    Anywhere
22 (v6)            ALLOW IN    Anywhere (v6)
80 (v6)            ALLOW IN    Anywhere (v6)
443 (v6)           ALLOW IN    Anywhere (v6)

Docker

To install the latest version of Docker, add the GPG key for the official Docker Ubuntu repository as a trusted APT repository key.

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the Docker repository to APT sources and update the package database.

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

sudo apt-get update

Ensure that APT pulls from the correct repository.

apt-cache policy docker-ce

Install the latest version of Docker CE.

sudo apt-get install -y docker-ce

Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it’s running.

sudo systemctl status docker
systemctl status docker output

Docker Compose

To get the latest release, install Docker Compose from Docker’s GitHub repository. Visit https://github.com/docker/compose/releases to lookup the version number. Then use curl to output the download to /usr/local/bin/docker-compose. For example,

# check current release, update as needed
sudo curl -L https://github.com/docker/compose/releases/download/1.19.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

# make executable
sudo chmod +x /usr/local/bin/docker-compose

# verify installation
docker-compose --version

Config Server Firewall (CSF)

ConfigServer Firewall has a straightforward, easy to understand configuration file. CSF comes with a Login Failure Daemon (LFD) that alerts you to large scale login attempts on ssh, mail and other servers. CSF also allows you to whitelist or blacklist IP addresses in addition to the LFD real time monitoring and automatic IP blocking.

Disable the default firewall service.

sudo ufw disable

Since CSF is currently not available in the Ubuntu repositories, download it from the ConfigServer website into the home directory.

cd ~/

wget http://download.configserver.com/csf.tgz

Unpack the downloaded TAR archive.

tar -xvzf csf.tgz

Run the install script.

cd csf
sudo bash install.sh

Verify the installation:
sudo perl /usr/local/csf/bin/csftest.pl

If everything is fine, you should see the following output.

Testing ip_tables/iptable_filter...OK
Testing ipt_LOG...OK
Testing ipt_multiport/xt_multiport...OK
Testing ipt_REJECT...OK
Testing ipt_state/xt_state...OK
Testing ipt_limit/xt_limit...OK
Testing ipt_recent...OK
Testing xt_connlimit...OK
Testing ipt_owner/xt_owner...OK
Testing iptable_nat/ipt_REDIRECT...OK
Testing iptable_nat/ipt_DNAT...OK

RESULT: csf should function on this server

Optional cleanup: remove unpacked TAR files after the install has been verified.

cd ../
rm -rf csf
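
With CSF installed and enabled, a few hedged examples of common csf commands for allowing, denying and reloading:

# allow (whitelist) an IP address
sudo csf -a 203.0.113.10

# deny (blacklist) an IP address
sudo csf -d 198.51.100.20

# remove a deny entry
sudo csf -dr 198.51.100.20

# restart csf and lfd after editing /etc/csf/csf.conf
sudo csf -ra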

The next page covers CSF Docker configuration, basic auth for port specific password protection and Traefik Docker configuration.

Docker Laravel Dev Environment

This post documents building a local Laravel development environment with Docker. Included are examples for debugging Laravel’s PHP with Xdebug using the Visual Studio Code editor. Source Code available on GitHub.

Install Laravel

In this example, we will be using the Composer Dependency Manager for PHP to install Laravel. To check if Composer is installed globally and in your PATH, enter composer --version in the CLI, for example,

composer --version
Composer version 1.4.1 2017-03-10 09:29:45

Get Composer if you need to install it.

With Composer, use the create-project command with the laravel/laravel package name followed by the directory to create the project in. The optional third argument is a version number. For example, to install Laravel 5.5 into a local directory such as home/laravel/mysite on the host computer, open a CLI and execute composer create-project from the directory where you want to create your project:

cd ~/laravel

composer create-project laravel/laravel mysite "5.5.*"

Docker

Once Composer is finished creating the Laravel project, we are ready to create the Docker containers to host it. The following examples require both Docker and Docker Compose. Head on over to Docker, select Get Docker and the platform of your choice to install Docker on your computer. This should install both Docker and Docker Compose. The examples in this post were written using Docker Community Edition, Version 17.09.0-ce.

Start Docker as needed and test the installation. Open a CLI and issue these commands.

docker --version

docker-compose --version

For the Windows platform with IIS installed, if port 80 is conflicting, shut down the IIS web server. Open an admin command prompt and enter net stop was.

PHP Container

This dockerfile builds from the official Docker Hub PHP image. At the time of this writing, PHP 7.1.10 was the latest non-RC version available in the library. If you want to use a different version, change the PHP version as needed, such as FROM php:5.6.31-fpm.

Create the app.dockerfile in the root of the laravel project. For example, home/laravel/mysite/app.dockerfile

app.dockerfile
FROM php:7.1.10-fpm

# php-fpm default WORKDIR is /var/www/html
# change it to /var/www
WORKDIR /var/www

RUN apt-get update && apt-get install -y \
    libmcrypt-dev \
    mysql-client --no-install-recommends \
    && docker-php-ext-install mcrypt pdo_mysql \
    && pecl install xdebug \
    && echo "zend_extension=$(find /usr/local/lib/php/extensions/ -name xdebug.so)\n" >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo "xdebug.remote_enable=1\n" >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo "xdebug.remote_autostart=1\n" >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo "xdebug.remote_connect_back=0\n" >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo "xdebug.remote_host=10.0.75.1\n" >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo "xdebug.remote_port=9001\n" >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo "xdebug.idekey=REMOTE\n" >> /usr/local/etc/php/conf.d/xdebug.ini

For Mac platforms, change the app.dockerfile xdebug remote host IP, e.g., xdebug.remote_host=10.254.254.254

Then create an alias for IP 10.254.254.254 to your existing subnet mask as follows:

sudo ifconfig en0 alias 10.254.254.254 255.255.255.0

If you want to remove the alias, use sudo ifconfig en0 -alias 10.254.254.254

Nginx Container

Create an nginx server block configuration file for PHP and the Laravel project root.

nginx.conf
server {
    listen 80;
    index index.php index.html;
    root /var/www/public;

    location / {
        try_files $uri /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}

This dockerfile builds from the official Docker Hub nginx image adding the nginx.conf Nginx server block configuration file to /etc/nginx/conf.d on the container.

web.dockerfile
FROM nginx:1.12

ADD nginx.conf /etc/nginx/conf.d/default.conf
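
To sanity check the two Dockerfiles before wiring them into docker-compose on the next page, you can build them directly. A hedged example from the Laravel project root; the image names are arbitrary:

# build the php-fpm and nginx images
docker build -f app.dockerfile -t mysite-app .
docker build -f web.dockerfile -t mysite-web .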

The next page covers creating the docker-compose file; container build, create, start and stop; and the Xdebug and launch configuration for VS Code PHP debugging.


Docker Drupal Dev Environment

This post documents mounting a new Drupal Composer project as a volume in a Docker container. Features include Drush, Drupal Console, mailhog and phpMyAdmin. A Docker-sync configuration is available for OS X.

Composer

With the release of Drupal 8.3, using Composer to manage Drupal projects has vastly improved and is becoming a best practice. See Getting Started at getcomposer.org to install Composer.

Drupal Composer Template

Use Composer and the Composer template for Drupal projects to create a new Drupal site.

Open a CLI and execute the composer create-project command from the directory where you want to create your project. For example,

composer create-project drupal-composer/drupal-project:8.x-dev mysitefolder --stability dev --no-interaction

Docker

The docker-compose.yml file from the Docker4Drupal image stack is optimized for local development. Use curl to download the file into the root of your Drupal project. For example,

PowerShell
curl https://raw.githubusercontent.com/wodby/docker4drupal/master/docker-compose.yml -Outfile mysitefolder\docker-compose.yml
Bash
curl -o mysitefolder/docker-compose.yml  https://raw.githubusercontent.com/wodby/docker4drupal/master/docker-compose.yml

Update the docker-compose.yml file. Create a named volume for data persistence in the mariadb node.

docker-compose.yml
...
services:
  mariadb:
    ...
    volumes:
      - mysql:/var/lib/mysql
  • Note that an ellipsis (…) in the code snippets is not part of the code; it only denotes code that is skipped and not applicable to the example. View all of the docker-compose.yml updates on GitHub.

In the php node, comment out the vanilla Drupal image node and uncomment the image without Drupal. Additionally, change the volume to mount the relative local directory ./ to /var/www/html in the container.

...

  php:
    image: wodby/drupal-php:7.1-2.1.0
    ...
    volumes:
      - ./:/var/www/html

In the nginx node, change the volume to mount the relative local directory ./ to /var/www/html in the container.

...

  nginx:
    ...
    volumes:
      - ./:/var/www/html

For data persistence, in the volumes node at the bottom of the docker-compose.yml file, replace the unused codebase volume with mysql.

...

  volumes:
    mysql:

Run containers

From the site folder, e.g., mysitefolder, execute docker-compose:

docker-compose up -d

Drupal Console

Test drive Drupal Console by connecting to the php container.

docker-compose exec --user=82 php sh

List all of the Drupal Console commands.

drupal list

Disconnect from the session with Ctrl+D

Docker WordPress Dev Environment

This post documents setting up local development environments with Docker using the official WordPress Docker repository as a base image. Page two includes configurations for remote PHP debugging with Xdebug and VS Code.

Docker Compose

Aside from making the container configuration easier to understand, Docker Compose coordinates the creation, start and stop of the containers used together in the environment.

The environment has multiple sites served on various ports, including at least one WordPress site, a phpMyAdmin site and MySQL. For virtual host names, nginx-proxy routes requests to containers that have virtual host environment properties set in their docker compose data. Additionally, the MySQL db container is accessible from the host at 127.0.0.1:8001.

Create this docker compose yaml file in the project’s root directory with the nginx-proxy configuration.

nginx-proxy.yml
version: "2"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

Create this docker compose yaml file for the WordPress stack. This includes the linked MariaDB database and phpMyAdmin containers from their official repositories. Xdebug is not included in the official WordPress image on Docker Hub and will not be included in this configuration since it is using unmodified images. Adding xdebug and rebuilding the image is covered on page two.

wp.yml
version: "2"
services:
  db:
    image: mariadb
    volumes:
      - mysql:/var/lib/mysql
    ports:
      - "8001:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=secret
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    ports:
      - "8002:80"
    links:
      - db:mysql
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - VIRTUAL_HOST=phpmyadmin.app
      - VIRTUAL_PORT=8002
  wp:
    image: wordpress
    volumes:
      - ./wordpress:/var/www/html
    ports:
      - "8003:80"
    links:
      - db:mysql
    environment:
      - WORDPRESS_DB_PASSWORD=secret
      - VIRTUAL_HOST=wordpress.dev
      - VIRTUAL_PORT=8003
volumes:
  mysql:

Update your system’s hosts file.

hosts
# Docker (nginx-proxy)
127.0.0.1 phpmyadmin.app
127.0.0.1 wordpress.dev

Navigate to your project root in your CLI, such as Terminal on OS X, PowerShell or Cygwin on Windows.

Create the Containers

Create new nginx-proxy and WordPress containers using the up command with docker-compose.

docker-compose -f nginx-proxy.yml -f wp.yml up
  • The -f flags specify the compose files to use. Multiple compose files are combined into a single configuration. This multiple file solution is for demonstration purposes. Here is a single file example that can be run without a file flag.

Stop Containers

Stop the containers without removing them.

docker-compose -f wp.yml -f nginx-proxy.yml stop

Start Containers

Start the stopped containers. Include the nginx-proxy.yml first so when the WordPress containers are started the virtual hosts can be dynamically configured.

docker-compose -f nginx-proxy.yml -f wp.yml start
  • If you have restarted your computer and another process is using the nginx-proxy port, e.g., 80, you will need to halt that process before starting the container.
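
On Linux or OS X, a hedged way to find the process bound to port 80 before starting the container:

# show what is listening on port 80
sudo lsof -nP -iTCP:80 -sTCP:LISTEN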

Shutdown Containers

Shut down the environment using the down command. If your data is not stored in a volume, it will not persist since this will remove the containers.

docker-compose -f wp.yml -f nginx-proxy.yml down

The next page covers adding Xdebug and configuring VS Code for remote debugging.


Docker

This page is a collection of various Docker commands and resources. For more info, read the Docker Documentation. To learn what Docker is, the official Docker Overview is an excellent resource.

Containers

# list
docker ps -a
 
# list running
docker ps --filter status=running

# stop all
docker stop $(docker ps -a -q)

# remove all
docker rm $(docker ps -a -q)

# remove
docker rm [CONTAINER ID]

# remove associated data volumes
docker rm -v [CONTAINER ID]

# logs
docker logs [CONTAINER ID]

# shell into
docker exec -it [CONTAINER ID] sh

Images

# list
docker images -a

# remove image
docker rmi [IMAGE ID]

# remove dangling images
docker rmi $(docker images -q -f dangling=true)

# remove all images
docker rmi $(docker images -a -q)

# save image
docker save -o myimage.tar myimage:latest

# load image into docker
docker load -i myimage.tar
  • docker save and docker load are handy for deploying an image build to a server, e.g., copy the image .tar file to the server using scp and then load it into Docker.

Volumes

# list
docker volume ls

# remove volume
docker volume rm [VOLUME NAME]

# remove all volumes
docker volume rm $(docker volume ls -q)

# list dangling volumes
docker volume ls -f dangling=true

# remove dangling volumes (1.13+)
docker volume prune

System

# docker disk usage
docker system df

# system-wide info 
docker system info

# real time server events
docker system events

# removes
# - all stopped containers
# - all networks not used by at least one container
# - all dangling images
# - all build cache
docker system prune

Docker Compose

# shell access to any container
docker-compose exec [SERVICE] sh

# using user www-data (82) for Nginx and PHP containers
docker-compose exec --user=82 php sh

Resources for Docker Management

  • >dockly_
    Immersive terminal interface for managing docker containers and services
  • MicroBadger
    Inspect container images and their metadata
  • Portainer
    Portainer is an open-source lightweight management UI which allows you to easily manage your Docker hosts or Swarm clusters.

Other Resources

Convert docker run commands into docker-compose blocks with composerize.com


  • aem-docker
    Adobe Experience Manager Docker configuration for author, publisher and dispatcher.
  • clean-docker-for-mac.sh

Drupal Dev Environment

Drupal Composer project mounted as a volume in a Docker container. Features include Drush, Drupal Console, mailhog and phpMyAdmin. A Docker-sync configuration is available for OS X.

Docker Drupal Dev Environment – Drupal Composer

Laravel Dev Environment

Building a local Laravel development environment with Docker. Included are examples for debugging Laravel’s PHP with Xdebug using the Visual Studio Code editor. Source Code available on GitHub.

WordPress Dev Environment

This version documents how to use Docker for a local WordPress development environment with Traefik and how to deploy it to an Ubuntu Linux server running docker.

WordPress from Development to Production using Docker

This version documents a local development environment for WordPress using nginx-proxy, Docker and Docker Compose. Aside from making the container configuration easier to understand, Docker Compose coordinates the creation, start and stop of the containers used together in this environment.

Docker WordPress Dev Environment – Xdebug with VS Code