Bash

bash history

Remove and prevent duplicates in bash history

Step 1 – remove existing duplicates in .bash_history and create a new temporary file named unduped_history.

# change to the home directory
cd ~/

# remove duplicates and create unduped_history
nl ~/.bash_history | sort -k2 -k1,1nr | uniq -f 1 | sort -n | cut -f2 > unduped_history

Step 2 – copy the unduped_history and overwrite .bash_history.

cp unduped_history ~/.bash_history

# unduped_history can be removed now
rm unduped_history 
  • For zsh history, just replace .bash_history with .zsh_history

Step 3 – to prevent duplicate history entries going forward, add this line to the .bashrc. Note that HISTCONTROL is a bash setting; a zsh equivalent is noted below.

export HISTCONTROL=ignoreboth:erasedups
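
HISTCONTROL is a bash setting and has no effect in zsh. A roughly equivalent zsh configuration is to add these options to the .zshrc:

setopt HIST_IGNORE_ALL_DUPS
setopt HIST_SAVE_NO_DUPS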

Vim

Probably the most needed shortcut for those who only use Vim when it pops up in Git: how do you exit the Vim editor?
Shift + :   (command mode)
q

git branch --list - exit Vim animation

Configuration

The vmap and imap settings are for copying to and pasting from the system clipboard. The set guifont string shown is for Linux; the string is slightly different for the same font on OS X and Windows. To get the font string for your system, set the font using the menu, e.g., Edit > Select Font. Then, in command mode (Esc), enter :set guifont?. Note that spaces need to be escaped with a backslash.

.vimrc
colorscheme koehler
set guifont=Fira\ Code\ weight=453\ 10
vmap <C-c> "+yi
vmap <C-x> "+c
vmap <C-v> c<ESC>"+p
imap <C-v> <ESC>"+pa

Here is the OS X configuration. Note that the key modifier has been changed from C (Ctrl) to D, mapping the ⌘ command key instead.

colorscheme koehler
set guifont=Fira\ Code\ Retina:h11
vmap <D-c> "+yi
vmap <D-x> "+c
vmap <D-v> c<ESC>"+p
imap <D-v> <ESC>"+pa

Commands

Vim Cheat Sheet

WSL ubuntu zsh nvm etc.

The Windows Subsystem for Linux (WSL) seems to be mature enough now to give it another shot. Copy and paste, and other simple annoyances that kept me away before are working better. Also, I’ve been reading that nvm (Node Version Manager) works now, so here goes.

As a precursor, note that my system is Windows 10 Professional (version 1709, build 16299.309) and is fully up to date with WSL enabled and Ubuntu installed from the Windows Store.

Ubuntu Update

If it has been a while, the first thing I like to do with the Ubuntu app is update and upgrade the Ubuntu Linux packages. First update the package database with apt-get update, then upgrade with apt-get upgrade or apt-get dist-upgrade. I prefer apt-get dist-upgrade since it will remove obsolete packages and add new ones as needed.
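
For example, from the Ubuntu terminal:

# refresh the package database
sudo apt-get update

# upgrade, removing obsolete packages and adding new ones as needed
sudo apt-get dist-upgrade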

  • Windows update does not change the WSL Ubuntu installation. To upgrade to a new release, run sudo do-release-upgrade in the Ubuntu Terminal.

Oh My Zsh

Oh My Zsh will spruce up your Ubuntu bash and add some additional functionality. Here is a screenshot of my Ubuntu app to illustrate.

WSL Ubuntu bash example oh-my-zsh cowsay etc.

Install Oh My Zsh just as you would on any other Ubuntu system. Note that Zsh is a prerequisite. For installation instructions and more information, visit https://github.com/robbyrussell/oh-my-zsh.

You’re going to want to change the font to fix unknown character issues. I’ve installed the DejaVu Sans Mono for Powerline font available here.

WSL Ubuntu bash font properties
  • To set WSL Ubuntu bash with zsh as your integrated terminal in VS Code, update your user settings file, e.g., %appdata%\Code\User\settings.json, with the following.
settings.json
"terminal.integrated.shell.windows": "C:\\Windows\\System32\\bash.exe",
"terminal.integrated.shellArgs.windows": ["-c", "zsh"]

ssh

If you want to use your existing git and/or other ssh keys, copy them from their folder in Windows into a .ssh folder under ubuntu. For example,

cd ~/
mkdir .ssh

# navigate to Windows .ssh folder, e.g.,
cd /mnt/c/Users/Gilfoyle/.ssh

# copy the keys into ubuntu .ssh folder
cp id_rsa ~/.ssh/
cp id_rsa.pub ~/.ssh/

# set permissions on the private key for github
chmod 600 ~/.ssh/id_rsa
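
The .ssh directory itself is typically kept private as well; tightening its permissions is optional but common practice.

# restrict the .ssh directory to the owner
chmod 700 ~/.ssh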

nvm

nvm is a utility for installing and managing multiple versions of node.js.

Install as an Oh My ZSH! custom plugin by cloning zsh-nvm into your custom plugins repo.

cd ~/.oh-my-zsh/custom/plugins

git clone https://github.com/lukechilds/zsh-nvm ~/.oh-my-zsh/custom/plugins/zsh-nvm

Then load as a plugin in your .zshrc profile. Note that plugins need to be added before oh-my-zsh.sh is sourced. For example, here is a snippet from my .zshrc profile.

.zshrc
# Which plugins would you like to load? (plugins can be found in ~/.oh-my-zsh/plugins/*)
# Custom plugins may be added to ~/.oh-my-zsh/custom/plugins/
# Example format: plugins=(rails git textmate ruby lighthouse)
# Add wisely, as too many plugins slow down shell startup.
plugins=(git)

plugins+=(zsh-nvm)

source $ZSH/oh-my-zsh.sh
  • If you want to install Yarn, use apt-get install --no-install-recommends yarn. By default, Yarn installs nodejs as a system-wide dependency.

After updating your .zshrc profile to load the nvm plugin, close and re-open the Ubuntu app; nvm will be installed when the plugin is loaded for the first time.

Once the install has completed, you can verify by running nvm, which should output the nvm --help contents.

Install the latest LTS version of Node.js, which at the time of this writing is version 8.11.1.

# install node
nvm install 8.11.1	
  • If you want to upgrade npm, use nvm install-latest-npm.
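
After the install, a quick sanity check shows the active node version and what nvm has installed.

# verify the active node version
node --version

# list node versions installed by nvm
nvm ls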

etc.

Bash-Snippets collection of bash scripts.

apt-get install fortune

apt-get install cowsay

apt-get install htop

Mount

Mount a drive such as a USB flash drive formatted as FAT, ExFAT or NTFS. For example, a drive listed as F:\ in Windows would be mounted as follows.

mkdir /mnt/f
mount -t drvfs f: /mnt/f
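
To detach the drive later, unmount it.

umount /mnt/f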

Bind

Bind a custom mount for any drives you want to access. For example, the C drive. This is also required should you want to use Docker volume mount paths as described in this post by Nick Janetakis.

Windows 10 1709

mkdir /c
mount --bind /mnt/c /c

# remove the bind mount
umount /c
  • Included in Windows 10 version 1803 (April 2018 Update) is support for WSL launch configuration. The /etc/wsl.conf file contains settings for drive mounting and network configuration. Read the Microsoft Developer blog post, Automatically Configuring WSL for more information on how to use wsl.conf

Windows 10 1803 WSL Configuration File

Create the /etc/wsl.conf file and set root = / so you can access drives with /c or /e instead of /mnt/c and /mnt/e.

wsl.conf
[automount]
root = /
options = "metadata"

File System

If you want to browse the files using Windows Explorer and/or backup the root file system, the location of these files is in the hidden AppData folder. For example:

%localappdata%\Packages\CanonicalGroupLimited.UbuntuonWindows_79rhkp1fndgsc\LocalState\rootfs

From Rich Turner’s blog post

Do not change Linux files using Windows apps and tools

More info available at WSL File System Support.

Resources

Nginx Reverse Proxy

Examples for using Jason Wilder’s Automated Nginx Reverse Proxy for Docker. This solution uses docker-compose files and Jason’s trusted reverse proxy image that contains a configuration using virtual hosts for routing Docker containers.

To set this up, create these directories in a project folder: nginx-proxy, whoami and an optional third one for a node.js app named nodeapp. For the nodeapp, create a Docker image using these instructions.

  • project
    • nginx-proxy
      • docker-compose.yml
    • nodeapp
      • docker-compose.yml
    • whoami
      • docker-compose.yml

Docker Network

Create a Docker network named nginx-proxy to bridge all of the containers together.

docker network create nginx-proxy
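
You can confirm the network was created and, later, see which containers have joined it.

# list networks
docker network ls

# show details for the nginx-proxy network
docker network inspect nginx-proxy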

In the nginx-proxy folder, create a new docker-compose.yml file as follows:

docker-compose.yml
version: '3'

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    restart: always
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

networks:
  default:
    external:
      name: nginx-proxy

In the whoami folder, create a new docker-compose.yml file as follows:

docker-compose.yml
version: '3'

services:
  app:
    image: emilevauge/whoami
    environment:
      - VIRTUAL_HOST=local.docker.whoami.com

networks:
  default:
    external:
      name: nginx-proxy

After building the myapp-node image as instructed here, create a new docker-compose.yml file in the nodeapp folder as follows:

docker-compose.yml
version: '3'

services:
  app:
    image: myapp-node
    user: "node"
    working_dir: /home/node/app
    environment:
      - NODE_ENV=production
      - VIRTUAL_HOST=local.docker.nodeapp.com
    expose:
      - "3000"
    command: "node app.js"

networks:
  default:
    external:
      name: nginx-proxy
  • Take note of the expose setting and the VIRTUAL_HOST environment variable in the docker-compose.yml files for the app services. The nginx-proxy service uses these values to route traffic to the respective container.

Hosts

To test out the nginx-proxy virtual host routing locally, update your hosts file as follows:

127.0.0.1    local.docker.nodeapp.com
127.0.0.1    local.docker.whoami.com

Docker Compose

Use docker-compose up to build, (re)create, start, and attach containers for a service. We’re adding the -d option for detached mode to run the container in the background.

First, bring up the nginx-proxy container.

# change to the nginx-proxy directory
cd nginx-proxy

docker-compose up -d

Bring up the whoami container. Verify that it is running as expected in a web browser at http://local.docker.whoami.com

# change to the whoami directory
cd ../whoami

docker-compose up -d

Bring up the nodeapp container. Verify that it is running as expected in a web browser at http://local.docker.nodeapp.com

# change to the nodeapp directory
cd ../nodeapp

docker-compose up -d

Use docker-compose down to stop and remove containers created by docker-compose up. Additionally, docker-compose stop can be used to stop a running container without removing it. Use docker-compose start to restart an existing container.

Add more folders and docker-compose.yml files as needed. The nginx-proxy container is mounted to the host Docker socket using the /var/run/docker.sock:/tmp/docker.sock volume. Every time a container is added, nginx-proxy picks up the events from the socket, creates the configuration file needed to route traffic, and restarts nginx, making the changes available immediately.
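
If you want to verify what was generated, the jwilder/nginx-proxy image writes its configuration to /etc/nginx/conf.d/default.conf inside the container; you can view it with docker-compose exec.

# from the nginx-proxy folder
docker-compose exec nginx-proxy cat /etc/nginx/conf.d/default.conf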

Resources

Node.js Koa Container

An example of how to create a Docker container application using Koa.js Next generation web framework for Node.js.

In the project root, initialize using Yarn or npm.

yarn init -y

Install dependencies.

yarn add koa
yarn add koa-body
yarn add koa-logger
yarn add koa-router
yarn add koa-views
yarn add swig

Create an app folder in the project root.

In the app folder, create a folder named lib. Then create this render.js module in the new lib folder.

render.js
/**
 * Module dependencies.
 */

const views = require('koa-views');
const path = require('path');

// setup views mapping .html
// to the swig template engine

module.exports = views(path.join(__dirname, '/../views'), {
  map: { html: 'swig' }
});

In the app folder, create a folder for templates named views. Then create this index.html template in the new views folder.

index.html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title>Document</title>
</head>
<body>
  <h1>{{content}}</h1>
</body>
</html>
  • Using Emmet, which is built into VS Code, you can create the index.html content by entering an exclamation mark on the first line and then pressing the Tab key.

In the app folder, create this app.js application entrypoint file.

app.js
const render = require('./lib/render');
const logger = require('koa-logger');
const router = require('koa-router')();
const koaBody = require('koa-body');

const Koa = require('koa');
const app = module.exports = new Koa();

// middleware

app.use(logger());

app.use(render);

app.use(koaBody());

// route definitions

router.get('/', index);

app.use(router.routes());

async function index(ctx) {
  await ctx.render('index', { content: 'Hello World' });
}

// listen

if (!module.parent) app.listen(3000);

Project structure

  • project
    • package.json
    • app
      • app.js
      • lib
        • render.js
      • views
        • index.html

Test the application locally in a browser at http://localhost:3000. Use Ctrl + C to kill the app after verifying it works.

cd app
node app.js

Docker

To containerize the application, create a docker-compose.yml file in the project root as follows.

docker-compose.yml
version: '3'

services:
  app:
    image: node:alpine
    user: "node"
    working_dir: /home/node/app
    environment:
      - NODE_ENV=production
    ports:
      - "3000:3000"
    volumes:
      - ./app:/home/node/app
      - ./node_modules:/home/node/node_modules
    expose:
      - "3000"
    command: "node app.js"

Build, (re)create, and start the container in detached mode. The app folder is attached as a volume and mapped to the working directory, /home/node/app, in the container. The node app.js command is executed in the container’s working directory.

docker-compose up -d

Test the application locally in a browser at http://localhost:3000. Use Ctrl + C to kill the app after verifying it works.

Stop and remove the container and volumes created by docker-compose up.

docker-compose down

Build a Docker image for better performance and deployment once initial development is completed. Instead of mapping the local app and node_modules folders to the container, copy files and folders into the container, set working directories and run commands as needed.

Create this Dockerfile in the project root

Dockerfile
FROM node:alpine
WORKDIR /home/node

# using wildcard (*) to copy both package.json and package-lock.json
COPY package*.json /home/node/
RUN yarn install --production

# create and set app directory as current dir
WORKDIR /home/node/app
COPY app/ /home/node/app/
EXPOSE 3000
CMD ["node", "app.js"]

Build the image and tag it. In the project root run the following command.

docker build -t myapp-node .

Test the new myapp-node Docker image using docker run. Same URL as before, http://localhost:3000.

docker run -u node -w /home/node/app -e NODE_ENV=production -p 3000:3000 --expose 3000 myapp-node node "app.js"

Stop the container using docker stop followed by the container ID. To get a list of all running containers, use docker ps --filter status=running.
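
For example (the container ID is a placeholder; use the one reported by docker ps):

# list running containers
docker ps --filter status=running

# stop the container by ID
docker stop <container-id>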

That’s it!

VPS Proof of Concept for Docker and Traefik

This is a proof of concept for a VPS that includes ConfigServer Firewall (csf), Docker, Open SSH Server and Traefik as a reverse proxy to host multiple applications on the same Docker host.

The following notes document my experience while creating and configuring the VPS proof of concept local Virtual Machine with Ubuntu Server 16.04 on a Windows 10 host.

Virtual Machine

Since I am on my Windows 10 laptop for this, I used Hyper-V, an optional feature of the Windows 10 Enterprise, Professional, and Education editions. Visit Install Hyper-V on Windows 10 | Microsoft Docs for more information on how to enable it. Virtual Machine creation from an ISO image is fairly straightforward. More info at Create a Virtual Machine with Hyper-V | Microsoft Docs.

For installation, I downloaded the 64-bit Ubuntu Server 16.04.3 LTS (ubuntu-16.04.3-server-amd64.iso) bootable image from ubuntu.com/download/server.

  • Docker requires a 64-bit installation with version 3.10 or higher of the Linux kernel.

Create a Virtual Switch

Open Hyper-V Manager and select Virtual Switch Manager, and from there, select Create a Virtual Switch. For example:

Name: WiFi Virtual Switch
Connection type: External Network – Killer Wireless n/a/ac 1535 Wireless Network Adapter

  • With Hyper-V, to get external internet/network access with your VM, you need to create an External Virtual Switch. This will use your network’s DHCP server (bridge mode, if you will).

SSH Server

Install and configure OpenSSH. Once OpenSSH is installed, the virtual machine can be run headless and administered using secure shell (ssh) just as we would a VPS.

For a VPS, it is recommended that a non-root user with sudo privileges is used instead of root. Therefore, create a new user and add them to the sudo group. Instructions are available in the sudo section on my Linux page. After that’s done, disallow root password login.
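
Here is a minimal sketch of those steps, assuming a hypothetical user named gilfoyle; see the Linux page for the full instructions.

# create a new user and add them to the sudo group
adduser gilfoyle
usermod -aG sudo gilfoyle

# disallow root password login over ssh by setting
# PermitRootLogin prohibit-password in /etc/ssh/sshd_config,
# then restart the ssh service
sudo systemctl restart ssh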

ufw instead of csf

The default firewall configuration tool for Ubuntu is ufw. To use ufw instead of ConfigServer Firewall, configure and enable ufw prior to installing Docker. For example:

# allow incoming port 22
ufw allow ssh

# allow incoming ports
ufw allow 80
ufw allow 443

# turn it on
ufw enable

# verify
ufw status verbose

To                 Action      From
--                 ------      ----
22                 ALLOW IN    Anywhere
80                 ALLOW IN    Anywhere
443                ALLOW IN    Anywhere
22 (v6)            ALLOW IN    Anywhere (v6)
80 (v6)            ALLOW IN    Anywhere (v6)
443 (v6)           ALLOW IN    Anywhere (v6)

Docker

To install the latest version of Docker, add the GPG key for the official Docker Ubuntu repository as a trusted APT repository key.

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the Docker repository to APT sources and update the package database.

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

sudo apt-get update

Ensure that APT pulls from the correct repository.

apt-cache policy docker-ce

Install the latest version of Docker CE.

sudo apt-get install -y docker-ce

Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it’s running.

sudo systemctl status docker

Docker Compose

To get the latest release, install Docker Compose from Docker’s GitHub repository. Visit https://github.com/docker/compose/releases to lookup the version number. Then use curl to output the download to /usr/local/bin/docker-compose. For example,

# check current release, update as needed
sudo curl -L https://github.com/docker/compose/releases/download/1.19.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

# make executable
sudo chmod +x /usr/local/bin/docker-compose

# verify installation
docker-compose --version

Config Server Firewall (CSF)

Config Server Firewall has a straightforward, easy to understand configuration file. CSF also comes with a Login Failure Daemon (LFD) that will alert you of large scale login attempts on ssh, mail and other servers. In addition to LFD’s real-time monitoring and automatic IP blocking, CSF allows you to whitelist or blacklist IP addresses.

Disable the default firewall service.

sudo ufw disable

Since CSF is currently not available in the Ubuntu repositories, download it from the ConfigServer website into the home directory.

cd ~/

wget http://download.configserver.com/csf.tgz

Unpack the downloaded TAR archive.

tar -xvzf csf.tgz

Run the install script.

cd csf
sudo bash install.sh

Verify the installation.

sudo perl /usr/local/csf/bin/csftest.pl

If everything is fine, you should see the following output.

Testing ip_tables/iptable_filter...OK
Testing ipt_LOG...OK
Testing ipt_multiport/xt_multiport...OK
Testing ipt_REJECT...OK
Testing ipt_state/xt_state...OK
Testing ipt_limit/xt_limit...OK
Testing ipt_recent...OK
Testing xt_connlimit...OK
Testing ipt_owner/xt_owner...OK
Testing iptable_nat/ipt_REDIRECT...OK
Testing iptable_nat/ipt_DNAT...OK

RESULT: csf should function on this server

Optional cleanup: remove unpacked TAR files after the install has been verified.

cd ../
rm -rf csf

The next page covers CSF Docker configuration, basic auth for port specific password protection and Traefik Docker configuration.

Docker Laravel Dev Environment

This post documents building a local Laravel development environment with Docker. Included are examples for debugging Laravel’s PHP with Xdebug using the Visual Studio Code editor. Source Code available on GitHub.

Install Laravel

In this example, we will be using the Composer Dependency Manager for PHP to install Laravel. To check if Composer is installed globally and in your PATH, enter composer --version in the CLI, for example,

composer --version
Composer version 1.4.1 2017-03-10 09:29:45

Get Composer if you need to install it.

With Composer, use the create-project command and the laravel/laravel package name followed by the directory to create the project in. The optional third argument is a version number. For example, to install Laravel 5.5 into a local directory such as ~/laravel/mysite on the host computer, open a CLI and execute the composer create-project command from the directory where you want to create your project:

cd ~/laravel

composer create-project laravel/laravel mysite "5.5.*"
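
Once Composer finishes, a quick sanity check from the new project directory confirms the Laravel version (assuming PHP is installed on the host).

cd mysite
php artisan --version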

Docker

Once Composer is finished creating the Laravel project, we are ready to create the Docker containers to host it. The following examples require both Docker and Docker Compose. Head on over to Docker, select Get Docker, and choose the platform of your choice to install Docker on your computer. This should install both Docker and Docker Compose. The examples in this post were written while using Docker Community Edition, Version 17.09.0-ce.

Start Docker as needed and test the installation. Open a CLI and issue these commands.

docker --version

docker-compose --version

For the Windows platform with IIS installed, if port 80 is conflicting, shut down the IIS web server. Open an admin command prompt and enter net stop was

PHP Container

This dockerfile builds from the official Docker Hub PHP image. At the time of this writing, PHP 7.1.10 was the latest non-RC version available in the library. If you want to use a different version, change the PHP version as needed, such as FROM php:5.6.31-fpm.

Create the app.dockerfile in the root of the Laravel project. For example, ~/laravel/mysite/app.dockerfile

app.dockerfile
FROM php:7.1.10-fpm

# php-fpm default WORKDIR is /var/www/html
# change it to /var/www
WORKDIR /var/www

RUN apt-get update && apt-get install -y \
    libmcrypt-dev \
    mysql-client --no-install-recommends \
    && docker-php-ext-install mcrypt pdo_mysql \
    && pecl install xdebug \
    && echo "zend_extension=$(find /usr/local/lib/php/extensions/ -name xdebug.so)\n" >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo "xdebug.remote_enable=1\n" >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo "xdebug.remote_autostart=1\n" >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo "xdebug.remote_connect_back=0\n" >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo "xdebug.remote_host=10.0.75.1\n" >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo "xdebug.remote_port=9001\n" >> /usr/local/etc/php/conf.d/xdebug.ini \
    && echo "xdebug.idekey=REMOTE\n" >> /usr/local/etc/php/conf.d/xdebug.ini

For Mac platforms, change the app.dockerfile xdebug remote host IP, e.g., xdebug.remote_host=10.254.254.254

Then create an alias for IP 10.254.254.254 to your existing subnet mask as follows:

sudo ifconfig en0 alias 10.254.254.254 255.255.255.0

If you want to remove the alias, use sudo ifconfig en0 -alias 10.254.254.254

Nginx Container

Create an nginx server block configuration file for PHP and the Laravel project root.

nginx.conf
server {
    listen 80;
    index index.php index.html;
    root /var/www/public;

    location / {
        try_files $uri /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass app:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}

This dockerfile builds from the official Docker Hub nginx image adding the nginx.conf Nginx server block configuration file to /etc/nginx/conf.d on the container.

web.dockerfile
FROM nginx:1.12

ADD nginx.conf /etc/nginx/conf.d/default.conf

The next page covers creating the docker-compose file; container build, create, start, and stop; and Xdebug plus launch config for VS Code PHP debugging.


Remote Device Dev Environment Access

There are various ways to connect a mobile device to the computer running the local development instance of your web application for testing and debugging. A proxy server and/or USB cable are popular options.

Proxy Server

This method is pretty reliable and does not limit you to using browser specific settings on either the host or remote. The host computer will need HTTP proxy server software installed for proxying requests and responses. Fiddler and Charles are popular choices for this.

Both listen on port :8888 by default.

Fiddler – Tools – Options
Charles – Proxy – Proxy Settings

Requirements – HTTP proxy server application software installed on the host computer.

Lookup the IPv4 local IP address of the dev environment host computer.

Command prompt

ipconfig /all

Terminal

# wifi
ipconfig getifaddr en1

# list 
ifconfig | grep inet

  • On a Mac, a quick way to get the local IP is by holding down the option key while clicking on the network icon in the system tray.

Device Settings

On the mobile device connected to the same Wi-Fi network as the host, modify the connection by entering the proxy server port and host computer local IP address.

Under Settings – Wi-Fi, long-press the connected network SSID and select Modify network. Under Advanced options | Proxy, select Manual. For the Proxy hostname, enter the host computer’s local IP address. Enter the proxy server port number and save.

Device – Network – Proxy Settings

Direct Connection with USB Cable

If you’re using Google’s Chrome browser on both the device and host, this resource covers how to use the browser developer tools to test and debug using a USB cable. I like using this method along with one of the proxy servers above – Dev Tools – Access Local Servers.

Requirements – Chrome 32 or later and USB drivers installed on the host computer. For Windows, check for the correct driver using Device Manager. Android 4.0 or later running Chrome for Android on the device with USB debugging enabled.

Resources

Docker Drupal Dev Environment

This post documents mounting a new Drupal Composer project as a volume in a Docker container. Features include Drush, Drupal Console, mailhog and phpMyAdmin. A Docker-sync configuration is available for OS X.

Composer

With the release of Drupal 8.3, using Composer to manage Drupal projects has vastly improved and is becoming a best practice. See Getting Started at getcomposer.org to install Composer.

Drupal Composer Template

Using Composer and the Composer template for Drupal projects to create a new Drupal site.

Open a CLI and execute the composer create-project command from the directory where you want to create your project. For example,

composer create-project drupal-composer/drupal-project:8.x-dev mysitefolder --stability dev --no-interaction

Docker

The docker-compose.yml file from Docker4Drupal image stack is optimized for local development. Use curl to download the file into the root of your Drupal project. For example,

PowerShell
curl https://raw.githubusercontent.com/wodby/docker4drupal/master/docker-compose.yml -Outfile mysitefolder\docker-compose.yml
Bash
curl -o mysitefolder/docker-compose.yml  https://raw.githubusercontent.com/wodby/docker4drupal/master/docker-compose.yml

Update the docker-compose.yml file. Create a named volume for data persistence in the mariadb node.

docker-compose.yml
...
services:
  mariadb:
    ...
    volumes:
      - mysql:/var/lib/mysql
  • Note that an ellipsis (…) in the code snippets is not a part of the code and is there only to denote code that is being skipped and not applicable to the example. View all of the docker-compose.yml updates on GitHub.

In the php node, comment out the vanilla Drupal image node and uncomment the image without Drupal. Additionally, change the volume to mount the relative local directory ./ to /var/www/html in the container.

...

  php:
    image: wodby/drupal-php:7.1-2.1.0
    ...
    volumes:
      - ./:/var/www/html

In the nginx node, change the volume to mount the relative local directory ./ to /var/www/html in the container.

...

  nginx:
    ...
    volumes:
      - ./:/var/www/html

For data persistence, in the volumes node at the bottom of the docker-compose.yml file, replace the unused codebase volume with mysql.

...

  volumes:
    mysql:

Run containers

From the site folder, e.g., mysitefolder, execute docker-compose:

docker-compose up -d

Drupal Console

Test drive Drupal Console by connecting to the php container.

docker-compose exec --user=82 php sh

List all of the Drupal Console commands.
Disconnect from the session with Ctrl+D

drupal list
Source Code

Resources

Acquia Certified Site Builder – Drupal 8

Earlier this week I took and passed the Acquia Certified Site Builder exam for Drupal 8. Here are some tips and a few resources that helped me prepare for this 50 question, 75 minute exam.

Acquia Academy

These free Drupal courses for the community were a big help. It has been several years since I have done any site building with Drupal and it was like going back to my home town, barely recognizable. The exam questions are mostly scenario based and the Building a Basic Site Using Drupal 8 course covers some of these. The videos also walk through the various administration menus that you will need to be familiar with.

wadmiraal/drupal

For the Acquia Academy Site Builder course, I used the Docker image created by wadmiraal to run Drupal locally. It was also handy for experimentation since the Drupal site could easily be reset to that of a fresh install. If you are already familiar with Docker, then this won’t add much to your plate on top of preparing for the Site Builder exam. Otherwise, I recommend sticking with the Drupal environment solutions covered in the courses. Another bonus is that this image comes with Devel and Drush already setup. Drupal Console is also ready for installation when advancing to the developer track.

Keys

Here are few tips and scenarios to consider when studying.

  • Understand how to create custom content types and how to modify their display.
  • Understand Taxonomy use for content categorization.
  • Understand how to create content relationships. There are good examples using Events and Sponsors in the Acquia Academy Site Builder course to demonstrate this.
  • Understand how to use blocks in the page layout. For example, using blocks to reuse header and footer content assets in various ways.
  • Understand Users, Roles and Permissions and how to change what a certain group of users can do in terms of content workflow.
  • Understand how to handle a compromised User account.
  • Understand how to handle comment spam.
  • Understand how to administer the core cache modules, including views data caching that is tucked away under the Advanced Views settings.
  • Understand the various Display modes.
  • Understand how to create pages and blocks with Views.
  • Understand best practices when communicating with the Drupal community. For example, when an issue is encountered with a module.
  • The Site Builder course at Acquia Academy uses a couple of contributed modules. The Devel module is used to generate a bunch of content nodes to demonstrate views and filters, etc. Pathauto is used to demonstrate generation of path aliases. Token and Ctools are required by Pathauto, so they also need to be installed. It is useful to know where to find and how to install modules. However, if a given scenario can be solved using core modules, that answer is preferred.

Other Resources

Docker WordPress Dev Environment

This post documents setting up local development environments with Docker using the official WordPress Docker repository as a base image. Page two includes configurations for remote PHP debugging with Xdebug and VS Code.

Docker Compose

Aside from making the container configuration easier to understand, Docker Compose coordinates the creation, start and stop of the containers used together in the environment.

The environment has multiple sites served on various ports, including at least one WordPress site, a phpMyAdmin site and MySQL. For virtual host names, nginx-proxy is being used for containers with virtual host environment properties set in their docker compose data. Additionally, the MySQL db container is accessible from the host at 127.0.0.1:8001.

Create this docker compose yaml file in the project’s root directory with the nginx-proxy configuration.

nginx-proxy.yml
version: "2"
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro

Create this docker compose yaml file for the WordPress stack. This includes the linked MariaDB database and phpMyAdmin containers from their official repositories. Xdebug is not included in the official WordPress image on Docker Hub and will not be included in this configuration since it is using unmodified images. Adding xdebug and rebuilding the image is covered on page two.

wp.yml
version: "2"
services:
  db:
    image: mariadb
    volumes:
      - mysql:/var/lib/mysql
    ports:
      - "8001:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=secret
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:latest
    ports:
      - "8002:80"
    links:
      - db:mysql
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - VIRTUAL_HOST=phpmyadmin.app
      - VIRTUAL_PORT=8002
  wp:
    image: wordpress
    volumes:
      - ./wordpress:/var/www/html
    ports:
      - "8003:80"
    links:
      - db:mysql
    environment:
      - WORDPRESS_DB_PASSWORD=secret
      - VIRTUAL_HOST=wordpress.dev
      - VIRTUAL_PORT=8003
volumes:
  mysql:

Update your system’s hosts file.

hosts
# Docker (nginx-proxy)
127.0.0.1 phpmyadmin.app
127.0.0.1 wordpress.dev

Navigate to your project root in your CLI, such as Terminal on OS X, PowerShell or Cygwin on Windows.

Create the Containers

Create new nginx-proxy and WordPress containers using the up command with docker-compose.

docker-compose -f nginx-proxy.yml -f wp.yml up
  • The -f flags specify the compose files to use. Multiple compose files are combined into a single configuration. This multiple file solution is for demonstration purposes. Here is a single file example that can be run without a file flag.

Stop Containers

Stop the containers without removing them.

docker-compose -f wp.yml -f nginx-proxy.yml stop

Start Containers

Start the stopped containers. Include the nginx-proxy.yml first so when the WordPress containers are started the virtual hosts can be dynamically configured.

docker-compose -f nginx-proxy.yml -f wp.yml start
  • If you have restarted your computer and another process is using the nginx-proxy port, e.g., 80, you will need to halt that process before starting the container.

Shutdown Containers

Shutdown the environment using the down command. If your data is not stored in a volume, it will not persist since this will remove the containers.

docker-compose -f wp.yml -f nginx-proxy.yml down

The next page covers adding Xdebug and configuring VS Code for remote debugging.


Google Maps API with Webpack

Google Map application that uses a draggable location marker to set address, longitude and latitude geocode inputs. This post covers some of the Webpack tools for both a local development environment and production build of the app constructed of a few JavaScript modules and css.

For application details, I encourage you to read my Google Maps API with Browserify post from a year ago. The following overview highlights building the refactored code with Webpack. This refactor brings the application more up to date with ES6 standards. Other changes include the removal of jQuery as a dependency. All of the source code is available for download and browsing at GitHub.

NPM

Here is the updated package.json for installing webpack, loaders and other dependencies. This file also defines scripts to start and run commands.

package.json
{
  "name": "gmap-webpack",
  "version": "0.0.1",
  "description": "location inputs populated by google maps api, built with webpack",
  "main": "",
  "scripts": {
    "start": "cross-env NODE_ENV=development webpack-dev-server --progress --inline --open",
    "build": "cross-env NODE_ENV=production webpack"
  },
  "devDependencies": {
    "babel-core": "^6.0.0",
    "babel-loader": "^6.0.0",
    "babel-preset-es2015": "^6.0.0",
    "cross-env": "^3.2.3",
    "css-loader": "^0.25.0",
    "style-loader": "^0.13.2",
    "webpack": "^2.2.1",
    "webpack-dev-server": "^2.4.1"
  }
}

The npm start command is using the cross-env plugin to set the Node environment variable properly for the platform. The webpack-dev-server then bundles the modules and launches a static web server inline for live reloading.

The npm run build command uses cross-env to set the environment flag. Then webpack bundles the modules according to the webpack.config below optimized with a source-map for production.

webpack.config
const path = require('path')
const webpack = require('webpack')

module.exports = {
    context: path.resolve(__dirname, './src'),
    entry: {
        app: './js/index.js',
    },
    output: {
        path: path.resolve(__dirname, './dist'),
        publicPath: '/dist/',
        filename: 'bundle.js'
    },
    devtool: '#eval-source-map',
    module: {
        rules: [{
            test: /\.js$/,
            loader: 'babel-loader',
            exclude: /node_modules/
        },
        {
            test: /\.css$/,
            use: ['style-loader','css-loader']
        }]
    }
}

if (process.env.NODE_ENV === 'production') {
  module.exports.devtool = '#source-map'
  module.exports.plugins = (module.exports.plugins || []).concat([
    new webpack.DefinePlugin({
      'process.env': {
        NODE_ENV: '"production"'
      }
    }),
    new webpack.optimize.UglifyJsPlugin({
      sourceMap: true,
      compress: {
        warnings: false
      }
    }),
    new webpack.LoaderOptionsPlugin({
      minimize: true
    })
  ])
}

In the webpack.config file, the output.publicPath property is used by the webpack-dev-server to determine where the bundles should be served from.

The devtool property controls if and how source maps are generated. eval-source-map is for faster rebuild when in development. In production mode, devtool is set to use the appropriate source-map style instead.

To test drive the app and experiment with the dev server and live reloading, follow these steps assuming you have Node.js installed.

  1. Download and extract the source code or use git to clone the uiCookbook repository from https://github.com/jimfrenette/uiCookbook
  2. Navigate to the uiCookbook/geocode/gmap-webpack folder in your CLI, such as terminal or Cygwin.
  3. Run npm i or npm install
  4. Run npm start to bring up the dev server, bundle the modules and load the app in the browser.

Source Code

A Vue.js version of the Google Maps application is available in the geocode/gmap-vue directory. The vue-cli webpack-simple scaffold was used to generate the Vue.js 2 project template.

Node.js

A collection of useful Node.js code snippets and resources.

HTTP Server

An easy way to get a Node.js http server up and running is by installing the http-server node module globally, npm install http-server -g. Here is an example using the -a option to specify an address to use. For more options, run http-server --help.

http-server -a localhost
  • The http-server default is to use address 0.0.0.0, which is reachable by all IP addresses of the local machine’s network interfaces. Therefore, it may not be safe to use this address on a public network.
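
http-server also accepts a port with the -p option (its default is 8080), so a locally scoped instance can be started like this:

http-server -a localhost -p 8080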

Shell Scripting

A collection of shell scripting resources.

Shell scripts are plain text files with an .sh extension.

If you get a permission denied error, the shell script may just need to be made executable. For example, for a script named backup.sh, using chmod u+x backup.sh will grant only the owner of the file execute permission. For all users, replace the u with an a. You could also use chmod +x backup.sh, which is the same as chmod a+x backup.sh.

chmod u+x backup.sh

Then try running the script again.

./backup.sh

rsync

This OS X example uses rsync to back up the Documents folder on the hard drive to an existing Documents folder on a Lexar USB stick. The --delete-before option deletes files from the target that do not exist in the source before the transfer begins.

if [ -d "/Volumes/Lexar/Documents" ];
then
    rsync -avP "/Volumes/Macintosh HD/Users/Woz/Documents/" "/Volumes/Lexar/Documents/" --delete-before
fi

Use the --exclude option to prevent specific files and folders from being synced. This example excludes .DS_Store files, .git and node_modules folders from the backup.

if [ -d "/Volumes/Lexar/Code" ];
then
    rsync -avP --exclude .DS_Store --exclude .git --exclude node_modules "/Volumes/Macintosh HD/Users/Dinesh/Code/" "/Volumes/Lexar/Code/"
fi
  • These shell script examples are written for Cygwin and /cygdrive/c is how the C: drive of Windows is accessed. For Cygwin, the rsync package would need to be installed. On OS X and Linux, rsync is usually included in the system install.

This example uses rsync to backup an existing file to a server share.

if [ -f /cygdrive/c/Users/Gilfoyle/Dropbox/Private/Hooli.xlsx ];
then
    rsync -avP /cygdrive/c/Users/Gilfoyle/Dropbox/Private/Hooli.xlsx //FILESERVER/share/Hooli.xlsx
fi

This example uses rsync to backup an existing Private Dropbox folder to a server share.

if [ -d /cygdrive/c/Users/Gilfoyle/Dropbox/Private ];
then
    rsync -avP /cygdrive/c/Users/Gilfoyle/Dropbox/Private/ //FILESERVER/share/
fi

tar

This example checks for the existing source directory and creates a compressed archive in the current directory with a datetime string included in the filename. The source directory and its contents will be added to the archive while excluding node_modules, .git and sass.cache.

if [ -d /cygdrive/c/Users/Dinesh/Code ];
then
    now=$(date +"%Y-%m-%dT%H%M%S")
    tar -zcvf code_$now.tar.gz --exclude .git --exclude node_modules --exclude sass.cache /cygdrive/c/Users/Dinesh/Code
fi

dos2unix

Find all files inside the current directory and execute dos2unix on each converting DOS format line breaks (CRLF) to Unix format (LF).

find . -type f -exec dos2unix {} \;

unix2dos

Find all files inside the current directory and execute unix2dos on each converting Unix format line breaks (LF) to DOS format (CRLF).

find . -type f -exec unix2dos {} \;

Resources

Windows PowerShell

A collection of Windows PowerShell resources

Customization / Modules

PowerShell modules can enhance functionality and the user interface. In this example, the profile is updated to use git over an SSH connection, readline settings for a bash like experience, custom color output and a custom prompt.

My Customized PowerShell

Execution Policy

This command sets the PowerShell execution policy.

If you encounter an error such as …

{script path} cannot be loaded.
The file {script path} is not digitally signed.
You cannot run this script on the current system.
For more information about running scripts and setting execution policy,
see about_Execution_Policies at
http://go.microsoft.com/fwlink/?LinkID=135170.

You can set a Bypass execution policy for the current session.

Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass

Menu Example

This PowerShell script contains a menu to execute various tasks using functions.

UtilityMenuExample.ps1
function ListProcess
{
    Get-Process | Group-Object Company | Sort-Object Count -Descending
}

function ListEnvVars
{
    Get-ChildItem env:
}

function ListEventLog
{
    Get-EventLog -List
}

function Cleanup
{
    Write-Host "Delete files from $env:temp older than 24 hours"
    Get-ChildItem -path $env:temp | where {$_.Lastwritetime -lt (date).addhours(-24)} | remove-item
    <# Clear-RecycleBin #>
    $Shell = New-Object -ComObject Shell.Application
    $RecBin = $Shell.Namespace(0xA)
    $RecBin.Items() | %{Remove-Item $_.Path -Recurse -Confirm:$false}
}

function ShowMenu
{
     param (
           [string]$Title = 'Menu'
     )
     Write-Host "====== $env:USERPROFILE - $Title ======="

     Write-Host "1: List Running Processes"
     Write-Host "2: List Environment Variables"
     Write-Host "3: List Event Log"
     Write-Host "4: Clean Temp and Recycle Bin"
     Write-Host "q: quit"
}

do
{
     ShowMenu
     $input = Read-Host "Please make a selection"
     switch ($input)
     {
        '1' {
            Clear-Host
            ListProcess
        }
        '2' {
            Clear-Host
            ListEnvVars
        }
        '3' {
            Clear-Host
            ListEventLog
        }
        '4' {
            Clear-Host
            Cleanup
        }
        'q' {
            return
        }
     }
     pause
}
until ($input -eq 'q')

Compress-Archive

This example creates a zip file of the Documents folder with a datetime string in the zip filename.

Compress-Archive "$env:USERPROFILE\Documents" "$env:USERPROFILE\Documents_$(get-date -f yyyyMMdd'T'HHmmss).zip"

Setting System Environment Variables

This PowerShell script accepts parameters for setting system environment variables for Java development in Windows.

java.ps1

Resources

Cygwin Oh My ZSH Recipe

This post documents my Cygwin + Oh My ZSH configuration which gives me a consistent cross platform terminal experience on Windows, OS X and Linux.

Web development workflow with git, node, npm, docker, vagrant, etc. is more comfortable when using the same familiar bash shell interface across different operating systems.

I don't always use the command line in Windows - But when I do, I prefer Cygwin

Install Cygwin

I used these settings with the Cygwin installer for 64-bit versions of Windows.

    Cygwin Setup
  1. Choose A Download Source: Install from Internet
  2. Select Root Install Directory: C:\Cygwin
    Install For All Users
  3. Select Local Package Directory: C:\Users\{UserName}\AppData\Local\Cygwin
    note: if your %userprofile% contains spaces, for example: C:\Users\Jared Dunn, this may cause issues.
  4. Select Your Internet Connection: Direct Connection
  5. Choose A Download Site: http://mirrors.xmission.com
    This mirror was reliable for me. They should generally all be up to date. It is recommended that you select a mirror site that is closest to you. Visit the Cygwin Mirror Sites page for more information.
  6. Select Packages:
    • curl: Multi-protocol file transfer tool
    • fzf-zsh: fzf key bindings for Z shell
    • fzf-zsh-completion: fzf completion for Z shell
    • git: Distributed version control system
    • gitk: Git repository browser
    • zsh: The Z-Shell

Configure Cygwin Home Directory

Make your Cygwin home directory the same as your Windows User profile.

Edit the Cygwin /etc/nsswitch.conf file.

Add or edit the following line: db_home: /%H

more info: Stack Overflow – How can I change my Cygwin home folder after installation?

Install Oh My ZSH

Launch Cygwin and install Oh My ZSH using curl

sh -c "$(curl -fsSL https://raw.github.com/robbyrussell/oh-my-zsh/master/tools/install.sh)"

I received this output when the Oh My ZSH install completed:

Looking for an existing zsh config...
Using the Oh My Zsh template file and adding it to ~/.zshrc
Copying your current PATH and adding it to the end of ~/.zshrc for you.
I can't change your shell automatically because this system does not have chsh.
Please manually change your default shell to zsh!

Edit the Cygwin /etc/nsswitch.conf file.

Add or edit the following line: db_shell: /bin/zsh

More info: Stack Overflow – Set Default Shell in Cygwin

Mintty Configuration

This section documents some additional personalization preferences. Access the options by selecting the Cygwin application icon in the Title bar. For Looks, I have medium transparency selected.

Cygwin – Options – Looks

For text, I have the Fira Code fonts installed on my system, so I have elected to use them, and since my display is UHD, I am using the Retina version. I have also set the font smoothing to full, which is equivalent to subpixel anti-aliasing (“ClearType”).

Cygwin – Options – Text

Lastly, I wanted to add some spacing between the window borders and command text. I discovered that this could be done by editing the mintty configuration file in my home directory. I added the following line to the file, Padding=16. The entire configuration file is shown below.

~/.minttyrc
BoldAsFont=-1
Columns=96
Rows=48
Transparency=medium
Padding=16
Font=Fira Code Retina
FontWeight=450
FontSmoothing=full

Test drive using this handy cal 2017 command to display a calendar.

cal 2017

Other Useful Tips & Tricks

Set the default editor for git

# nano
git config --global core.editor C:/Cygwin/bin/nano.exe

# vi
git config --global core.editor C:/Cygwin/bin/vi.exe

Unsupported Bracketed Paste Mode

When pasting (Shift + Insert), if weird characters are wrapping the clipboard contents, disable bracketed paste mode in the .zshrc since the terminal doesn’t support escape sequences.

Unsupported ZLE bracketed paste – in this figure, unexpected characters ^[[200~ ~ are wrapping the pasted string.
~/.zshrc
if [[ $TERM = dumb ]]; then
  unset zle_bracketed_paste
fi

Resources

Creating a Custom WordPress Page Template

I wanted a front-end page template to play around with the new REST API that is now in WordPress 4.7 core. A custom page template gives me the latitude I want at this stage of development, so here is how I did it.

Create the Template File

There are a couple of ways to do this.

Option 1

Clone and edit the existing page.php template. Open page.php in an editor and Save As… api-test.php. Remove the guts of the page containing the while loop for iterating through and displaying posts. For example, my clone of wp-content/themes/twentyseventeen/page.php saved as api-test.php looks like this:

api-test.php
<?php

get_header(); ?>

<div class="wrap">
    <div id="primary" class="content-area">
        <main id="main" class="site-main" role="main">


        </main><!-- #main -->
    </div><!-- #primary -->
</div><!-- .wrap -->

<?php get_footer();

The original page meta data properties in the comment at the top have been removed.

Option 2

Start from scratch with a completely blank slate. Create a new PHP file in the root of your active theme with a single line of code to set the Template Name meta data. Now we have a new custom template that can be selected for pages when editing.

api-test.php
<?php /* Template Name: api-test */ ?>

Option 3

This is nearly the same as option 1, only with the addition of the meta data comment to set the Template Name property, as in option 2.

Clone and edit the existing page.php template. Open page.php in an editor and Save As… api-test.php. Edit api-test.php replacing the meta data at the top with the Template Name property used in the blank slate option above. Then remove the guts of the page containing the while loop for iterating through and displaying posts. For example, my clone of wp-content/themes/twentyseventeen/page.php saved as api-test.php looks like this:

api-test.php
<?php /* Template Name: api-test */

get_header(); ?>

<div class="wrap">
    <div id="primary" class="content-area">
        <main id="main" class="site-main" role="main">


        </main><!-- #main -->
    </div><!-- #primary -->
</div><!-- .wrap -->

<?php get_footer();

Create the Page Slug

Log in to the dashboard and add a new page.

If using Option 1, update the permalink slug to match the page filename. For example, api-test

WordPress Page Permalink Slug
api-test page slug

Select the new template if using options 2 or 3.

WordPress Page Attributes - Custom Template
select api-test template

Resources

WordPress

Custom Dashboard

WordPress plugin stub to customize the dashboard for users without administrator access.

Source Code

Custom Homepage

Clone and edit the existing theme index.php template. Open index.php in an editor and Save As… home.php. This file will automatically take precedence over the theme’s index.php, and it will be displayed as the homepage.


Custom Widget

Step 1 – Copy Core Widget into your Theme

Copy the WordPress widget php file from wp-includes/widgets into your theme. This example copies the pages widget, class-wp-widget-pages.php into a widgets folder in mytheme.

cp wp-includes/widgets/class-wp-widget-pages.php wp-content/themes/mytheme/widgets/class-wp-widget-pages-custom.php

Step 2 – Edit the Widget

Update the widget, for example, rename the class.

<?php
/**
 * Widget API: WP_Widget_Pages_Custom class
 *
 * @package WordPress
 * @subpackage Widgets
 * @since 4.4.0
 */

/**
 * Modified copy of Core WP_Widget_Pages class used to implement a Pages widget.
 *
 * @since 2.8.0
 *
 * @see WP_Widget
 */
class WP_Widget_Pages_Custom extends WP_Widget {

Step 3 – Register the Widget

In your theme’s functions.php file, require the customized widget file and register it. For example,

namespace mytheme;

...

// modified copy of WP_Widget_Pages class
require_once get_template_directory() . '/widgets/class-wp-widget-pages-custom.php';

function widget_pages_custom_init(){
  register_widget('WP_Widget_Pages_Custom');
}

// register modified copy of WP_Widget_Pages class
add_action("widgets_init", __NAMESPACE__ . '\\widget_pages_custom_init');

Function Snippets

Various snippets for functions.php

Disable automatic <p> tags.

remove_filter( 'the_content', 'wpautop' );
remove_filter( 'the_excerpt', 'wpautop' );

Reference: codex.wordpress.org/Function_Reference/wpautop#Disabling_the_filter


Development Environments

Docker

WordPress from Development to Production using Docker

Vagrant

Laravel Homestead
If you develop Laravel apps on Homestead, you can also install WordPress on the same Virtual Machine. This example Homestead.yaml file has a configuration for four different sites: two Laravel sites, a WordPress site and a phpMyAdmin site.

Vagrant configuration designed for development of WordPress plugins, themes, or websites.

http://vccw.cc/

What’s Included

Git Setup

To provision from the master branch,

git clone https://github.com/vccw-team/vccw.git

cd vccw

cp provision/default.yml site.yml

# edit php.ini settings in site.yml as needed

vagrant up
  • The vccw.cc zip download link uses the release branch.

XAMPP, MAMP, IIS


Development Resources

wp-config.php

REPLACE

define('WP_DEBUG', false);
WITH
// Enable WP_DEBUG mode
define( 'WP_DEBUG', true );

// Enable Debug logging to the /wp-content/debug.log file
define( 'WP_DEBUG_LOG', true );

// Disable display of errors and warnings 
define( 'WP_DEBUG_DISPLAY', false );
@ini_set( 'display_errors', 0 );

Log Your Debug Statements

// array or object variable
error_log(print_r($myvariable, true));

// string
error_log($mystring);