Let’s Make Wildcard Certificates with Certbot, Docker, and Route53

In case you haven’t heard, Let’s Encrypt now supports wildcard certificates as a feature of the new ACME v2 protocol. However, current client support is still somewhat limited, as the Let’s Encrypt CA requires domain validation via a DNS-01 challenge. To further complicate things, DNS-01 requires programmatic access to your nameservers. But let’s assume you are already using Route53 and you’re looking for the simplest way to begin issuing wildcard certificates for your hosted zones. You’ll need an up-to-date ACME client, such as the latest version of Certbot. Chances are your server distro is not that bleeding-edge. That’s where Docker comes in.

Let’s take a look at how to quickly set up a Docker container for Certbot to issue wildcard certificates via Let’s Encrypt.

What You’ll Need

You’ll need a few things to get started:

  • A domain name set up to use Amazon Route53 nameservers.
  • A set of AWS credentials configured with the appropriate Route53 permissions (details below).
  • A functioning Docker instance on your web server.

Create the Docker Script

Let’s start by creating a working directory for our Docker image:

$ mkdir ~/certbot-docker
$ cd ~/certbot-docker

Now throw the following code into a script inside *~/certbot-docker/*:


sudo docker run -it --rm --name certbot \
--env AWS_CONFIG_FILE=/etc/aws-config \
-v "${PWD}/aws-config:/etc/aws-config" \
-v "/etc/letsencrypt:/etc/letsencrypt" \
-v "/var/lib/letsencrypt:/var/lib/letsencrypt" \
certbot/dns-route53 certonly \
--server https://acme-v02.api.letsencrypt.org/directory \
-d "example.com" -d "*.example.com"

(Substitute your own domain for example.com. The --server URL is the ACME v2 endpoint, which is required for wildcard issuance.)

Notice the part where it mentions the file ${PWD}/aws-config. Next we need to create that file using your AWS API credentials.

Provide Your AWS API Credentials

The ~/certbot-docker/aws-config file should look like this (substitute your own credentials; the values below are AWS’s documented example placeholders):

[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
If you haven’t set up the API credentials yet, log in to the AWS IAM console and create a group with the following policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "route53:ListHostedZones",
                "route53:GetChange"
            ],
            "Resource": "*"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "route53:ChangeResourceRecordSets",
            "Resource": "arn:aws:route53:::hostedzone/*"
        }
    ]
}
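For the curious, here’s roughly what the certbot/dns-route53 plugin does with that ChangeResourceRecordSets permission: it UPSERTs a TXT record at _acme-challenge.&lt;domain&gt; containing the challenge response. A minimal sketch of the change batch it sends (the function name and TTL here are illustrative, not the plugin’s actual code):

```python
def acme_txt_change_batch(domain, challenge_value):
    """Build a Route53 ChangeBatch that UPSERTs a DNS-01 challenge record.

    This mirrors what an ACME client does for DNS-01 validation; the
    exact TTL and comment are illustrative.
    """
    return {
        "Comment": "ACME DNS-01 challenge",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "_acme-challenge.{}.".format(domain),
                "Type": "TXT",
                "TTL": 10,
                # TXT record values must be quoted
                "ResourceRecords": [{"Value": '"{}"'.format(challenge_value)}],
            },
        }],
    }

batch = acme_txt_change_batch("example.com", "token-digest-goes-here")
```

In practice this batch would be handed to Route53’s ChangeResourceRecordSets API; the plugin handles all of that for you.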

Special thanks to the author of this forum post for pointing me in the right direction.


AJAX Efficiency with jQuery and HTML5

It’s super-common in front-end development to load a lot of content dynamically via AJAX. It’s tempting to rely on jQuery selectors to initialize these elements. We’ve all seen it: a huge chunk of code with twenty anonymous functions, each one a callback to a jQuery `.each()` method. We like jQuery selector callbacks because they’re convenient–no code is executed unless a match is found. It’s a built-in if-statement! But querying the DOM is an expensive operation which can slow down the loading of your site and drive away users. It’s also lazy coding.

To counter this, consider offloading initializations to a single, generic callback that will minimize the performance hit while retaining the convenience of selector-based initialization. That’s a mouthful! In other words, let’s create a master function to call all the others for us–a `MapReduce()` for the front-end, if you will. Here’s how.

## Example Scenario

Let’s say we’re making a page with two dynamic blocks of content: *Recent Tweets* and *Upcoming Events*.

[codepen_embed height="430" theme_id="0" slug_hash="djIzv" default_tab="result" user="njbair"]See the Pen djIzv by Nick Bair (@njbair) on CodePen.[/codepen_embed]

Normally, the static page markup might look something like this (the class names match the selectors below):

<div class="recent-tweets"></div>
<div class="upcoming-events"></div>

And the JavaScript:

$('.recent-tweets').each(function() {
    // load some ajax or something
});

$('.upcoming-events').each(function() {
    // load some ajax or something
});

So we have two selectors, `recent-tweets` and `upcoming-events`, that essentially do the same thing–query the DOM tree for an element with the given class name and, if found, perform the specified function. Not a big deal when dealing with only two selectors as in this example, but as we add more and more dynamic blocks the performance hit adds up quickly, especially on low-powered devices like smartphones.

What if we could boil this down to a single selector? Like so:

<div class="init"></div>
<div class="init"></div>

Then we could initialize both blocks with `$('.init').each()`. That would be cool! But we need some way to tell the function what code to run on each block. Enter the HTML5 `data-` attribute:

<div class="init" data-init="loadRecentTweets"></div>
<div class="init" data-init="loadUpcomingEvents"></div>

We’ve created a `data-init` attribute which contains a unique identifier for each block. In our JavaScript, we use those identifiers as the names of functions. Then we replace both selectors with a single, master selector. Observe:

function loadRecentTweets(theElement) {
    // load recent tweets from the local REST API
    // display the data inside the container element
}

function loadUpcomingEvents(theElement) {
    // load upcoming events from the local REST API
    // display the data inside the container element
}

// master selector
$('.init').each(function() {
    var initFunction = $(this).data('init');

    // this funny-looking line calls the value of `data-init` as
    // a function, and passes the element's DOM node as an argument.
    window[initFunction](this);
});

This is great, for several reasons:

– **Readability** – A cursory glance at the HTML markup tells us exactly which function affects it.
– **Portability** – This code can easily be moved or copied elsewhere without any concern about side-effects.
– **Abstraction** – We’ve minimized repetition, so changes are easier to make.

But those two functions still look awfully similar. It would be nice to combine them into something more generic…

## Putting data- to Work

Since our master selector callback passes the DOM node itself as an argument, our init functions can read in whatever attributes we want. That means we can do something like this:

function loadTheCode(theElement) {
    var ajaxUrl = $(theElement).data('url');

    // load some data from the specified URL
    // display the data inside the container element
}

Now our markup would look like this (the URLs are hypothetical):

<div class="init" data-init="loadTheCode" data-url="/api/tweets/recent"></div>
<div class="init" data-init="loadTheCode" data-url="/api/events/upcoming"></div>

This is getting awesomer. Later, when we decide we want a list of only certain tweets on a different page, all we have to do is add one line of markup (again, the URL is hypothetical):

<div class="init" data-init="loadTheCode" data-url="/api/tweets/search?q=javascript"></div>

We just made more AJAX happen without even touching our JavaScript!

## Infinite Possibilities

Using `data-` attributes, we can pass our loader function as many “arguments” as we want. How about `data-refresh` to specify an auto-refresh interval, or `data-cache` to control AJAX caching options? Together with a well-tuned REST API this method is virtually limitless.


Node.js & less on Media Temple Grid

Maybe I’m crazy, but I wanted to install Node.js in a (gs) shared hosting environment so that I could compile and save small changes to my LESS-based stylesheets via SSH without having to maintain a local working copy. That’s right. I said it. I change files in a live environment sometimes, and there’s nothing wrong with that! Anyone who opines to the contrary in the comments below will be swiftly dealt with.

This how-to is based on the great work of Ian Tearle and his commenters.

### Preparation

Get your Media Temple Grid Service site number. If you don’t know your site number, see Media Temple’s support documentation. For this tutorial, we’ll use **123456** as an example.

First, let’s prepare the shell environment to recognize executables in the new directories we’re going to create. Create or edit *~/.bash_profile* and add the following lines:

PATH="$PATH:/home/123456/data/opt/bin:/home/123456/data/node_modules/.bin"
export PATH

Save the file and exit, then source *~/.bash_profile* to reflect the changes without logging out and back in:

$ . ~/.bash_profile

### Building Node

Now let’s grab Node.js from GitHub:

$ git clone https://github.com/nodejs/node.git
$ cd node
$ mkdir -p /home/123456/data/opt
$ ./configure --prefix=/home/123456/data/opt/
$ make && make install

If all goes well (that make can take a while), you should now have a fully-functioning Node.js installation. Test it out by typing `node -v`. If you see a version number and not an error, you’re in business!

### But wait! There’s less!

Now it’s time to download and install **less**.

$ cd ~/data/
$ npm update
$ npm install less

This will install the **lessc** binary in the *~/data/node_modules/.bin* directory which we added to our $PATH. The installation may fail. If it does, just try running it again a few times until it works.

If all goes as well for you as it did for me, you should now be able to use **lessc** from anywhere within your jailed shell environment!


Getting .bashrc to work on MediaTemple’s Grid Service

I’m spoiled, guys and gals. I can’t work without my dotfiles. Watching me work in a vanilla bash shell is excruciating, like watching someone walk with those drunk-driving goggles–fumbling and stumbling through an environment completely devoid of the shortcuts and settings upon which I’ve come to rely so heavily. Even for something as simple as listing and switching directories:

njbair@n16 ~ $ ll
-bash: ll: command not found
njbair@n16 ~ $ ls -l
lrwxrwxrwx 1 njbair njbair 10 Sep 20 05:38 data -> ../../data/
lrwxrwxrwx 1 njbair njbair 13 Sep 20 05:38 domains -> ../../domains/
njbair@n16 ~ $ cd domains
njbair@n16 domains $ ll
-bash: ll: command not found
njbair@n16 domains $ ls -l
drwxr-xr-x 4 njbair www-data 5 Sep 20 05:38
lrwxrwxrwx 1 njbair njbair 6 Sep 20 05:38 ->
njbair@n16 $ ll
-bash: ll: command not found
njbair@n16 $ kill me now

Fortunately, `kill me now` was not installed on that machine.

I’ve been a long-time customer of MediaTemple’s dedicated hosting packages, but only recently set up my first **(gs)** shared hosting account. I’m really happy with the whole service so far. But after enabling SSH for my account, I hit a snag while installing my dotfiles: *.bashrc* wasn’t working. I could manually source the file, but it wasn’t loading upon login. Fortunately, the fix was pretty easy.

### The Fix

So, you’ve set up SSH access on a MediaTemple Grid Service account, but can’t get your *.bashrc* to load? Try this:

echo "if [ -f ~/.bashrc ]; then source ~/.bashrc; fi" >> ~/.bash_profile

Then logout and log back in.

### What just happened?

MediaTemple’s Grid SSH access doesn’t read *.bashrc* by default. This is because of political pressures relating to the high-stakes game of world diplomacy and international intrigue. Or maybe there’s a reasonable technical explanation.

Hope this helps!


HOWTO: Fix “Keyword title is not registered” error in Joomla 3.0

If you’re working with Joomla using an HTML5 template and you try to validate your site using the W3C Markup Validation Service, you may find yourself hit with the following validation error:

*Line 7, Column 44*: **Bad value title for attribute name on element meta: Keyword title is not registered.**

  <meta name="title" content="Who We Are" />

*Syntax of metadata name*: A metadata name listed in the HTML specification or listed in the WHATWG wiki. You can register metadata names on the WHATWG wiki yourself.

The **title** meta keyword is a carryover from older versions of Joomla (pre-HTML5), back when the HTML specification did not restrict meta keywords as they do now. While it’s true that the error notice provides instructions on how to register a metadata name yourself, there is a quick fix that gets your site to validate and avoids involvement in deliberations over an emerging spec, and it can be applied **without** modifying Joomla core files.

To fix the error, simply add the following line to the opening PHP code block in your template’s **index.php**, right before the closing `?>` tag:

// remove "title" metadata keyword because it breaks HTML5 validation
$doc->setMetaData('title', FALSE);

This line simply unsets the keyword so that the meta tag will not be rendered by the template engine. It assumes that your template code has already set the `$doc` variable using `JFactory::getDocument()`. If your Document object is assigned to a different variable, use that instead.


Displaying PHP errors with Xdebug in Ubuntu Server

Ubuntu Server packages are generally pretty well-configured right out of the box–usually requiring little or no configuration for simple operation. It’s one of the reasons why, despite my preference toward Arch Linux for the desktop, I’ve long advocated Ubuntu as a great starting point for a LAMP development server. Yet, on occasion, a package ships with a configuration that needs some work in order to be useful. Xdebug is such a package.

Xdebug’s most immediately helpful feature is the display of stack traces for all PHP errors. (Actually, it does a whole lot more than that, but that’s beyond the scope of this post.) Stack traces appear in place of the ordinary PHP error notices, so they require that the PHP config option **display_errors** is enabled. But Ubuntu disables this option by default.

This is actually a sane default, because error notices may potentially expose security holes or other sensitive data, and thus should be suppressed in production environments. But installing Xdebug implies that the target is a development and/or testing environment (for a lot of reasons, not the least of which is Xdebug’s non-trivial processing overhead). So it makes sense that display_errors should be enabled.

This is a simple fix. Edit */etc/php5/apache2/php.ini*, locate the **display_errors** option, and change its value from **Off** to **On**. The final result should look like this:

display_errors = On

Alternatively, you can add that line to the end of the Xdebug config file located in */etc/php5/conf.d/*. This allows you to enable/disable the display of errors at the same time as you enable/disable the Xdebug module. (This can be done by invoking either of the provided scripts **php5enmod** and **php5dismod** and reloading Apache.)

I have filed a bug report to notify the devs about this issue. I hope this post and an eventual bug fix will save other folks some frustration.


File Access Bug in LAMP Virtual Machine

Using a VM as a web development test server is a great way to optimize workstation resources. My test VM is an Ubuntu installation with a standard Apache/MySQL/PHP stack. I use VirtualBox shared folders to grant the VM access to my development directory.

For some time I have been wrestling with an irritating bug that crops up when using shared folders with Apache: when a new file is created, and sometimes when existing files are modified, Apache fails to recognize the change. Any attempt to access the file via HTTP will result in a 404 error. Some searching led me to this VirtualBox bug report and the solution.

By default, Apache leverages the kernel’s *sendfile* mechanism to deliver resources to HTTP clients in order to optimize performance. But when a small file on a network share is altered, sometimes sendfile doesn’t bother to check its length or contents. Most of the time this is not an issue for production servers, which rarely use network storage and which don’t experience the frequent file changes of a development server. But in this case the default behavior needs to be changed.

To turn off sendfile in Apache, add the following line to the Apache server configuration (i.e. **/etc/apache2/apache2.conf**):

EnableSendfile Off

Restart Apache to apply the new configuration.

### Related Links
* vboxsf and small files
* Apache EnableSendfile Directive


Wake-on-LAN Wrapper Script Revisited

Almost a year ago, I posted this article describing a simple, CLI-based way to trigger the Wake-on-LAN *magic packet* on your network to wake a sleeping machine. Trouble is, that script is showing its age, taking advantage of some deprecated utilities to perform its function. This updated script performs the same task using a more modern toolchain.


#!/bin/sh

# find the IP address in the output of `nslookup`
ip=`nslookup imac | tail -2 | head -1 | awk '{print $2}'`

# find the MAC address in the output of `ip neigh`
mac=`ip neigh | grep -i -m1 $ip | awk '{print $5}'`

# send the magic packet
wol -h $ip $mac

The **ip** utility replaces **ifconfig**, **arp**, and many other utilities in most current Linux distributions.


MySQL: Copying Column Data Between Tables

Everyone who administrates databases, large or small, will eventually encounter the need to copy data from one table to another. This often occurs in application development when tweaking data model designs, copying data stored in an existing table into a newly-created table. In such cases it is often also necessary to update the existing table with the row ID of the associated data in the new table. Consider the following example:

Joe maintains a database for customer information. His database includes a table for his customers’ personal information (named *Personal*) as well as a table for their business information (named *Business*). Both tables include many similar columns (*street_address*, *city*, *state*, *zip*), and Joe realizes that it is better to contain this info in a new table called *Addresses*. He decides to create a new column in both the *Personal* and *Business* tables which will store the ID of the corresponding row in the new *Addresses* table.

Now, Joe has to figure out how to copy the data from the existing tables into the new table. Being an application developer used to control flow structures, he conceives a loop that parses each row, copies the data from A to B, then updates the old row with the last insert ID. He sets out to find the syntax to perform loops in MySQL, but he comes away confused and dejected. That’s because although MySQL provides loop functionality through cursors, Joe has learned that cursors are inefficient, difficult to implement, and inflexible. Joe decides that there must be a better way to move his data.

Joe calls his database admin friend, Pete, and explains what he wants to do. After spending five minutes nitpicking the semantics of Joe’s explanation, Pete offers a clever solution:

> “Just copy the existing row ID into a throwaway column in the new table, then perform a multi-table update to update the new ID field in the old table. Pfft…developers.”

A good developer, fluent in the best practices of procedural programming, would consider this approach to be a hack. But in the world of relational databases, it’s how to get things done. Here’s the code:

# create a new temporary column to store the existing row ID
ALTER TABLE Addresses ADD old_id INT(11);

# copy the data, carrying the existing row ID along
INSERT INTO Addresses (street_address, city, state, zip, old_id)
SELECT street_address, city, state, zip, id FROM Personal;

# match the rows using the ID stored in the temporary column,
# then update the new_id field in the old table
UPDATE Addresses, Personal
SET Personal.new_id = Addresses.id
WHERE Personal.id = Addresses.old_id;

# delete the temporary column
ALTER TABLE Addresses DROP old_id;

To copy rows from both existing tables at once, add a UNION clause (UNION ALL, so that identical addresses in the two tables aren’t silently deduplicated):

# copy the data from both tables
INSERT INTO Addresses (street_address, city, state, zip)
SELECT street_address, city, state, zip FROM Personal
UNION ALL
SELECT street_address, city, state, zip FROM Business;

Obviously, these queries may be adapted to include transformations, joins, or any other necessary operations.
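If you’d like to experiment with Pete’s trick without a MySQL server handy, here is a runnable sketch using Python’s built-in sqlite3 module. The table names echo Joe’s, but the schema is a stripped-down stand-in, and SQLite has no multi-table UPDATE, so the last step uses a correlated subquery instead of MySQL’s join-style UPDATE:

```python
import sqlite3

# in-memory stand-ins for Joe's tables; the throwaway old_id column
# is baked into the Addresses schema here for brevity
con = sqlite3.connect(':memory:')
cur = con.cursor()
cur.execute('CREATE TABLE Personal (id INTEGER PRIMARY KEY, city TEXT, new_id INTEGER)')
cur.execute('CREATE TABLE Addresses (id INTEGER PRIMARY KEY, city TEXT, old_id INTEGER)')
cur.executemany('INSERT INTO Personal (city) VALUES (?)', [('Akron',), ('Dayton',)])

# copy the data, carrying the existing row ID along
cur.execute('INSERT INTO Addresses (city, old_id) SELECT city, id FROM Personal')

# point each old row at its corresponding new address row
cur.execute('UPDATE Personal SET new_id = '
            '(SELECT id FROM Addresses WHERE Addresses.old_id = Personal.id)')
```

After this runs, each row in Personal carries the ID of its counterpart in Addresses, exactly as in the MySQL version above.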

So, the big lesson that we learned along with Joe today is that when application developers get their hands dirty in the database, it often requires a retooling of the way we think, not just a syntax reference.


Simple Wake-on-LAN Wrapper Script

Our home network includes a number of Linux-based systems and one Apple iMac. Because the Mac hosts all of our family photos, and because printing from a Mac to a Linux printer is a headache, it made the most sense for us to connect the printer directly to the Mac. However, this creates an issue when printing from one of the other computers: if the Mac is asleep, the shared printer does not respond to requests.

The workaround is simple: get up, walk over to the Mac, and hit the space bar or click the mouse to wake up the machine before trying to print. But we don’t want to get up, so we’ll issue a Wake-on-LAN packet to the Mac instead, using **wakeonlan**.

**wakeonlan** is a simple Perl script written by Jose Pedro Oliveira and maintained by Ico Doornekamp. It is available from the repositories for most Linux distributions. The script broadcasts “magic packets” over the network that are read by WOL-compatible interfaces. If the interface recognizes its own MAC address in the magic packet, the interface signals the host machine to “wake up.”
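Incidentally, the magic packet format is simple enough to build by hand: six 0xFF bytes followed by the target’s MAC address repeated sixteen times, typically sent as a UDP broadcast. A quick illustrative sketch (not wakeonlan’s actual code):

```python
def magic_packet(mac):
    """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(':', ''))
    if len(mac_bytes) != 6:
        raise ValueError('expected a 6-byte MAC address')
    return b'\xff' * 6 + mac_bytes * 16

pkt = magic_packet('c8:2a:14:ff:ff:ff')
# send pkt over a UDP socket to the broadcast address (commonly port 9)
```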

Sure, it’s a neat trick, but we have to know the destination machine’s MAC address. So we can either make MAC address flash cards for our LAN, or we can write a wrapper script to find the MAC address for us and issue the packet.

### Finding the MAC Address ###

To find the MAC address we can use **arp**, a standard Linux utility, to read the local host’s ARP cache. The ARP cache is an important part of any Ethernet network–it stores a table of every known host on the network by IP and MAC address. Essentially, it serves as the local host’s “address book,” making sure requests to a given IP address are sent to the appropriate network interface. Most of the time, this translation is done in the background while the user remains happily oblivious. But Wake-on-LAN packets are special, and they need our help to get to where they’re going.

The output of **arp** looks something like this:

$ arp
Address          HWtype  HWaddress          Flags Mask  Iface
server           ether   ab:cd:ef:01:23:45  C           eth0
bobs-machine     ether   11:22:33:44:55:66  C           eth0
arpeggio         ether   56:45:34:23:12:01  C           eth0
My-iMac.domain   ether   c8:2a:14:ff:ff:ff  C           eth0
My-iMac.local    ether   e4:ce:8f:00:00:00  C           eth0

We’re interested in the third column–the MAC address–of the machine called `My-iMac`. Because the iMac’s Ethernet and Wi-Fi interfaces are both enabled, the same hostname shows up twice (albeit with different domains). We’re only interested in the Ethernet interface, which, fortunately, shows up first (the output of **arp** does not tell us which is which–in my case, I know this because I’ve set up static DHCP leases on my home network–finding this out may require some digging or trial and error). This means that we can write our script to search for the first line matching the hostname, then to extract the third column from that line. We accomplish this using **grep** and **awk**, respectively.

$ arp | grep -i -m1 imac
My-iMac.domain ether c8:2a:14:ff:ff:ff C eth0
$ arp | grep -i -m1 imac | awk '{print $3}'
c8:2a:14:ff:ff:ff

For this discussion we’ll bypass a detailed explanation of the **grep** and **awk** commands. See the Linux man pages for details.

Now that we have a command line string that finds the right MAC address, we can write our script.

### The Script ###

Below is our Wake-on-LAN wrapper script, saved in a file named **wol**:


#!/bin/sh

MACADDR=`arp | grep -i -m1 $1 | awk '{print $3}'`
wakeonlan $MACADDR

All we’ve done is taken our command line string and replaced the hostname search string (`imac`) with the variable `$1`, which is interpreted by the shell as the first argument passed to the script by the user. This way we can use the same script to send WOL packets to any machine on the network. Executing the script looks like this:

$ ./wol imac

There. Wasn’t that easier than getting up and hitting the space bar?

### Further Reading ###
* Wake-on-LAN at Wikipedia
* ARP cache: What is it and how can it help you?