Let’s Make Wildcard Certificates with Certbot, Docker, and Route53

In case you haven’t heard, Let’s Encrypt now supports wildcard certificates as a feature of the new ACME v2 protocol. However, current client support is still somewhat limited, as the Let’s Encrypt CA requires domain validation via DNS-01 challenge. To further complicate things, DNS-01 requires programmatic access to your nameservers. But let’s assume you are already using Route53 and you’re looking for the simplest way to begin issuing wildcard certificates for your hosted zones. You’ll need an up-to-date ACME client, such as the latest version of Certbot. Chances are your server distro is not that bleeding-edge. That’s where Docker comes in.

Let’s take a look at how to quickly set up a Docker container for Certbot to issue wildcard certificates via Let’s Encrypt.

What You’ll Need

You’ll need a few things to get started:

  • A domain name set up to use Amazon Route53 nameservers.
  • A set of AWS credentials configured with the appropriate Route53 permissions (details below).
  • A functioning Docker instance on your web server.

Create the Docker Script

Let’s start by creating a working directory for our Docker image:

$ mkdir ~/certbot-docker
$ cd ~/certbot-docker

Now throw the following code into a script file inside ~/certbot-docker/


# replace example.com with your own domain
sudo docker run -it --rm --name certbot \
--env AWS_CONFIG_FILE=/etc/aws-config \
-v "${PWD}/aws-config:/etc/aws-config" \
-v "/etc/letsencrypt:/etc/letsencrypt" \
-v "/var/lib/letsencrypt:/var/lib/letsencrypt" \
certbot/dns-route53 certonly \
--dns-route53 \
--server https://acme-v02.api.letsencrypt.org/directory \
-d "example.com" -d "*.example.com"

Notice the bind mount for the file ${PWD}/aws-config. Next, we need to create that file using your AWS API credentials.

Provide Your AWS API Credentials

The ~/certbot-docker/aws-config file should look like this:
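A minimal file in the standard AWS config format will do. The key values below are the placeholder credentials from Amazon's own documentation; substitute your own:

```ini
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```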


If you haven’t set up the API credentials yet, log in to the AWS IAM console and create a group with the following policy (these are the permissions the certbot dns-route53 plugin documentation calls for):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:GetChange"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": "route53:ChangeResourceRecordSets",
      "Resource": "arn:aws:route53:::hostedzone/*"
    }
  ]
}

Special thanks to the author of this forum post for pointing me in the right direction.


Wake-on-LAN Wrapper Script Revisited

Almost a year ago, I posted this article describing a simple, CLI-based way to trigger the Wake-on-LAN *magic packet* on your network to wake a sleeping machine. Trouble is, that script is showing its age, taking advantage of some deprecated utilities to perform its function. This updated script performs the same task using a more modern toolchain.


#!/bin/sh

# find the IP address in the output of `nslookup`
ip=$(nslookup imac | tail -2 | head -1 | awk '{print $2}')

# find the MAC address in the output of `ip neigh`
mac=$(ip neigh | grep -i -m1 "$ip" | awk '{print $5}')

# send the magic packet (-i directs it at the target's IP)
wol -i "$ip" "$mac"

The **ip** utility replaces **ifconfig**, **arp**, and many other utilities in most current Linux distributions.
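For the curious, the *magic packet* itself is trivially simple: six 0xFF bytes followed by the target's MAC address repeated sixteen times, usually sent as a UDP broadcast. Here's an illustrative Python sketch (the MAC address below is made up):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", ""))
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet over UDP (port 9, the discard port, by convention)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

packet = magic_packet("a8:20:66:01:02:03")
print(len(packet))  # a valid magic packet is always 102 bytes
```

The `wol` utility in the script above does exactly this, plus the nicety of command-line parsing.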


MySQL: Copying Column Data Between Tables

Everyone who administers databases, large or small, will eventually encounter the need to copy data from one table to another. This often occurs in application development when tweaking data model designs, copying data stored in an existing table into a newly created table. In such cases it is often also necessary to update the existing table with the row ID of the associated data in the new table. Consider the following example:

Joe maintains a database for customer information. His database includes a table for his customers’ personal information (named *Personal*) as well as a table for their business information (named *Business*). Both tables include many similar columns (*street_address*, *city*, *state*, *zip*), and Joe realizes that it is better to consolidate this info in a new table called *Addresses*. He decides to create a new column in both the *Personal* and *Business* tables which will store the ID of the corresponding row in the new *Addresses* table.

Now, Joe has to figure out how to copy the data from the existing tables into the new table. Being an application developer used to control flow structures, he conceives a loop that parses each row, copies the data from A to B, then updates the old row with the last insert ID. He sets out to find the syntax to perform loops in MySQL, but he comes away confused and dejected. That’s because although MySQL provides loop functionality through cursors, Joe has learned that cursors are inefficient, difficult to implement, and inflexible. Joe decides that there must be a better way to move his data.

Joe calls his database admin friend, Pete, and explains what he wants to do. After spending five minutes nitpicking the semantics of Joe’s explanation, Pete offers a clever solution:

> “Just copy the existing row ID into a throwaway column in the new table, then perform a multi-table update to update the new ID field in the old table. Pfft…developers.”

A good developer, fluent in the best practices of procedural programming, would consider this approach to be a hack. But in the world of relational databases, it’s how to get things done. Here’s the code:

# create a new temporary column to store the existing row ID
ALTER TABLE Addresses ADD old_id INT(11);

# copy the data, carrying the existing row ID along
INSERT INTO Addresses (street_address, city, state, zip, old_id)
SELECT street_address, city, state, zip, id FROM Personal;

# match the rows using the ID stored in the temporary column
# then update the new_id field in the old table
UPDATE Addresses, Personal
SET Personal.new_id = Addresses.id
WHERE Personal.id = Addresses.old_id;

# delete the temporary column
ALTER TABLE Addresses DROP old_id;
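To see the whole pattern run end to end, here’s a throwaway sketch using SQLite from Python. The table and column names mirror Joe’s schema, the data is invented, and since SQLite doesn’t support MySQL’s multi-table UPDATE syntax, a correlated subquery stands in for that step:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Joe's original table, already sporting the new new_id column
cur.execute("""CREATE TABLE Personal (id INTEGER PRIMARY KEY,
               street_address TEXT, city TEXT, state TEXT, zip TEXT,
               new_id INTEGER)""")
cur.execute("""INSERT INTO Personal (street_address, city, state, zip)
               VALUES ('1 Main St', 'Springfield', 'IL', '62701')""")

# the new table, including the throwaway old_id column
cur.execute("""CREATE TABLE Addresses (id INTEGER PRIMARY KEY,
               street_address TEXT, city TEXT, state TEXT, zip TEXT,
               old_id INTEGER)""")

# copy the data, carrying the existing row ID along
cur.execute("""INSERT INTO Addresses (street_address, city, state, zip, old_id)
               SELECT street_address, city, state, zip, id FROM Personal""")

# back-fill Personal.new_id (correlated subquery in place of MySQL's multi-table UPDATE)
cur.execute("""UPDATE Personal SET new_id =
               (SELECT id FROM Addresses WHERE Addresses.old_id = Personal.id)""")
conn.commit()

new_id = cur.execute("SELECT new_id FROM Personal WHERE id = 1").fetchone()[0]
print(new_id)  # the Personal row now points at its Addresses row
```

The final ALTER TABLE ... DROP step is omitted here only because older SQLite versions lack DROP COLUMN; in MySQL it works as shown above.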

To copy rows from both existing tables at once, add a UNION clause (UNION ALL here, so that duplicate addresses survive the copy):

# copy the data from both tables in one statement
INSERT INTO Addresses (street_address, city, state, zip)
SELECT street_address, city, state, zip FROM Personal
UNION ALL
SELECT street_address, city, state, zip FROM Business;

Obviously, these queries may be adapted to include transformations, joins, or any other necessary operations.

So, the big lesson that we learned along with Joe today is that when application developers get their hands dirty in the database, it often requires a retooling of the way we think, not just a syntax reference.