Node.js & less on Media Temple Grid

Maybe I’m crazy, but I wanted to install Node.js in a (gs) shared hosting environment so that I could compile and save small changes to my LESS-based stylesheets via SSH without having to maintain a local working copy. That’s right. I said it. I change files in a live environment sometimes, and there’s nothing wrong with that! Anyone who opines to the contrary in the comments below will be swiftly dealt with.

This how-to is based on the great work of [Ian Tearle and his commenters](

### Preparation

Get your Media Temple Grid Service site number. If you don’t know your site number, see [this Media Temple support page]( For this tutorial, we’ll use **123456** as an example.

First, let’s prepare the shell environment to recognize executables in the new directories we’re going to create. Create or edit *~/.bash_profile* and add the following lines:

PATH=$PATH:/home/123456/data/opt/bin:/home/123456/data/node_modules/.bin
export PATH

Save the file and exit, then source *~/.bash_profile* to reflect the changes without logging out and back in:

$ . ~/.bash_profile

### Building Node

Now let’s [grab Node.js from GitHub](

$ git clone
$ cd node
$ mkdir -p /home/123456/data/opt
$ ./configure --prefix=/home/123456/data/opt/
$ make && make install

If all goes well (that make can take a while), you should now have a fully functioning Node.js installation. Test it out by typing `node -v`. If you see a version number and not an error, you’re in business!

### But wait! There’s less!

Now it’s time to download and install **less**.

$ cd ~/data/
$ npm update
$ npm install less

This will install the **lessc** binary in the *~/data/node_modules/.bin* directory, which we added to our $PATH earlier. The installation may fail; if it does, just try running it again a few times until it works.

If all goes as well for you as it did for me, you should now be able to use **lessc** from anywhere within your jailed shell environment!
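For example, compiling a stylesheet in place might look like this (the paths are hypothetical; substitute your own domain and stylesheet locations):

```
$ cd ~/domains/example.com/html/css
$ lessc style.less > style.css
```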


Getting .bashrc to work on MediaTemple’s Grid Service

I’m spoiled, guys and gals. I can’t work without [my dotfiles]( Watching me work in a vanilla bash shell is excruciating, like watching someone walk with those drunk-driving goggles–fumbling and stumbling through an environment completely devoid of the shortcuts and settings upon which I’ve come to rely so heavily. Even for something as simple as listing and switching directories:

njbair@n16 ~ $ ll
-bash: ll: command not found
njbair@n16 ~ $ ls -l
lrwxrwxrwx 1 njbair njbair 10 Sep 20 05:38 data -> ../../data/
lrwxrwxrwx 1 njbair njbair 13 Sep 20 05:38 domains -> ../../domains/
njbair@n16 ~ $ cd domains
njbair@n16 domains $ ll
-bash: ll: command not found
njbair@n16 domains $ ls -l
drwxr-xr-x 4 njbair www-data 5 Sep 20 05:38
lrwxrwxrwx 1 njbair njbair 6 Sep 20 05:38 ->
njbair@n16 $ ll
-bash: ll: command not found
njbair@n16 $ kill me now

Fortunately, `kill me now` was not installed on that machine.

I’ve been a long-time customer of MediaTemple’s dedicated hosting packages, but only recently set up my first **(gs)** shared hosting account. I’m really happy with the whole service so far. But after enabling SSH for my account, I hit a snag while installing my dotfiles: *.bashrc* wasn’t working. I could manually source the file, but it wasn’t loading upon login. Fortunately, the fix was pretty easy.

### The Fix

So, you’ve set up SSH access on a MediaTemple Grid Service account, but can’t get your *.bashrc* to load? Try this:

echo "if [ -f ~/.bashrc ]; then source ~/.bashrc; fi" >> ~/.bash_profile

Then logout and log back in.

### What just happened?

MediaTemple’s Grid SSH access doesn’t read *.bashrc* by default. This is because of political pressures relating to the high-stakes game of world diplomacy and international intrigue. Or maybe [there’s a reasonable technical explanation](

Hope this helps!

Administration Development Software

Displaying PHP errors with Xdebug in Ubuntu Server

Ubuntu Server packages are generally pretty well-configured right out of the box–usually requiring little or no configuration for simple operation. It’s one of the reasons why, despite my preference toward Arch Linux for the desktop, I’ve long advocated Ubuntu as a great starting point for a LAMP development server. Yet, on occasion, a package ships with a configuration that needs some work in order to be useful. Xdebug is such a package.

Xdebug’s most immediately helpful feature is the display of stack traces for all PHP errors. (Actually, it does a whole lot more than that, but that’s beyond the scope of this post.) Stack traces appear in place of the ordinary PHP error notices, so they require that the PHP config option **display_errors** is enabled. But Ubuntu disables this option by default.

This is actually a sane default, because error notices may potentially expose security holes or other sensitive data, and thus should be suppressed in production environments. But installing Xdebug implies that the target is a development and/or testing environment (for a lot of reasons, not the least of which is Xdebug’s non-trivial processing overhead). So it makes sense that display_errors should be enabled.

This is a simple fix. Edit */etc/php5/apache2/php.ini*, locate the **display_errors** option, and change its value from **Off** to **On**. The final result should look like this:

display_errors = On

Alternatively, you can add that line to the end of the Xdebug config file located in */etc/php5/conf.d/*. This allows you to enable/disable the display of errors at the same time as you enable/disable the Xdebug module. (This can be done by invoking either of the provided scripts **php5enmod** and **php5dismod** and reloading Apache.)
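As a sketch, the edited file might end up looking something like this (the filename and the `zend_extension` path vary by system; the line below stands in for whatever your package shipped):

```ini
; /etc/php5/conf.d/xdebug.ini -- existing contents may differ
zend_extension=xdebug.so
display_errors = On
```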

I have filed [a bug report]( to notify the devs about this issue. I hope this post and an eventual bug fix will save other folks some frustration.


Apache and SSL – The Easy Way

It’s no secret–SSL is confusing. Creating and signing certificates is a convoluted process, especially from the command line. Fortunately, Debian-based systems have an easy way for Apache users to create, sign, and install their own SSL certs. This tutorial assumes that Apache is already installed with the default configuration.

### Configure SSL ###

Step one is to configure Apache to enable `mod_ssl`:

# a2enmod ssl
Enabling module ssl.
See /usr/share/doc/apache2.2-common/README.Debian.gz on how to configure SSL and create self-signed certificates.
Run '/etc/init.d/apache2 restart' to activate new configuration!

The documentation referred to by that script’s output explains that, on Debian systems, an SSL certificate is installed automatically when the `ssl-cert` package is installed. It also outlines the process of creating a new certificate (useful when using name-based virtual hosts). From the manual:

> If you install the ssl-cert package, a self-signed certificate will be
> automatically created using the hostname currently configured on your
> computer. You can recreate that certificate (e.g. after you have
> changed /etc/hosts or DNS to give the correct hostname) as user root
> with:
> make-ssl-cert generate-default-snakeoil --force-overwrite
> To create more certificates with different host names, you can use
> make-ssl-cert /usr/share/ssl-cert/ssleay.cnf /path/to/cert-file.crt
> This will ask you for the hostname and place both SSL key and
> certificate in the file /path/to/cert-file.crt . Use this file with the
> SSLCertificateFile directive in the Apache config (you don’t need the
> SSLCertificateKeyFile in this case as it also contains the key). The
> file /path/to/cert-file.crt should only be readable by root. A good
> directory to use for the additional certificates/keys is
> /etc/ssl/private .

So, let’s create a new virtual host–one which can only be accessed via SSL.

Use the syntax from the manual to create a new certificate:

# make-ssl-cert /usr/share/ssl-cert/ssleay.cnf /etc/ssl/private/securehost.crt

When prompted, enter a hostname for the virtual host. For this example, the hostname is `securehost`.

### Create the Virtual Host ###

Once SSL is configured, it’s time to create the virtual host which will be accessed via SSL. Create the file */etc/apache2/sites-available/securehost* containing the following:

<VirtualHost *:443>
    ServerName securehost

    ...

    SSLEngine on
    SSLCertificateFile /etc/ssl/private/securehost.crt
</VirtualHost>

The above example assumes knowledge on how to configure a virtual host. Much of the necessary configuration has been omitted and replaced with an ellipsis.

It may be desirable to redirect port 80 traffic addressed to this hostname so that visitors do not have to explicitly designate the `https://` protocol. To do this, rename the virtual host file you just created to *securehost-ssl*, then create a new file called *securehost* containing the following:

<VirtualHost *:80>
    ServerName securehost
    Redirect / https://securehost/
</VirtualHost>

### Configure Apache ###

All that’s left is to configure apache to recognize the new virtual hosts. The first step is to enable them:

# a2ensite securehost securehost-ssl
Enabling site securehost.
Enabling site securehost-ssl.
Run '/etc/init.d/apache2 reload' to activate new configuration!

Before reloading Apache, it may be necessary to enable name-based virtual hosting for ports 80 and 443. In Debian, this is done in the file */etc/apache2/ports.conf*. Look for the lines that say `Listen 80` and `Listen 443`. Add the following options:

NameVirtualHost *:80
Listen 80

NameVirtualHost *:443
Listen 443

Finally, reload Apache to reflect the new configuration:

# service apache2 reload

### Try It ###

Now try navigating to https://securehost. This will most likely result in a certificate warning, which can be ignored/bypassed. Assuming that the document root contains an index page, it should be displayed here.

Once successful, try removing the **s** in the protocol to test the redirection from port 80. If redirection is working properly, the location bar in the browser should update to include the **s** again and the index page should again be displayed.

### Other Tips ###

It is possible for the same hostname to serve completely different sites based on the port specified. For example, a corporate site may display generic marketing info for visitors to port 80, but employees know that they need to use `https://` to access the login area. This keeps visitors from being presented with irrelevant employee prompts, and ensures that all employees are logging in securely.


Shred Empty Drive Space

If you are familiar with the **shred** command, you know it is an easy way to make sure sensitive data is *really* deleted. Shred overwrites a file with random data before deleting it, so that the original data cannot be recovered. Shred works by overwriting the data *in place*, or over top of the original file. But what if the file has already been deleted?

One way to destroy the data is to overwrite all unused space on the drive (or partition) with random data. The simplest way to do this is to invoke the **dd** command to create a new file full of random data:

dd if=/dev/urandom of=somefile.tmp bs=1024

The above command will run until the drive (or partition) runs out of space, writing random bits to a file called *somefile.tmp*, 1 kilobyte at a time.
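To get a feel for the command without filling a drive, you can cap the run with dd’s standard `count=` operand (the filename here is arbitrary); this writes exactly 1 MiB and stops:

```shell
# Write 1024 blocks of 1024 bytes (1 MiB) of random data, then clean up.
dd if=/dev/urandom of=somefile.tmp bs=1024 count=1024
ls -l somefile.tmp   # shows a 1048576-byte file
rm somefile.tmp
```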

Depending on the amount of free space, this could run for a long time. Also, depending on how you’ve configured your partitions and mount points, it may cause stability issues as the drive approaches full capacity. If you plan on running this command and walking away, you may want to append a command to remove the file when finished, to prevent crashes or errors due to low storage space:

dd if=/dev/urandom of=somefile.tmp bs=1024; rm somefile.tmp

If you’re short on time and willing to settle for a less secure method, you can replace */dev/urandom* with */dev/zero*, which should read in data much more quickly:

dd if=/dev/zero of=somefile.tmp bs=1024

This method can help keep your data secure, but it should be used sparingly. Writing to an entire drive is hard on the media, particularly flash-based media such as USB drives and SSDs, which have a limited number of write cycles. Use this method with care. If you have a lot of sensitive data, you may want to consider encrypting your files before writing to disk. But that’s a topic for another post.


SSH From the Inside

### Problem ###

I need SSH access to a particular machine (*schoolsvr*) which is behind a NAT. I only need to enable access from a single client (*homesvr*), which has a public IP address of its own. Both machines are running **sshd**. I can access *homesvr* from a shell on *schoolsvr*, but not vice versa.

If I had admin access on *schoolsvr’s* gateway, I could alter the NAT to forward some unused port (say, 12345) to *schoolsvr:22*, which would allow me to SSH to *schoolsvr* using the gateway’s public IP and port 12345. Unfortunately, I don’t have admin access to the gateway.

How do I enable SSH access to *schoolsvr*?

### Solution ###

The solution is to open an SSH tunnel from *schoolsvr*, which I can access from a shell on *homesvr*. To achieve this, I use the OpenSSH client program’s `-R` option to bind an SSH tunnel to a non-standard port on *homesvr*. Consider the following command:

nick@schoolsvr$ ssh -R 12345:localhost:22 nick@homesvr
nick@homesvr's password:

This command connects to *homesvr* via the standard SSH port (22) and binds that connection to the specified bind port (12345). This port remains bound until the SSH session is terminated. Now all SSH traffic directed to port 12345 on *homesvr* will be forwarded to port 22. When I get back to *homesvr*, I can open a new SSH session with *schoolsvr* using the following command:

nick@homesvr$ ssh -p 12345 localhost
nick@localhost's password:

I’m in! I can terminate this session when I am finished, and the original tunnel remains open until I kill it on *schoolsvr*.

This command can be set up in */etc/inittab* (or an Upstart config file, depending on your system configuration) with the `respawn` action, which would ensure that the tunnel is open upon boot and will be automatically reopened upon termination. Note that such a setup requires the appropriate SSH keys to be configured on both machines, as an init process can’t enter a password.
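As a sketch, such an */etc/inittab* entry might look like the following (the ID field and runlevels are examples, and key-based authentication must already be in place; the `-nNT` flags keep the respawned connection non-interactive):

```
st:2345:respawn:/usr/bin/ssh -nNT -R 12345:localhost:22 nick@homesvr
```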

Because each half of the connection is done using SSH, this setup is completely secure. Of course, anyone with physical access to *schoolsvr* would have full control over the open login to *homesvr*. To prevent this, I can modify the original command as follows:

nick@schoolsvr$ ssh -nNT -R 12345:localhost:22 nick@homesvr &

The `-n` option redirects standard input from */dev/null*. The `-N` option is specifically designed for port-forwarding applications such as this, and tells SSH not to bother preparing a command stream for this connection. The `-T` option tells the remote host not to bother allocating a pseudo-tty for this connection. These three options eliminate the possibility of using this open tunnel to execute any other processes on *schoolsvr*. Additionally, I appended an ampersand (`&`) to send the process to the background. Now I can close the shell in which I ran the command without killing the process.

### Conclusion ###

While not as elegant as a true NAT-based port forwarding solution, reverse SSH tunnels are a fast, secure way to connect two remote machines for general use. When used with discretion, they can be a real time-saver.

What do you think of this solution? Did I leave anything out? Let me know in the comments.


A Good SSD/HDD Partitioning Scheme

An SSD is a great investment. Data loads super fast and there are no moving parts to fail. But SSD storage space is expensive, and most users have a lot to store. A common solution is to install the OS to the SSD, and move personal data (the `/home` directory) to a secondary HDD. While this is the easiest way to take advantage of SSD speed, the results are less than ideal. SSDs have a limit on write cycles so it is wise to minimize disk write operations. My preferred solution offloads some of the more volatile areas of the Linux filesystem to the HDD as well.

The table below shows how I partitioned my 32GB SSD (`/dev/sda`) and my 320GB HDD (`/dev/sdb`):

| Partition | Size  | Type | Mount Point |
|-----------|-------|------|-------------|
| /dev/sda1 | 512M  | ext4 | /boot       |
| /dev/sda3 | 23.5G | ext4 | /           |
| /dev/sda5 | 8G    | ext4 | /usr        |
| /dev/sdb1 | 4G    | ext4 | /var        |
| /dev/sdb2 | 4G    | swap |             |
| /dev/sdb5 | 192G  | ext4 | /home       |

(The remaining 120GB on /dev/sdb is left unpartitioned.)

This scheme takes advantage of the SSD’s speed in areas that matter, while making sure to maximize its lifespan by minimizing write cycles. Below we’ll take a look at each partition in more detail.

### The `/boot` Directory ###

A long-standing convention states that `/boot` should be its own small partition at the front of the disk. This goes back to old BIOS limitations which no longer apply. Nevertheless, I prefer to maintain this convention because it’s familiar and logical. Some Linux admins keep `/boot` on its own partition so that it can remain unmounted by the running system for security reasons. This is possible because the files in `/boot`, while accessed by the bootloader, are generally ignored by the kernel and other processes. If you do this, it is good to leave a `/boot` entry in `/etc/fstab` so that the partition can be easily mounted for such tasks as bootloader configuration and kernel image updates, but append the `noauto` option to prevent the kernel from automounting the partition on boot.
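A hypothetical */etc/fstab* entry for such a setup (the device name and filesystem are illustrative; adjust to match your disk):

```
# /boot stays unmounted by default; mount it manually before kernel
# updates or bootloader changes, e.g.:  mount /boot
/dev/sda1   /boot   ext4   noauto,defaults   0   0
```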

**Note:** Before you decide to leave your `/boot` directory unmounted, consider the following. Many package managers include kernel updates as part of their normal update process. If your package manager does this, it will likely write the updated kernel image to the empty placeholder `/boot` directory on the root partition, but GRUB will still try to read the kernel image from the `/boot` partition (remember that GRUB uses its own syntax to refer to filesystems), and thus will fail to find the appropriate kernel image. Don’t do this unless you know what you are getting into, or unless your distribution is such that you build and install your own kernel images rather than relying on a package manager to do it.

It is worth pointing out that only very recent versions of GRUB support the ext4 filesystem, so you may want to use ext2/3 here instead.

### The `/` (Root) Directory ###

It seems obvious that the root directory should be on the SSD–it is the operating system. If this directory were moved off the SSD, the upgrade would be rather pointless. Locating the root partition on the SSD will result in fast booting and loading of programs. Moving on…

### The `/usr` Directory ###

The `/usr` directory stores most of the binaries and global program files on a Linux system. I prefer to mount `/usr` as its own partition. Other than `/home` and `/var` (as well as `/srv` on some machines), it is the only directory of substantial size in the Linux filesystem. Partitioning it off can help make the process of imaging disks for backup more efficient (i.e. an admin may not want to back up `/usr` as often as the rest of the filesystem–it only changes when packages are upgraded).

### The `/var` Directory ###

The `/var` directory is probably the most volatile directory on any given Linux machine. That’s because it’s where log files are stored. Moving this directory to a standard disk drive can save a significant number of writes to the SSD. While it is true that the writing of log data will take longer this way, the Linux kernel uses advanced I/O caching (writing files to RAM until the CPU has free time to write them to disk), so there is no noticeable decrease in performance.

### Swap Space ###

It may be tempting to move swap space to the SSD, since it would make swap operations much faster. But SSDs are about gaining high performance, not making up for bad performance. If a machine has so little memory that the OS is forced to use swap space frequently, the admin is better served to spend his upgrade dollars on more RAM before springing for an SSD. For this reason, and because those 4GB of SSD space are precious, it is wise to put swap space on the HDD.

### The `/home` Directory ###

The `/home` directory is the conventional location for the storage of documents, photos, music, videos, and other personal files. It is also the location of user-based configuration settings. When a user launches an application, that user’s personal settings are loaded from the `/home` directory. This leads some to suggest that putting `/home` on the SSD will speed up load times. While this may be the case, most of the settings are stored in simple, tiny, text-based config files, and any difference in program load times would be completely indistinguishable by the user. Rather than complicate matters by partitioning subdirectories of `/home`, it is more sensible to create a separate `/home` partition on the HDD and be done with it.

### Other Considerations ###

The scheme described above is limited in scope. Below are some other things to consider when designing a partitioning scheme.

#### `/srv` ####

Servers often store content in the `/srv` directory instead of `/var`. While `/srv` is a rather new convention to the Linux FS hierarchy, it is becoming quite popular. If your distribution uses `/srv`, it is probably a good candidate for partitioning off to the HDD.

#### `/lib` ####

Shared libraries, stored in `/lib`, are often accessed by programs on load, and thus should remain on the SSD. But because the `/lib` directory can become modestly large (although usually much smaller than `/usr` or `/var`), some admins prefer to create a separate `/lib` partition on the SSD for much the same reason as `/usr`–that is, to make backups more efficient. There is nothing wrong with this decision. It is simply a matter of preference.

#### Unpartitioned Space ####

Many admins prefer to reserve unpartitioned space on a disk in order to accommodate unforeseen circumstances. As long as the existing partitions have plenty of room for growth, this can be a wise decision. A good rule of thumb is to make each partition twice as big as you think you need, and to leave the remaining space unpartitioned.

### Conclusion ###

While no single partitioning scheme will suit every machine, the scheme described above is a good starting point. What would you do differently? Let me know in the comments.


Linux Server: to Reboot or Not to Reboot?

Linux servers have a reputation as workhorses. Since very early in the
development of Linux, its users have boasted in the stability of the
OS. In fact, it is not uncommon to hear of Linux-based servers running
for years without the need for a reboot. This raises the question: how
often should you reboot your Linux server?

Months and months of server uptime can be a good thing (and for some,
even cause for boasting), but is it wise to go such a long time without
rebooting? I would strongly argue that it is not. In fact, a wise
server recovery/contingency plan will include reboots as part of a
regular maintenance schedule. Below I outline some reasons why you
should reboot your server on a regular basis.
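One way to implement such a schedule is a root crontab entry (the timing below is only an example; pick a window that suits your users, and give them advance notice):

```
# Reboot at 4:30 AM on the first day of every month
30 4 1 * * /sbin/shutdown -r now
```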

### Kernel Upgrades ###

The Linux kernel is under constant development. New drivers are always
being written, old ones are rewritten, bugs are patched, and security
holes are plugged. These upgrades generally result in a system that is
faster, safer, and more reliable. Package managers upgrade the kernel
regularly in most distributions. But even if your distribution doesn’t
automatically upgrade your kernel, for the aforementioned reasons you
should make it a point to do so periodically.

In order for the upgraded kernel to run, the system needs to be
rebooted. Some distros notify the user when a reboot is required, but
it is ultimately the responsibility of the sysadmin to know what
software is being upgraded and what actions those upgrades require.
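On Debian-based systems, for example, the update process drops a flag file at */var/run/reboot-required* when a pending update needs a reboot. A small helper (the function name is mine) can check for it:

```shell
# Returns success if the given flag file exists
# (defaults to the path Debian/Ubuntu use).
reboot_needed() {
    [ -f "${1:-/var/run/reboot-required}" ]
}

if reboot_needed; then
    echo "A reboot is required."
fi
```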

### Real-World Reliability Testing ###

Any sysadmin who has been at it for a while has experienced this scenario:

Something happens that causes the server to shut down–perhaps a
hardware addition/replacement, power loss, or the need to move the
machine. Once the interruption is over, the admin boots the server only
to find that things aren’t working as they should. Some critical
service failed to start properly. What happened? As software packages
are updated and new versions are released, many variables come into
play that affect normal operation of that software. A configuration
setting might become deprecated. A hack that was used to fix a bug in
an old version may render the new version useless. The list goes on.

As the time between reboots increases, so does the likelihood that some
service will not initialize properly. These errors take time to
diagnose and correct, which translates to unacceptable server downtime.
This problem is compounded when two or three issues occur on a single
reboot. Rebooting on a regular schedule allows the sysadmin to catch
these types of errors quickly. It also provides time to correct the
errors without workflow grinding to a halt, as users are informed ahead
of time that the server will be down for maintenance.

While it is true that services can be restarted individually, nothing
can accurately simulate a full reboot. And the longer you wait between
reboots, the greater the chance of something going wrong. Remember:
*You will never experience a routine reboot until you implement a
reboot routine.*