MP3 Encoding Support in Ubuntu 11.04

Today I went to copy my music library from Banshee to an old iPod mini. I found out the hard way that the Ubuntu 11.04 upgrade tool uninstalls the necessary programs for MP3 encoding support, causing Banshee to fill my iPod with WAV files.

This is most likely a [philosophical decision](http://www.ubuntu.com/project/about-ubuntu/our-philosophy "Our philosophy | Ubuntu") stemming from the ongoing debate between [open-source pragmatists and free software idealists](http://en.wikipedia.org/wiki/Free_software_movement#Criticism_and_controversy "Free software movement – Wikipedia"). We won’t get into that here, but assuming you *do* want MP3 support, it’s a simple fix:

# apt-get install ubuntu-restricted-extras

Once this package is installed, restart Banshee and it should automatically encode to MP3 without changing any settings.
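
If you want to confirm what the metapackage pulls in, or check that the GStreamer plugin packages actually landed, something like the following should do it (package names here assume the GStreamer 0.10 stack shipped with 11.04):

$ apt-cache depends ubuntu-restricted-extras
$ dpkg -l | grep gstreamer0.10-plugins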

Don’t get me wrong–I admire the idealists, but I also want to fit more than 20 songs on my iPod.

Apache and SSL – The Easy Way

It’s no secret–SSL is confusing. Creating and signing certificates is a convoluted process, especially from the command line. Fortunately, Debian-based systems have an easy way for Apache users to create, sign, and install their own SSL certs. This tutorial assumes that Apache is already installed with the default configuration.

### Configure SSL ###

Step one is to configure Apache to enable `mod_ssl`:

# a2enmod ssl
Enabling module ssl.
See /usr/share/doc/apache2.2-common/README.Debian.gz on how to configure SSL and create self-signed certificates.
Run '/etc/init.d/apache2 restart' to activate new configuration!

The documentation referred to by that script’s output explains that, on Debian systems, an SSL certificate is installed automatically when the `ssl-cert` package is installed. It also outlines the process of creating a new certificate (useful when using name-based virtual hosts). From the manual:

> If you install the ssl-cert package, a self-signed certificate will be
> automatically created using the hostname currently configured on your
> computer. You can recreate that certificate (e.g. after you have
> changed /etc/hosts or DNS to give the correct hostname) as user root
> with:
>
> make-ssl-cert generate-default-snakeoil --force-overwrite
>
> To create more certificates with different host names, you can use
>
> make-ssl-cert /usr/share/ssl-cert/ssleay.cnf /path/to/cert-file.crt
>
> This will ask you for the hostname and place both SSL key and
> certificate in the file /path/to/cert-file.crt . Use this file with the
> SSLCertificateFile directive in the Apache config (you don’t need the
> SSLCertificateKeyFile in this case as it also contains the key). The
> file /path/to/cert-file.crt should only be readable by root. A good
> directory to use for the additional certificates/keys is
> /etc/ssl/private .

So, let’s create a new virtual host–one which can only be accessed via SSL.

Use the syntax from the manual to create a new certificate:

# make-ssl-cert /usr/share/ssl-cert/ssleay.cnf /etc/ssl/private/securehost.crt

When prompted, enter a hostname for the virtual host. For this example, the hostname is `securehost`.
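
To double-check what was generated, you can print the certificate's subject and validity dates. Because make-ssl-cert writes the key and certificate into the same file, the sed filter below pulls out just the certificate block before handing it to openssl:

# sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p' /etc/ssl/private/securehost.crt | openssl x509 -noout -subject -dates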

### Create the Virtual Host ###

Once SSL is configured, it’s time to create the virtual host which will be accessed via SSL. Create the file */etc/apache2/sites-available/securehost* containing the following:



<VirtualHost *:443>
    ServerName securehost

    ...

    SSLEngine on
    SSLCertificateFile /etc/ssl/private/securehost.crt
</VirtualHost>

The above example assumes knowledge on how to configure a virtual host. Much of the necessary configuration has been omitted and replaced with an ellipsis.
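
For reference, a filled-in version might look something like the sketch below. The DocumentRoot and log paths are placeholders for this example, not part of the configuration above:

<VirtualHost *:443>
    ServerName securehost
    DocumentRoot /var/www/securehost

    SSLEngine on
    SSLCertificateFile /etc/ssl/private/securehost.crt

    ErrorLog /var/log/apache2/securehost-error.log
    CustomLog /var/log/apache2/securehost-access.log combined
</VirtualHost>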

It may be desirable to redirect port 80 traffic addressed to this hostname so that visitors do not have to explicitly designate the `https://` protocol. To do this, rename the virtual host file you just created to *securehost-ssl*, then create a new file called *securehost* containing the following:


<VirtualHost *:80>
    ServerName securehost
    Redirect / https://securehost/
</VirtualHost>

### Configure Apache ###

All that’s left is to configure Apache to recognize the new virtual hosts. The first step is to enable them:

# a2ensite securehost securehost-ssl
Enabling site securehost.
Enabling site securehost-ssl.
Run '/etc/init.d/apache2 reload' to activate new configuration!

Before reloading Apache, it may be necessary to enable name-based virtual hosting for ports 80 and 443. In Debian, this is done in the file */etc/apache2/ports.conf*. Look for the lines that say `Listen 80` and `Listen 443`. Add the following options:

NameVirtualHost *:80
Listen 80

<IfModule mod_ssl.c>
    NameVirtualHost *:443
    Listen 443
</IfModule>

Finally, reload Apache to reflect the new configuration:

# service apache2 reload

### Try It ###

Now try navigating to https://securehost. This will most likely result in a certificate warning, which can be ignored/bypassed. Assuming that the document root contains an index page, it should be displayed here.

Once successful, try removing the **s** in the protocol to test the redirection from port 80. If redirection is working properly, the location bar in the browser should update to include the **s** again and the index page should again be displayed.
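
curl is also handy for testing both halves from the command line (`-I` shows just the response headers, including the Location header of the redirect, and `-k` tells curl to accept the self-signed certificate):

$ curl -I http://securehost/
$ curl -kI https://securehost/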

### Other Tips ###

It is possible for the same hostname to serve completely different sites based on the port specified. For example, a corporate site may display generic marketing info for visitors to port 80, but employees know that they need to use `https://` to access the login area. This keeps visitors from being presented with irrelevant employee prompts, and ensures that all employees are logging in securely.
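
As a rough sketch, that kind of split might look like the following pair of virtual hosts (the hostname and paths here are purely illustrative):

<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /var/www/public
</VirtualHost>

<VirtualHost *:443>
    ServerName www.example.com
    DocumentRoot /var/www/employees

    SSLEngine on
    SSLCertificateFile /etc/ssl/private/www.example.com.crt
</VirtualHost>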

Command Line DVD Ripping

I recently took on the task of digitizing our old VHS tapes. While
digging through the tapes I came across an old, region-2-encoded DVD
copy of *Rocketman*, starring Harland Williams. I bought it years
ago before it was available on region 1 DVD. In those days I had a DVD
drive which I kept set to region 2, but no longer. It’s time to get rid
of this thing, but not before we make ourselves an MP4 copy.

DVD region 2 is the designation for most of Europe, so in addition to
the region issue, most region 2 disks are formatted for PAL systems.
This particular movie was sped up approximately 1fps to match PAL
framerates, and stretched vertically to fill in the extra vertical space
on PAL screens. Since we’re going to be dealing with raw video streams
anyway, we’ll take the time to slow down and rescale the final product.

Linux wants for nothing when it comes to video processing applications.
The trick is to sort through them all and find the right tools for the
job. Everything we need is available in the Ubuntu repositories.

First, we set the region flag on the DVD drive to region 2. This can be
done with a simple program called **regionset**. Note that most DVD
drives only permit a limited number of changes to this setting. We’ll
run **regionset** and follow the prompts:

$ regionset

Next, we copy the VOB file from the DVD to the disk, so that we can
manipulate the audio and video streams. **vobcopy** will do the job:

$ vobcopy -l

This creates a file called ROCKETMAN1.vob in the current directory. At
this point we can eject the DVD and re-set the region flag. Everything
else will be based on this VOB file.
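
If you want to peek at what's inside the VOB first (frame rate, resolution, audio streams), **ffmpeg** will print the stream details and then complain that no output file was specified, which is fine for our purposes:

$ ffmpeg -i ROCKETMAN1.vob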

Our next step is to use **mplayer** to extract the audio from the VOB
file so that we can slow it down to match the new frame rate:

mplayer ROCKETMAN1.vob -ao pcm:file=rocketman.wav -vo null

Once that’s finished, we’ll start the rather long process of ripping and
reformatting the video:

ffmpeg -i ROCKETMAN1.vob \
-f rawvideo -pix_fmt yuv420p - 2>/dev/null \
| ffmpeg -f rawvideo -r 23.976 -s 720x576 -pix_fmt yuv420p -i - \
-vcodec libx264 -pass 1 -vpre slow_firstpass -b 768K -s 720x480 \
-threads 0 -y rocketman.mp4 \
&& ffmpeg -i ROCKETMAN1.vob \
-f rawvideo -pix_fmt yuv420p - 2>/dev/null \
| ffmpeg -f rawvideo -r 23.976 -s 720x576 -pix_fmt yuv420p -i - \
-vcodec libx264 -pass 2 -vpre slow -b 768K -s 720x480 \
-threads 0 -y rocketman.mp4

That monster command actually invokes two instances of **ffmpeg**; one to
feed the raw video stream from the VOB into a pipe, the second to read
in the raw stream, apply the new frame rate, then resize and encode the
result.

While that runs we can process the audio using **sox**. This particular
movie is a little quiet, so we’ll normalize the volume while we’re
changing the speed. Our speed ratio is based on the source and
destination frame rates. The PAL-formatted VOB stream is 25fps, and
our final product will be 23.976fps (an NTSC standard). 23.976/25 is
.95904, so that is our ratio:

sox --norm rocketman.wav rocketman.flac speed .95904

When all of this time-consuming processing is finished, we can take our
converted video and audio streams and reunite them (a process known as
*muxing*) using **ffmpeg**:

ffmpeg -i rocketman.mp4 -i rocketman.flac \
-vcodec copy -acodec libfaac -ab 128K \
rocketman.final.mp4

Notice that we’re copying the video stream instead of encoding it.
That’s because we already encoded it during the ripping process.
However, we still have to compress the audio, so we specify an audio codec and bitrate.

This last step is surprisingly quick, and when it’s finished, so are we.
Of course, we’ll confirm that everything was successful before we delete
our working files. But then we’re ready to sit back and enjoy some
ridiculous antics!
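
One quick way to do that confirmation is to ask **ffmpeg** to describe the finished file and then give it a short test play in **mplayer**:

$ ffmpeg -i rocketman.final.mp4
$ mplayer rocketman.final.mp4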

Shred Empty Drive Space

If you are familiar with the **shred** command, you know it is an easy way to make sure sensitive data is *really* deleted. Shred overwrites a file with random data before deleting it, so that the original data cannot be recovered. Shred works by overwriting the data *in place*, or over top of the original file. But what if the file has already been deleted?

One way to destroy the data is to overwrite all unused space on the drive (or partition) with random data. The simplest way to do this is to invoke the **dd** command to create a new file full of random data:

dd if=/dev/urandom of=somefile.tmp bs=1024

The above command will run until the drive (or partition) runs out of space, writing random bits to a file called *somefile.tmp*, 1 kilobyte at a time.

Depending on the amount of free space, this could run for a long time. Also, depending on how you’ve configured your partitions and mount points, it may cause stability issues as the drive approaches full capacity. If you plan on running this command and walking away, you may want to append a command to remove the file when finished, to prevent crashes or errors due to low storage space:

dd if=/dev/urandom of=somefile.tmp bs=1024; rm somefile.tmp
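
Since **dd** prints no progress information by default, one trick is to send it the USR1 signal from another terminal; GNU dd responds by printing how much data it has copied so far:

$ kill -USR1 $(pgrep -x dd)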

If you’re short on time and willing to settle for a less secure method, you can replace */dev/urandom* with */dev/zero*, which should read in data much more quickly:

dd if=/dev/zero of=somefile.tmp bs=1024

This method can help keep your data secure, but it should be used sparingly. Writing to an entire drive is hard on the media, particularly flash-based media such as USB drives and SSDs, which have a limited number of write cycles. Use this method with care. If you have a lot of sensitive data, you may want to consider encrypting your files before writing to disk. But that’s a topic for another post.

Migrating IMAP Accounts Between Remote Hosts

I have come to rely on IMAP for email access. It is the most convenient way to ensure access to all my mail from more than one computer. I recently switched to a new hosting provider and I didn’t want to lose all of my archived email stored on the old host’s server. Fortunately, I found a Perl script designed to synchronize IMAP mailboxes between two servers. The script is called **imapsync**.

I was able to install imapsync from the Ubuntu repositories. The package includes a man page which explains the options and contains some example use cases. My needs were simple–copy everything from point A to point B.

The first step is to create mailboxes for the users in question on the new server. I did this, preserving the same passwords as well as the same usernames. It is not necessary to use the same login/password on each server, but doing so allowed me to simplify the bash command line a great deal. Below are the commands I used to migrate two mailboxes, followed by an explanation of each command:

$ touch passwd_nick passwd_karie
$ chmod 600 passwd_*
$ echo "[HISPASSWORD]" > passwd_nick
$ echo "[HERPASSWORD]" > passwd_karie
$ for USER in nick karie; do
> imapsync \
> --host1 [SOURCE_IP] --user1 $USER --passfile1 passwd_$USER \
> --host2 [DEST_IP] --user2 $USER --passfile2 passwd_$USER \
> --ssl1 --ssl2 --noauthmd5
> done

The **touch** and **chmod** commands create empty files with the appropriate permissions to prevent other local users from viewing the contents. This may not be necessary on every machine, but it is a good practice to consider privacy when working with plaintext passwords. It is possible to pass plaintext passwords to imapsync using the arguments `--password1` and `--password2`, but they would then appear to any local user who executes `ps axww` for as long as the command runs.

The **echo** lines actually write the passwords to the empty files. Obviously, I used placeholder text within the brackets.

The **for** statement iterates through the usernames *nick* and *karie*, and the bash interpreter reads these values wherever `$USER` is seen in the imapsync argument list. `[SOURCE_IP]` and `[DEST_IP]` are placeholders for the numeric IP addresses of each remote server.

The `--ssl1` and `--ssl2` flags enable SSL for their respective connections. This is almost certainly preferred for better security. If your servers support TLS you can use `--tls1` and `--tls2` instead. I also had to pass the `--noauthmd5` flag, as neither server in my situation supported that feature.
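
Before kicking off the real transfer, it may be worth a rehearsal. imapsync has a `--dry` option (at least in recent versions; check the man page) that reports what would be copied without changing anything. A single-user test might look like this:

$ imapsync \
> --host1 [SOURCE_IP] --user1 nick --passfile1 passwd_nick \
> --host2 [DEST_IP] --user2 nick --passfile2 passwd_nick \
> --ssl1 --ssl2 --noauthmd5 --dry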

The script ran for a long time. In fact, I left it running and went to work. When I got home I ran it again to transfer whatever new messages had been delivered since the sync (there were 25). Then, confident that things were ready to go, I logged in to my domain registrar and updated the nameservers to reflect the new host. Last but not least, I opened up Thunderbird to see if it worked. Thunderbird didn’t even notice the switch. In fact, it worked so flawlessly that I actually had to perform a DNS search to make sure the nameservers had been updated properly.

As the name suggests, imapsync is designed to synchronize IMAP mailboxes. Migration is only a small part of its feature set. It may be wise to consider incorporating this script into your normal backup routine; of course, this requires that a backup IMAP server be running.

While a number of utilities exist that can migrate mailboxes, the simplicity of imapsync makes it a great choice for occasional use.

SSH From the Inside

### Problem ###

I need SSH access to a particular machine (*schoolsvr*) which is behind a NAT. I only need to enable access from a single client (*homesvr*), which has a public IP address of its own. Both machines are running **sshd**. I can access *homesvr* from a shell on *schoolsvr*, but not vice versa.

If I had admin access on *schoolsvr’s* gateway, I could alter the NAT to forward some unused port (say, 12345) to *schoolsvr:22*, which would allow me to SSH to *schoolsvr* using the gateway’s public IP and port 12345. Unfortunately, I don’t have admin access to the gateway.

How do I enable SSH access to *schoolsvr*?

### Solution ###

The solution is to open an SSH tunnel from *schoolsvr*, which I can access from a shell on *homesvr*. To achieve this, I use the OpenSSH client program’s `-R` option to bind an SSH tunnel to a non-standard port on *homesvr*. Consider the following command:

nick@schoolsvr$ ssh -R 12345:localhost:22 nick@homesvr
nick@homesvr's password:
nick@homesvr$

This command connects to *homesvr* via the standard SSH port (22) and binds that connection to the specified bind port (12345). This port remains bound until the SSH session is terminated. Now all SSH traffic directed to port 12345 on *homesvr* will be forwarded to port 22 on *schoolsvr*. When I get back to *homesvr*, I can open a new SSH session with *schoolsvr* using the following command:

nick@homesvr$ ssh -p 12345 localhost
nick@localhost's password:
nick@schoolsvr$

I’m in! I can terminate this session when I am finished, and the original tunnel remains open until I kill it on *schoolsvr*.
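
To double-check that the forward is actually bound before relying on it, I can look for a listener on the forwarded port from the *homesvr* side:

nick@homesvr$ netstat -tln | grep 12345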

This command can be set up in */etc/inittab* (or an Upstart config file, depending on your system configuration) with the `respawn` action, which would ensure that the tunnel is open upon boot and will be automatically reopened upon termination. Note that such a setup requires the appropriate SSH keys to be configured on both machines, as an init process can’t enter a password.

Because each half of the connection is carried over SSH, this setup is as secure as SSH itself. Of course, anyone with physical access to *schoolsvr* would have full control over the open login to *homesvr*. To prevent this, I can modify the original command as follows:

nick@schoolsvr$ ssh -nNT -R 12345:localhost:22 nick@homesvr &

The `-n` option redirects standard input from */dev/null*. The `-N` option is specifically designed for port-forwarding applications such as this, and tells SSH not to bother preparing a command stream for this connection. The `-T` option tells the remote host not to bother allocating a pseudo-tty for this connection. These three options eliminate the possibility of using this open tunnel to execute any other processes on *homesvr*. Additionally, I appended an ampersand (`&`) to send the process to the background. Now I can close the shell in which I ran the command without killing the process.
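
Putting it all together, a single respawning entry can keep the tunnel alive. Here is a sketch of an */etc/inittab* line, assuming key-based authentication is already set up for user *nick* and that the system uses a SysV-style inittab:

# keep the reverse tunnel to homesvr alive; init restarts it if it dies
st:2345:respawn:/usr/bin/ssh -nNT -R 12345:localhost:22 nick@homesvr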

### Conclusion ###

While not as elegant as a true NAT-based port forwarding solution, reverse SSH tunnels are a fast, secure way to connect two remote machines for general use. When used with discretion, they can be a real time-saver.

What do you think of this solution? Did I leave anything out? Let me know in the comments.

A Good SSD/HDD Partitioning Scheme

An SSD is a great investment. Data loads super fast and there are no moving parts to fail. But SSD storage space is expensive, and most users have a lot to store. A common solution is to install the OS to the SSD, and move personal data (the `/home` directory) to a secondary HDD. While this is the easiest way to take advantage of SSD speed, the results are less than ideal. SSDs have a limit on write cycles so it is wise to minimize disk write operations. My preferred solution offloads some of the more volatile areas of the Linux filesystem to the HDD as well.

The table below shows how I partitioned my 32GB SSD (`/dev/sda`) and my 320GB HDD (`/dev/sdb`):

Partition     Size    Type    Mount Point
/dev/sda1     512M    ext4    /boot
/dev/sda3     23.5G   ext4    /
/dev/sda5     8G      ext4    /usr

/dev/sdb1     4G      ext4    /var
/dev/sdb2     4G      swap    (swap)
/dev/sdb5     192G    ext4    /home
(120GB unpartitioned space on /dev/sdb)

This scheme takes advantage of the SSD’s speed in areas that matter, while making sure to maximize its lifespan by minimizing write cycles. Below we’ll take a look at each partition in more detail.
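
For reference, an */etc/fstab* matching this table might look roughly like the sketch below. The device names come from the table; the noatime option on the SSD partitions is an extra tweak worth considering to trim metadata writes:

# <file system>  <mount point>  <type>  <options>                  <dump>  <pass>
/dev/sda3        /              ext4    noatime,errors=remount-ro  0       1
/dev/sda1        /boot          ext4    noatime                    0       2
/dev/sda5        /usr           ext4    noatime                    0       2
/dev/sdb1        /var           ext4    defaults                   0       2
/dev/sdb5        /home          ext4    defaults                   0       2
/dev/sdb2        none           swap    sw                         0       0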

### The `/boot` Directory ###

A long-standing convention states that `/boot` should be its own small partition at the front of the disk. This goes back to old BIOS limitations which no longer apply. Nevertheless, I prefer to maintain this convention because it’s familiar and logical. Some Linux admins keep `/boot` on its own partition so that it can remain unmounted by the running system for security reasons. This is possible because the files in `/boot`, while accessed by the bootloader, are generally ignored by the kernel and other processes. If you do this, it is good to leave a `/boot` entry in `/etc/fstab` so that the partition can be easily mounted for such tasks as bootloader configuration and kernel image updates, but append the `noauto` option to prevent the kernel from automounting the partition on boot.
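
Such an entry might look like the following line, with the device and filesystem taken from the table above (noauto placed last so it takes precedence over the auto implied by defaults):

/dev/sda1    /boot    ext4    defaults,noauto    0    0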

**Note:** Before you decide to leave your `/boot` directory unmounted, consider the following. Many package managers include kernel updates as part of their normal update process. If your package manager does this, it will likely write the updated kernel image to the empty placeholder `/boot` directory on the root partition, but GRUB will still try to read the kernel image from the `/boot` partition (remember that GRUB uses its own syntax to refer to filesystems), and thus will fail to find the appropriate kernel image. Don’t do this unless you know what you are getting into, or unless you build and install your own kernel images yourself rather than relying on a package manager to do it.

It is worth pointing out that only very recent versions of GRUB support the ext4 filesystem, so you may want to use ext2/3 here instead.

### The `/` (Root) Directory ###

It seems obvious that the root directory should be on the SSD–it is the operating system. If this directory were moved off the SSD, the upgrade would be rather pointless. Locating the root partition on the SSD will result in fast booting and loading of programs. Moving on…

### The `/usr` Directory ###

The `/usr` directory stores most of the binaries and global program files on a Linux system. I prefer to mount `/usr` as its own partition. Other than `/home` and `/var` (as well as `/srv` on some machines), it is the only directory of substantial size in the Linux filesystem. Partitioning it off can help make the process of imaging disks for backup more efficient (i.e. an admin may not want to back up `/usr` as often as the rest of the filesystem–it only changes when packages are upgraded).

### The `/var` Directory ###

The `/var` directory is probably the most volatile directory on any given Linux machine. That’s because it’s where log files, spools, and caches are stored. Moving this directory to a standard disk drive can save a significant number of writes to the SSD. While it is true that the writing of log data will take longer this way, the Linux kernel uses advanced I/O caching (buffering writes in RAM and flushing them to disk later), so there is no noticeable decrease in performance.

### Swap Space ###

It may be tempting to move swap space to the SSD, since it would make swap operations much faster. But SSDs are about gaining high performance, not making up for bad performance. If a machine has so little memory that the OS is forced to use swap space frequently, the admin is better served to spend his upgrade dollars on more RAM before springing for an SSD. For this reason, and because those 4GB of SSD space are precious, it is wise to put swap space on the HDD.

### The `/home` Directory ###

The `/home` directory is the conventional location for the storage of documents, photos, music, videos, and other personal files. It is also the location of user-based configuration settings. When a user launches an application, that user’s personal settings are loaded from the `/home` directory. This leads some to suggest that putting `/home` on the SSD will speed up load times. While this may be the case, most of the settings are stored in simple, tiny, text-based config files, and any difference in program load times would be imperceptible to the user. Rather than complicate matters by partitioning subdirectories of `/home`, it is more sensible to create a separate `/home` partition on the HDD and be done with it.

### Other Considerations ###

The scheme described above is limited in scope. Below are some other things to consider when designing a partitioning scheme.

#### `/srv` ####

Servers often store content in the `/srv` directory instead of `/var`. While `/srv` is a rather new convention to the Linux FS hierarchy, it is becoming quite popular. If your distribution uses `/srv`, it is probably a good candidate for partitioning off to the HDD.

#### `/lib` ####

Shared libraries, stored in `/lib`, are often accessed by programs on load, and thus should remain on the SSD. But because the `/lib` directory can become modestly large (although usually much smaller than `/usr` or `/var`), some admins prefer to create a separate `/lib` partition on the SSD for much the same reason as `/usr`–that is, to make backups more efficient. There is nothing wrong with this decision. It is simply a matter of preference.

#### Unpartitioned Space ####
Many admins prefer to reserve unpartitioned space on a disk in order to accommodate unforeseen circumstances. As long as the existing partitions have plenty of room for growth, this can be a wise decision. A good rule of thumb is to make each partition twice as big as you think you need, and to leave the remaining space unpartitioned.

### Conclusion ###

While no single partitioning scheme will suit every machine, the scheme described above is a good starting point. What would you do differently? Let me know in the comments.

Printing in Booklet Format from the Linux Command Line

Here is a Linux command to take a PDF of half-letter-sized pages
(5.5″x8.5″) and arrange them, two pages per side, for booklet printing:

$ pdftops -level3 source.pdf - | psbook | psnup -2 -W5.5in -H8.5in | ps2pdf - booklet.pdf

All of these utilities are standard on a Debian-based system. Several filters are tied together using [pipes](http://en.wikipedia.org/wiki/Pipeline_%28Unix%29 "Learn more about Unix Pipelines") to produce a properly-formatted PDF.

### A Closer Look ###

Here’s a pipe-by-pipe breakdown of the command:

$ pdftops -level3 source.pdf -

convert the source PDF to a level 3 postscript file (this is necessary
because the remaining steps require a postscript source file).

| psbook

rearrange the pages from the resulting postscript source file into the
appropriate order for booklet printing.

| psnup -2 -W5.5in -H8.5in

read in the reordered source file and arrange the reordered pages so
that two logical pages are printed on one physical sheet. If your
source pages are a different size, change the values of the `-W` and
`-H` options. You can use the `-p` option (or lowercase `-w` and `-h`)
to specify the output page size, but on Debian-based systems this value
is automatically read from `/etc/papersize` (see the example after this
breakdown).

| ps2pdf - booklet.pdf

convert the resulting postscript file back into a PDF using the
filename specified. This step is not necessary if you know that the
machine you will print from can handle postscript files. If you leave
this step out, replace it with `> booklet.ps` to write the output to
a file named `booklet.ps`.
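
For example, to force letter-sized output explicitly rather than relying on /etc/papersize, the psnup stage can take the output paper name directly (the -p flag, per the psutils manual):

$ pdftops -level3 source.pdf - | psbook | psnup -2 -W5.5in -H8.5in -pletter | ps2pdf - booklet.pdf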

This method comes in handy because you don’t have to rely on printer
driver options to handle the booklet reordering. In my experience,
those driver options are terribly fussy and hard to work with.
Further, many printer drivers don’t offer this ability.

I hope you find this info helpful.

Based on info from
[Scribus Wiki](http://wiki.scribus.net/index.php/How_to_make_a_booklet#Method_A-1:_using_psutils_only).

**Update:** I experienced a bug in the generated postscript that was
causing a single page to be rendered upside-down (rotated 180 degrees)
when converted to PDF. Unable to trace the source of the bug, I did find
that filtering the postscript through `ps2ps` immediately after the
initial conversion from PDF fixed the problem. `ps2ps` is a postscript
“distiller” which optimizes postscript into a simpler form. This does
result in the loss of anti-aliasing, which uglies up the result on the
screen, but it does not affect printed copies.

If you experience rendering bugs, try adding `ps2ps`, as in the
following command:

$ pdftops -level3 source.pdf - | ps2ps - - | psbook | psnup -2 -W5.5in -H8.5in | ps2pdf - booklet.pdf

Strong Password Generator in Python

I wanted a CLI-based strong password generator for Linux. `mkpasswd` is
nice, but I wanted something more flexible. I didn’t like having to
provide my own character sequence. I wanted something with a built-in
character sequence generator with an easy way to control the likelihood
of numbers and symbols. I wanted something that could read in a
character sequence from a file or standard input. And I wanted
something that not only had sensible default values, but was easily
configurable. It was one of those times where searching for such a tool
was a bigger hassle than writing my own. So I opened up vim and got to
work. The result is a Python-based password generator called
`mkstrongpw`.

`mkstrongpw` can generate a character sequence based on four character
categories: lowercase letters, uppercase letters, numerical digits, and
non-alphanumeric symbols. By default, the likelihood (or odds) of each
character type is as follows: lowercase: 5/10; uppercase: 2/10; digits:
2/10; symbols: 1/10. These values can be easily changed via command
line options. A single, non-option argument can be passed, specifying a
filename to be read in as the character sequence. If this argument is a
single hyphen (-), standard input is used. Newlines and whitespace are
stripped from the beginning and end of each line of the input file.

By default, `mkstrongpw` uses its built-in sequence generator to
produce 10 passwords, 8 characters in length. Command line options can
be used to modify these values. Take a look at the following examples:

Default operation (no arguments):

$ mkstrongpw
x5mHpZum
zc{onlKH
O5WlkYaV
i&hgb?pX
cjw0Gjni
aVtxjuve
aDeVHym8
1SDWKhEr
U3Kja!

Reading a character sequence from standard input (here, uuencoded random data):

$ … 2>/dev/null | uuencode - | mkstrongpw -
+DTS@]B\
C>R60'&9
6X])H$(,
GP7_^9\(
FM;^)84F
E)RGR4+[
`<1^_NT_
8":\7%R-
DJSLJ*I`
=MQ^]2D9

That's a broad overview of `mkstrongpw`. Use the `--help` option for more details.

### Get it ###

* [Latest version (github)](https://github.com/njbair/mkstrongpw)
* [Download mkstrongpw 1.0 (.tar.gz)](http://nickbair.net/files/mkstrongpw.tar.gz) (md5sum: 34725c0509e0d0d9c5ef7441c8b5088c)
* [View source](http://nickbair.net/files/mkstrongpw)

Linux Server: to Reboot or Not to Reboot?

Linux servers have a reputation as workhorses. Since very early in the
development of Linux, its users have boasted about the stability of the
OS. In fact, it is not uncommon to hear of Linux-based servers running
for years without the need for a reboot. This raises the question: how
often should you reboot your Linux server?

Months and months of server uptime can be a good thing (and for some,
even cause for boasting), but is it wise to go such a long time without
rebooting? I would strongly argue that it is not. In fact, a wise
server recovery/contingency plan will include reboots as part of a
regular maintenance schedule. Below I outline some reasons why you
should reboot your server on a regular basis.

### Kernel Upgrades ###

The Linux kernel is under constant development. New drivers are always
being written, old ones are rewritten, bugs are patched, and security
holes are plugged. These upgrades generally result in a system that is
faster, safer, and more reliable. Package managers upgrade the kernel
regularly in most distributions. But even if your distribution doesn’t
automatically upgrade your kernel, for the aforementioned reasons you
should make it a point to do so periodically.

In order for the upgraded kernel to run, the system needs to be
rebooted. Some distros notify the user when a reboot is required, but
it is ultimately the responsibility of the sysadmin to know what
software is being upgraded and what actions those upgrades require.
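
On Ubuntu systems a couple of quick checks make this easy: the update tools drop a flag file when an installed package wants a restart, and comparing the running kernel against the images in /boot shows whether a newer kernel is waiting to be used:

$ cat /var/run/reboot-required 2>/dev/null    # present only when a reboot is pending
$ uname -r                                    # kernel currently running
$ ls /boot/vmlinuz-*                          # kernel images installed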

### Real-World Reliability Testing ###

Any sysadmin who has been at it for a while has experienced this
scenario:

Something happens that causes the server to shut down–perhaps a
hardware addition/replacement, power loss, or the need to move the
machine. Once the interruption is over, the admin boots the server only
to find that things aren’t working as they should. Some critical
service failed to start properly. What happened? As software packages
are updated and new versions are released, many variables come into
play that affect normal operation of that software. A configuration
setting might become deprecated. A hack that was used to fix a bug in
an old version may render the new version useless. The list goes on.

As the time between reboots increases, so does the likelihood that some
service will not initialize properly. These errors take time to
diagnose and correct, which translates to unacceptable server downtime.
This problem is compounded when two or three issues occur on a single
reboot. Rebooting on a regular schedule allows the sysadmin to catch
these types of errors quickly. It also provides time to correct the
errors without workflow grinding to a halt, as users are informed ahead
of time that the server will be down for maintenance.

While it is true that services can be restarted individually, nothing
can accurately simulate a full reboot. And the longer you wait between
reboots, the greater the chance of something going wrong. Remember:
*You will never experience a routine reboot until you implement a
reboot routine.*