
Display Colored Output in Shell Scripts

Most modern terminals* (xterm, Linux desktop environment terminals, Linux console, etc.) support ANSI escape sequences for providing colorized output. While I'm not a fan of flash for flash's sake, a little splash of color here and there in the right places can greatly enhance script output.
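An ANSI color sequence has the form ESC[<attributes>m, where ESC is octal 033 and attributes are semicolon-separated numbers. A minimal sketch:

```shell
# ESC (octal 033) + '[' starts the sequence; attributes are separated by
# semicolons and terminated by 'm'. 01 = bold; 30-37 = foreground colors
# (31 = red, 32 = green, 33 = yellow); 00 resets all attributes.
printf '\033[01;32mThis text is bold green\033[00m\n'
```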

In Bash, I include the following functions in any script where I want colored output:

# Display colorized information output
function cinfo() {
	COLOR='\033[01;33m'	# bold yellow
	RESET='\033[00;00m'	# normal white
	MESSAGE=${@:-"${RESET}Error: No message passed"}
	echo -e "${COLOR}${MESSAGE}${RESET}"
}

# Display colorized warning output
function cwarn() {
	COLOR='\033[01;31m'	# bold red
	RESET='\033[00;00m'	# normal white
	MESSAGE=${@:-"${RESET}Error: No message passed"}
	echo -e "${COLOR}${MESSAGE}${RESET}"
}

This allows me to easily output yellow (cinfo) or red (cwarn) text with a single line in a script. E.g.:

cwarn "Error: operation failed"

If this message was output normally with echo and it was surrounded by a lot of other text, it might be overlooked by the user. By making it red, however, it's significantly more likely to stand out from any surrounding, "normal" output.
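Incidentally, the ${@:-...} expansion in the functions above supplies a default message when no arguments are passed; the same pattern works in any function:

```shell
# ${@:-fallback} expands to all positional parameters,
# or to the fallback string if none were given
demo() {
    MESSAGE=${@:-"Error: No message passed"}
    echo "$MESSAGE"
}
demo "hello world"   # -> hello world
demo                 # -> Error: No message passed
```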

My most common use for these functions is simple status output messages. E.g., if I have a script or function that's going to do five different things and display output for each of those tasks, I'd like an easy way to visually distinguish each of the steps, as well as easily determine which step the script is on. So, I'll do something like this (from one of my system maintenance scripts):

# Rebuild packages with broken dependencies
cinfo "\nChecking for broken reverse dependencies\n"
revdep-rebuild -i -- -av
# Rebuild packages with new use flags
cinfo "\nChecking for updated ebuilds with new USE flags\n"
emerge -DNav world

For more details, the Advanced Bash Scripting guide provides a detailed discussion on using ANSI escape sequences in scripts, both for color and other purposes. You can also find some additional info in the Bash Prompt HOWTO, as well as useful color charts on the Wikipedia page.
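As a taste of those other purposes, escape sequences can also move the cursor and erase text; this sketch of a self-overwriting progress line uses carriage return plus the ESC[K "erase to end of line" sequence:

```shell
# \r returns the cursor to column 0; ESC[K erases to the end of the line.
# On a terminal, the first message is overwritten in place by the second.
printf 'Working: step 1 of 2'
printf '\r\033[K'
printf 'Working: step 2 of 2\n'
```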

*Note: Traditional (read: old) Unixes generally don't support useful modern conveniences like this. If you regularly work with AIX or Solaris and the like, you may want to skip this tip.

Create Floppy Disk Images from within Linux

It's possible to create floppy disk images (IMG files) from within Linux using native Linux utilities. Although you most likely won't have a very frequent need for this these days, one place where it can come in handy is when dealing with virtual machines. Emulators such as VirtualBox and VMware Player can mount virtual floppy images and present them to guest machines as physical disks, just as they can mount CD-ROM ISO images and present them as physical CDs.

Now again, there probably isn't a very widespread need to do this, but in my case I needed to be able to create floppy disk images for my Windows installation CD. I use a heavily customized installation CD with an answer file to automate Windows installation. Unfortunately, Windows XP is only capable of reading answer files from the CD itself (which doesn't work for me because I need to be able to change the file) or from a floppy disk. Newer versions of Windows, I believe, can read from USB drives, but as I only (and infrequently) run Windows inside a virtual machine, I don't have any great need to upgrade. Being able to easily generate floppy disk images containing updated answer files, etc. has been a huge help compared to keeping up with physical floppy disks, especially since my current desktop no longer supports a floppy drive. Now, I just point VirtualBox to the appropriate IMG files, and when I boot Windows (or the Windows installer) it'll see it as a normal floppy drive. Very handy.

In order to create floppy disk images, you'll need a copy of dosfstools installed. It should be available in most package repositories. Once installed, the following command does all the magic:

mkfs.vfat -C "floppy.img" 1440
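The trailing 1440 is the image size in 1024-byte blocks (the capacity of a standard 3.5" floppy). You can sanity-check the math with dd, which creates a blank file of the same size mkfs.vfat -C allocates:

```shell
# 1440 blocks x 1024 bytes = 1474560 bytes
dd if=/dev/zero of=/tmp/blank.img bs=1024 count=1440 2>/dev/null
stat -c %s /tmp/blank.img   # -> 1474560
rm -f /tmp/blank.img
```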

You now have an empty, but valid, floppy disk image. In order to copy files to the image, you need to mount the image using the loop device:

sudo mount -o loop,uid=$UID -t vfat floppy.img /mnt/floppy

Note that the mount command must either be run as root or using sudo; the uid option makes the mounted filesystem owned by the current user rather than root, so that you have permission to copy files into it.

After you're finished copying files, unmount the image and you're done. You can now attach it to your emulator of choice as a floppy disk image. W00t.

To make things even easier, the following script automates the entire process; just pass it the directory containing all of the files you want copied to the floppy disk and it'll do the rest.

#!/bin/bash
# Create a floppy disk image from a directory of files
# Usage: <script> <directory>

# Setup environment
FORMAT=$(which mkfs.vfat 2>/dev/null)
MOUNT=$(which mount 2>/dev/null)
shopt -s dotglob

# Verify binaries exist
[ ! -e "$FORMAT" ] && MISSING+='mkfs.vfat, '
[ ! -e "$MOUNT" ] && MISSING+='mount, '
if [ -n "$MISSING" ]; then
   echo "Error: cannot find the following binaries: ${MISSING%%, }"
   exit 1
fi

# Verify arguments
if [ ! -d "$1" ]; then
   echo "Error: You must specify a directory containing the floppy disk files"
   exit 1
fi
DISK=$(basename "${1}")
IMG="/tmp/${DISK}.img"
TEMP="/tmp/${DISK}.$$"

# Load loopback module if necessary
if [ ! -e /dev/loop0 ]; then
   sudo modprobe loop
   sleep 1
fi

# Create disk image, mount it, and copy in the files
${FORMAT} -C "${IMG}" 1440
mkdir "${TEMP}"
sudo $MOUNT -o loop,uid=$UID -t vfat "${IMG}" "${TEMP}"
cp -f "${1}"/* "${TEMP}"/
sudo umount "${TEMP}"
rmdir "${TEMP}"
mv "${IMG}" .

Quick Domain Name / IP Address / MX Record Lookup Functions

Today's tip is once again focused on Bash functions (I have a whole bunch to share; they're just too useful :-) ). These are three quick and easy functions for performing DNS lookups:

ns - perform standard resolution of hostnames or IP addresses using nslookup; only resolved names/addresses are shown in the results

mx - perform MX record lookup to determine mail servers (and priority) for a particular domain

mxip - perform MX record lookup, but return mail server IP addresses instead of host names

Here are the functions:

# Domain and MX record lookups
#   $1 = hostname, domain name, or IP address
function ns() {
    nslookup "$1" | tail -n +4 | sed -e 's/^Address:[[:space:]]\+//;t;' -e 's/^.*name = \(.*\)\.$/\1/;t;d;'
}

function mx() {
    nslookup -type=mx "$1" | grep 'exchanger' | sed 's/^.* exchanger = //'
}

function mxip() {
    nslookup -type=mx "$1" | grep 'exchanger' | awk '{ print $NF }' | nslookup 2>/dev/null | grep -A1 '^Name:' | sed 's/^Address:[[:space:]]\+//;t;d;'
}

And finally, some examples:

$ ns <hostname>      # forward lookup
$ ns <ip-address>    # reverse lookup
$ ns <cname-host>    # cname example
$ mx <domain>        # mx lookup
$ mxip <domain>      # mx->ip lookup

Bash Random Password Generator

Random password generators are certainly nothing new, but they, of course, come in handy from time to time. Here's a quick and easy Bash function to do the job:

# Generate a random password
#  $1 = number of characters; defaults to 32
#  $2 = include special characters; 1 = yes, 0 = no; defaults to 1
function randpass() {
    [ "$2" == "0" ] && CHAR="[:alnum:]" || CHAR="[:graph:]"
    cat /dev/urandom | tr -cd "$CHAR" | head -c ${1:-32}
}

I use this a good bit myself; it can be as strong (or weak) as you need, and only uses core Linux/UNIX commands, so it should work anywhere. Here are a few examples to demonstrate the flags:

$ randpass
$ randpass 10
$ randpass 20 0
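The difference between the two character classes is easy to see by filtering a fixed string instead of /dev/urandom:

```shell
# [:alnum:] keeps only letters and digits; [:graph:] keeps every
# printable, non-whitespace character (including punctuation)
echo 'Pa$$w0rd!' | tr -cd '[:alnum:]'; echo    # -> Paw0rd
echo 'Pa$$w0rd!' | tr -cd '[:graph:]'; echo    # -> Pa$$w0rd!
```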

Get BIOS/Motherboard Info from within Linux

It's possible to read the BIOS version and motherboard information (plus more) from a live Linux system using dmidecode. This utility "reports information about your system's hardware as described in your system BIOS according to the SMBIOS/DMI standard. This information typically includes system manufacturer, model name, serial number, BIOS version, asset tag as well as a lot of other details of varying level of interest and reliability depending on the manufacturer." It can be handy if you want to check the BIOS version of your desktop and you're too lazy to reboot, but it's far more useful when trying to get information about production servers that you simply cannot take down.

Simply run dmidecode (as root) to get a dump of all available information. You can specify --string or --type to filter the results. The dmidecode man page is quite thorough, so I won't rehash it here.

One extremely useful application that may not be immediately obvious is the ability to pull the system serial number. Let's say you need to call support for a particular server that can't be taken down, or that you may not even have physical access to. A vendor like Dell will always want the system serial number, and as long as you can login to the server you can obtain the serial number with dmidecode -s system-serial-number. This has saved me on a couple of occasions with remotely hosted servers.

A lot more information is available through dmidecode, so I definitely encourage you to check it out. To wrap things up, I'll leave you with this obnoxiously long alias:

alias bios='[ -f /usr/sbin/dmidecode ] && sudo -v && echo -n "Motherboard" && sudo /usr/sbin/dmidecode -t 1 | grep "Manufacturer\|Product Name\|Serial Number" | tr -d "\t" | sed "s/Manufacturer//" && echo -ne "\nBIOS" && sudo /usr/sbin/dmidecode -t 0 | grep "Vendor\|Version\|Release" | tr -d "\t" | sed "s/Vendor//"'

This will spit out a nicely formatted summary of the BIOS and motherboard information, using sudo so it can be run as a normal user. Example output:

$ bios
Motherboard: Dell Inc.
Product Name: Latitude D620
Serial Number: XXXXXXXX
BIOS: Dell Inc.
Version: A10
Release Date: 05/16/2008


Generic Method to Determine Linux (or UNIX) Distribution Name

A while back I had a need to programmatically determine which Linux distribution is running in order to have some scripts do the right thing depending on the distro. Unfortunately, there doesn't appear to be one completely foolproof method to do so. What I ended up with is a combination of techniques that queries the LSB utilities, distro release info files, and kernel info from uname. It'll take the most specific distro name it can find, falling back to generic Linux if necessary. It'll identify UNIX variants as well, such as Solaris or AIX.

Here's the code:

# Determine OS platform
UNAME=$(uname | tr "[:upper:]" "[:lower:]")
# If Linux, try to determine specific distribution
if [ "$UNAME" == "linux" ]; then
    # If available, use LSB to identify distribution
    if [ -f /etc/lsb-release -o -d /etc/lsb-release.d ]; then
        export DISTRO=$(lsb_release -i | cut -d: -f2 | sed s/'^\t'//)
    # Otherwise, use release info file
    else
        export DISTRO=$(ls -d /etc/[A-Za-z]*[_-][rv]e[lr]* | grep -v "lsb" | cut -d'/' -f3 | cut -d'-' -f1 | cut -d'_' -f1)
    fi
fi
# For everything else (or if above failed), just use generic identifier
[ "$DISTRO" == "" ] && export DISTRO=$UNAME
unset UNAME

I include this code in my ~/.bashrc file so that it always runs when I login and sets the $DISTRO variable to the appropriate distribution name. I can then use that variable at any later time to perform actions based on the distro. If preferred, this could also easily be adapted into a function by having it return instead of export $DISTRO.
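If the function form is preferred, here's a minimal sketch of the same idea that echoes the name rather than exporting it, and falls back to the uname output when lsb_release is unavailable (the get_distro name is just an example):

```shell
# Print a best-effort OS/distro identifier on stdout
get_distro() {
    local os distro
    os=$(uname | tr '[:upper:]' '[:lower:]')
    if [ "$os" = "linux" ] && command -v lsb_release >/dev/null 2>&1; then
        distro=$(lsb_release -i | cut -d: -f2 | sed 's/^[[:space:]]*//')
    fi
    echo "${distro:-$os}"
}
get_distro
```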

I've tested this on a pretty wide range of Linux and UNIX distributions, and it works very well for me, so I figured I'd share it. Hope you find it useful.

Delete Old Files ONLY If Newer Files Exist

I discovered recently that one of my automated nightly backup processes had failed. I didn't discover this until about a week after it happened, and though I was able to fix it easily enough, I discovered another problem in the process: all of my backups for those systems had been wiped out. The cause turned out to be a nightly cron job that deletes old backups:

find /home/backup -type f -mtime +2 -exec rm -f {} +

This is pretty basic: find all files under /home/backup/ that are more than two days old and remove them. When new backups are added each night, this is no problem; even though all old backups get removed, newer backups are uploaded to replace them. However, when the backup process failed, the cron job kept happily deleting the older backups until, three days later, I had none left. Oops.
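One subtlety worth noting: find's -mtime counts whole 24-hour periods, so -mtime +2 matches files whose age, truncated to whole days, is greater than 2 (i.e., roughly three or more days old). A quick sandbox demonstration (assuming GNU touch for the -d flag):

```shell
# -mtime counts whole 24-hour periods: +2 means "3 or more days old"
mkdir -p /tmp/mtime-demo
touch -d '1 day ago'  /tmp/mtime-demo/young.tar
touch -d '4 days ago' /tmp/mtime-demo/old.tar
find /tmp/mtime-demo -type f -mtime +2    # matches only old.tar
rm -rf /tmp/mtime-demo
```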

Fortunately, this didn't end up being an issue as I didn't need those specific backups, but nevertheless I wanted to fix the process so that the cleanup cron job would only delete old backups if newer backups exist. After a bit of testing, I came up with this one-liner:

for i in /home/backup/*; do [[ -n $(find "$i" -type f -mtime -3) ]] && find "$i" -type f -mtime +2 -exec rm -f {} +; done

That line will work great as a cron job, but for the purpose of discussion let's break it down a little more:

1. for i in /home/backup/*; do
2.     if [[ -n $(find "$i" -type f -mtime -3) ]]; then
3.         find "$i" -type f -mtime +2 -exec rm -f {} +
4.     fi
5. done

So, there are three key parts involved. Beginning with step 2 (ignore the for loop for now), I want to make sure "new" backups exist before deleting the older ones. I do this by checking for any files that are younger than the cutoff date; if at least one file is found, then we can proceed with step 3. The -n test verifies that the output of the find command is "not null", hence files were found.

Step 3 is pretty much exactly what I was doing previously, ie., deleting all files older than two days. However, this time it only gets executed if the previous test was true, and only operates on each subdirectory of /home/backup instead of the whole thing.

This brings us neatly back to step 1. In order for this part to make sense, you must first understand that I back up multiple systems to this directory, each under its own subdirectory. So, I have:

/home/backup/server1/
/home/backup/server2/
/home/backup/server3/

If I just used steps 2 and 3 to operate on /home/backup directly, I could still end up losing backups. E.g., let's say backups for everything except server1 began failing. New backups for server1 would continue to get added to /home/backup/server1, which means a find command on /home/backup (such as my test in step 2) would see those new files and assume everything is just dandy. Meanwhile, server2, server3, etc. have not been getting any new backups, and once we cross the three day threshold all of their backups would be removed.

So, in step 1 I loop through each subdirectory under /home/backup, and then have the find operations run independently for each server's backups. This way, if all but server1 stop backing up, the test in step 2 will succeed on server1/, but fail on server2/, server3/, etc., thus retaining the old backups until new backups are generated.
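The whole per-directory behavior can be verified in a scratch tree by faking file ages with touch (GNU touch assumed for the -d flag); this uses the expanded if-form of the one-liner from the breakdown above:

```shell
# server1 has a fresh backup; server2's backups have stalled
mkdir -p /tmp/backup-demo/server1 /tmp/backup-demo/server2
touch -d '5 days ago' /tmp/backup-demo/server1/old.tar
touch                 /tmp/backup-demo/server1/new.tar
touch -d '5 days ago' /tmp/backup-demo/server2/old.tar

for i in /tmp/backup-demo/*; do
    if [[ -n $(find "$i" -type f -mtime -3) ]]; then
        find "$i" -type f -mtime +2 -exec rm -f {} +
    fi
done

ls /tmp/backup-demo/server1    # -> new.tar (old.tar was purged)
ls /tmp/backup-demo/server2    # -> old.tar (kept: no newer backup exists)
rm -rf /tmp/backup-demo
```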

And there you go: a safer way to clean up old files and backups.