Hebrides: How to Build a Virtual Beowulf Cluster

by Maura Warner


What Is the Hebrides Cluster?

Hebrides is a virtual Beowulf cluster that currently resides in one of the research labs in the science building at Macalester College. It is a small cluster, with only six nodes, counting the master. Every node is a virtual machine, powered by VirtualBox, running on a different Linux box. The host machines are quad-core, and each VM takes over two of those cores for the cluster's use. We do not have a special network for the cluster: it does not have its own ethernet switch and set of cables. Rather, our nodes simply communicate over the LAN in the science building. The machines are already all connected to the same network; the cluster piggybacks on the pre-existing infrastructure. Every virtual machine has its own static IP address, making this communication possible. Hebrides runs Torque, an open source resource manager and job scheduler provided and maintained by Cluster Resources, Inc. All of our nodes are running Ubuntu 10.04 Lucid Lynx as of this writing.

Installing & Configuring VirtualBox

Downloading VirtualBox
Go to VirtualBox.org > Downloads and choose the correct download for your operating system and machine architecture. VirtualBox will run you through an installation wizard; it's very self-explanatory. For more precise instructions, see the User's Manual here: http://www.virtualbox.org/manual/ch02.html
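If your host machines run Ubuntu, as ours do, you can also install VirtualBox straight from the repositories instead of from the website (the repository package was called virtualbox-ose on Lucid, and it may lag behind the version on VirtualBox.org):

sudo apt-get update
sudo apt-get install virtualbox-ose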

Creating your virtual machines

VirtualBox will run you through a setup wizard to create your Virtual Machines. Be advised: they do not come with the operating systems you desire automatically installed! This means that if you want to install Windows XP on a VM, you are going to have to acquire a copy of Windows XP to install. I installed Ubuntu on my VMs by downloading a .iso image of the Ubuntu installation CD-ROM from their web site and mounting it using the Ubuntu Archive Mounter to make it accessible to my VM. To install an operating system using a mounted .iso image, boot up your VM. It will give you an error message to the effect of "FATAL! No bootable drive available!", which is okay. Go to Devices > CD/DVD Devices, select your mounted CD image, and reboot the VM. (Reboot it using Machine > Reset. This is a hard reboot, which is fine because there is no operating system to mess up yet.) Your VM will run the installation procedure from your mounted .iso file, and your operating system should install without a hitch. However, if you have made your virtual hard drive too small, the operating system will complain, so be careful!
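If you would rather script this than click through the wizard (useful when you are creating five near-identical nodes), VBoxManage can do the same work from the command line. The sketch below is not what I actually ran -- the VM name, memory size, disk size, and .iso filename are placeholders for whatever you chose in the wizard:

VBoxManage createvm --name "node1" --ostype Ubuntu --register
VBoxManage modifyvm "node1" --memory 1024 --cpus 2
VBoxManage createhd --filename node1.vdi --size 20000       # size is in MB
VBoxManage storagectl "node1" --name "IDE" --add ide
VBoxManage storageattach "node1" --storagectl "IDE" --port 0 --device 0 --type hdd --medium node1.vdi
VBoxManage storageattach "node1" --storagectl "IDE" --port 1 --device 0 --type dvddrive --medium ubuntu-10.04-desktop-i386.iso
VBoxManage startvm "node1"     # boots from the attached .iso and runs the installer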
Now, a few nitty-gritty notes on your VMs. As the VirtualBox manual will tell you, when you first create a VM, it comes without the 'Guest Additions'. These additions do a bunch of things to improve or augment the functionality of your VM. You will see your first concrete evidence of this when you create your first VM and, voila, your screen is 800x600 pixels no matter what you do to the display settings. In order to make your screen fit your monitor, you must download and install the Guest Additions. The VirtualBox manual will tell you how to do this. (When you have the Guest Additions on your machine, install them using the Autorun prompt.)

Setting your node VM to run at startup

I spent a long time trying to get this to work. Lucky for you, it mostly took me forever because I am unfamiliar with UNIX and don't know anything about shell scripts -- and happily, now you don't need to know either (yet). There is actually an excellent tool that some other upstanding citizen wrote, which I am using and which works very well for the VMs. It's a bit hack-ish, but nevertheless effective. It's called vboxtool, and you can download it here: http://vboxtool.sourceforge.net/
He provides installation instructions in a README file that you download along with the scripts. Follow his instructions to the letter and the script will work. Be advised: I couldn't get it to work at first because I put two vbox_users in vboxtool.conf. Do not do this. Only one vbox_user can go in vboxtool.conf; multiple entries will overwrite each other and break vboxtool. Use your future cluster administrator account (or you could do a dry run, like I did, but be sure to use the cluster administrator account on your host machine when you do it for real, or you'll be sorry).
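For what it's worth, the relevant part of the configuration is a single vbox_user entry naming the account that owns the VMs. The exact file location and syntax are spelled out in the vboxtool README -- treat this as an illustration of the one-user rule, not as a substitute for reading it:

# vboxtool.conf -- exactly one vbox_user line, naming the cluster administrator account
vbox_user='clusteradmin'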
Our host machines have been set up to run the VMs off of their second, 1 terabyte hard drives. These are secondary drives, added manually after we bought the computers, and do not automatically mount at boot. You must configure these second hard drives to mount at boot, or vboxtool won't be able to auto-start your VMs. To do this, follow the (very clear and simple) instructions in this blog entry: http://www.nyutech.com/2009/03/make-ubuntu-mount-partitions-and-drives.html In case that entry has been eaten by the archive monster by the time you read this, here is a short version for your reading pleasure:
  1. Identify your drives. Enter sudo fdisk -l in your Terminal window. This did not work perfectly for me -- I partitioned my second hard drive (sdb; the exact name will depend on your system) and was running my VM's hard drive from /dev/sdb1. However, sdb1 didn't show up when I ran this command, probably because it wasn't mounted. If you need to do this step, go for it, although it may give you some trouble (so don't be afraid to experiment with mounting /dev/sdb1, /dev/sdb2, etc., depending on how many partitions you have on the drive); otherwise, it's probably easier to just know the name and filepath of your partition.
  2. Create mount points. I mounted my partition in /media. You can sudo mkdir or you can open up a nautilus session (gksudo nautilus) and do it through the GUI. Create a folder in /media (or, you know, wherever) to which you will mount your partition. I created /media/cluster/ on every single one of my host machines.
  3. Edit your mount table. In other words, you need to edit /etc/fstab. You need root privileges to do this, so open the file by typing sudo gedit /etc/fstab on your command line. (Substitute your preferred text editor -- e.g., emacs, nano, etc. -- for gedit if you like.) You will need to create a new line for every drive or partition that you want to mount at startup, containing the following information: name of the drive, mount point, format of the disk, options, dump, and pass. My drive was /dev/sdb1; my mount point was the folder I made in step 2 -- /media/cluster/; the format of my disk was the format I chose when building my partition (ext3 in this case). For options, I put defaults. For dump and pass I put 0 and 0. Ultimately, your line should look like this:
    • /dev/sdb1 /media/cluster ext3 defaults 0 0
  4. Save and close /etc/fstab. If you've done this right, you should be finished. Test it out by typing sudo mount -a into your command line. If the drives you specified in fstab mounted, then you're done!
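To recap steps 1-4 as commands, using the device name and mount point from my machines (substitute your own):

sudo mkdir /media/cluster      # step 2: create the mount point
sudo gedit /etc/fstab          # step 3: add the line shown above, i.e.
#   /dev/sdb1  /media/cluster  ext3  defaults  0  0
sudo mount -a                  # step 4: mount everything listed in fstab
df -h /media/cluster           # confirm the partition is mounted where you expect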

Now your node runs in the background at boot! Yay!

Adding VMs to the LAN -- Static & Dynamic IP Addresses

You need your VMs to be connected to the LAN if you want them to be able to talk to each other. Without this connection, you are not going to be able to make a cluster without forwarding about 4000 ports (literally) per virtual machine. It's not pretty. So we don't do that -- we bridge.

VirtualBox and Bridging

VirtualBox has an automatic network bridging interface and will do a lot of the work for you. With your VM powered off, open its Settings menu and go to Settings > Network. Check the box marked 'Enable Network Adapter'. Under 'Attached To', select 'Bridged Adapter'. You will get a 'Name' drop-down menu. Select the name of the host's network interface (probably eth0). Click OK. Now boot up your VM. You should have an Internet connection and an IP address assigned by your DHCP server (type ifconfig on the command line to check this -- it will appear under eth0, marked as the 'inet addr'), and you should be able to ssh to and ping other machines on your LAN.
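The same thing can be done without the GUI, with the VM powered off; "node1" and eth0 are placeholders for your VM's name and your host's interface:

VBoxManage modifyvm "node1" --nic1 bridged --bridgeadapter1 eth0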

Note that this automatically assigned IP address, while probably within a range, may not be static. It may change periodically depending on the other machines on your network and the whims of your DHCP server. Consult your system administrator on this point, but you will probably need to set up a static IP address for your VM.

If your VM does not have an Internet connection, check /etc/hosts. For whatever reason, it may not have all the information it needs to connect to the Internet on your LAN. Go to another computer -- one that is connected to the Internet -- and copy the relevant parts of its /etc/hosts file into the one on your Internet-less VM. Reboot the VM. It should now have Internet access.
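Concretely, you want /etc/hosts on every node to know localhost and, eventually, every machine in the cluster by name (this will matter again when you set up Torque). The hostnames and addresses below are made-up stand-ins for your own:

127.0.0.1        localhost
141.140.167.226  hebrides-master
141.140.167.227  hebrides-node1
141.140.167.228  hebrides-node2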


Assigning Your VM a Static IP Address

This ought to be very, very easy. However, VMs are whimsical things, and this is Linux, so there are some odd bugs. The first thing you need to do is edit /etc/network/interfaces in nano (or vi or emacs or whatever). This is a text file containing the directives that tell your computer what to do with its various network connections. Pull up the file in a terminal (with root privileges) and add the following lines:

auto eth0
iface eth0 inet static
address 141.140.167.226
netmask 255.255.255.0
network 141.140.0.0
broadcast 141.140.167.255
gateway 141.140.167.254
dns-nameservers 141.140.1.4

Your IP addresses will obviously be different from mine. Consult your system administrator about assigning a static IP to VMs on your LAN. You can find the other values using ifconfig (netmask, broadcast), route (gateway), and cat /etc/resolv.conf (dns-nameservers). The network address is your subnet's base address -- on our LAN that is X.X.0.0, the X's being the first two numbers of every IP address on the network, but it depends on your netmask. If you are having trouble finding this information, consult your system administrator. (The address field is the one you assign to the VM yourself -- that is the static IP address.)
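In practice, that means running these commands on a machine that is already on the LAN and reading the values off the output:

ifconfig eth0          # netmask ('Mask') and broadcast ('Bcast')
route -n               # gateway: the 'Gateway' column on the 0.0.0.0 line
cat /etc/resolv.conf   # dns-nameservers: the 'nameserver' lines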
The next step is to reset your ethernet connection to bring these changes into effect. Type sudo ifdown eth0, then sudo ifup eth0. In a perfect world, this will just shut eth0 down and then bring it back up. You may get a couple of different errors. In response to ifdown eth0, you may learn to your surprise that eth0 is unknown or not configured. This is because eth0 is already down. Ignore it. In response to ifup eth0, you may get a very weird error about /etc/postfix/main.cf or similar. Postfix is a mail transfer agent that Ubuntu uses for local system mail. To get rid of this error, run sudo dpkg-reconfigure postfix. Set postfix up however you want -- you will never use it. Do NOT select "No Configuration" in the menu, as that will not reconfigure anything. After reconfiguring postfix, try running ifup again -- it should work. Your VM should now have all the network access that it did before, but its IP address will now be static.
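The whole dance, in order, looks like this (the dpkg-reconfigure step is only needed if ifup complains about postfix):

sudo ifdown eth0                 # may complain that eth0 is unknown or not configured -- ignore it
sudo ifup eth0                   # may fail with an error about /etc/postfix/main.cf
sudo dpkg-reconfigure postfix    # pick anything except "No configuration"
sudo ifup eth0                   # should now bring eth0 up with the static address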

Installing Torque

Installing Torque on Ubuntu 10.04 Lucid Lynx

Ubuntu and Torque are not the best of friends. Torque is a job scheduler and resource manager, the crucial piece of software to get your cluster up and running. Unfortunately, most versions of Torque will not compile on Ubuntu. The version of Torque available through Synaptic (also through apt-get, presumably) is Torque 2.3.6, which will compile. One could theoretically apt-get Torque, but Torque requires that you create self-extracting packages to put on the compute nodes, and it has a specific command for this, which you execute in the directory where you unpacked and built the source. It may be possible to do this if you install Torque using apt-get, but we did not try it. Rather, we followed Torque's Installation Guide in their Admin Manual http://www.clusterresources.com/torquedocs21/1.1installation.shtml and downloaded the source for version 2.3.6 from their Downloads page http://www.clusterresources.com/downloads/torque/ (It is worth noting that Torque's developers seem to update their software roughly once a month, for some strange reason.) We tried to install 2.3.10, 2.4.8, and 2.5.1, none of which compiled. 2.3.6 did compile successfully from the source in the .tar.gz file -- presumably, this is the most recent version that will compile on Ubuntu, which is why it is available via Synaptic. Installing it from the .tar.gz file on Torque's website allows us to run make packages, which is not obviously possible when you apt-get the program. This command creates the packages that we need to install on the VMs in order to run the cluster.

You may configure Torque with any specific options that you desire (see the Admin Manual for details on configuration options). We recommend configuring with --enable-syslog for enhanced logging, which will aid in debugging Torque.
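Put together, the build on the master goes roughly like this. The configure flags are our best reconstruction -- --with-server-home is how you point $TORQUE_HOME at /var/spool/torque -- so run ./configure --help to confirm the option names in your version:

tar -xzvf torque-2.3.6.tar.gz
cd torque-2.3.6
./configure --enable-syslog --with-server-home=/var/spool/torque
make
sudo make install
make packages        # builds the self-extracting .sh packages for the compute nodes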

Setting up the PBS Server

Torque's admin manual provides specific instructions for setting up the PBS server on the master node. We had some strange problems, mainly to do with administrative privileges, but also a few to do with ports not being open. Be sure to open all the ports listed in the next section. When in doubt, run everything as root. However, do not set your PBS manager to root -- make it the user account you are using to administer the cluster. If you are having trouble contacting your server daemon on the master host, check /etc/hosts. If the master host is not listed by name (not just as localhost) in /etc/hosts, corresponding to its own IP address, you will not be able to manage your server daemon. Note also that, in order to add execution hosts to your cluster, you must have them all listed by name with their IP addresses in the master host's /etc/hosts file. Likewise, each execution host must know the master host's name and IP address in /etc/hosts, or the nodes will not be able to communicate.
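For reference, here is the kind of qmgr session the Admin Manual walks you through on the master host. The account name clusteradmin and the hostname hebrides-master are placeholders for your cluster administrator account and your master node's name:

sudo pbs_server -t create        # initialize a fresh server database (first run only)
sudo qmgr -c "set server operators += clusteradmin@hebrides-master"
sudo qmgr -c "set server managers += clusteradmin@hebrides-master"
sudo qmgr -c "create queue batch queue_type=execution"
sudo qmgr -c "set queue batch started=true"
sudo qmgr -c "set queue batch enabled=true"
sudo qmgr -c "set server default_queue=batch"
sudo qmgr -c "set server scheduling=true"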

Getting Torque to connect to other machines

Make sure that you open the following ports on all nodes, exec and master alike:


1023
15001
15002
15003
15004

These are Torque's default ports for its various daemons. Make certain that all of your nodes have them open, that all compute nodes allow the master node access through them, and that the master node likewise allows access to all the compute nodes.
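With ufw, opening those ports to one particular node looks something like this -- 141.140.167.227 is a stand-in for a real node's address. Run it on the master once per compute node, and on each compute node with the master's address. If the nodes still cannot see each other, repeat the rules with proto udp, since some Torque traffic is UDP:

sudo ufw allow proto tcp from 141.140.167.227 to any port 1023
sudo ufw allow proto tcp from 141.140.167.227 to any port 15001:15004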

The self-extracting packages that you created using make packages are specifically designed for your non-master nodes. For plain compute nodes (as opposed to compute nodes that are also submit hosts), you only need to install the torque-package-mom file. The Admin Manual gives you directions for doing this using dsh, but it works just as well to scp the file onto the computer you want and then run on the command line: sudo /file/path/torque-package-mom ... .sh --install

This should work perfectly. (Be advised: you must install the .sh packages, not the .tar.gz packages. The .tar.gz packages appear in a folder called tpackages inside your installation directory; ignore them. The .sh packages will be in the installation directory itself, and they are the self-extracting packages that you are meant to use on your compute nodes.)
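In practice, the copy-and-install step looks like this for each compute node. The package filename depends on your architecture (ours is just an example), and node1 stands in for the node's hostname:

scp torque-package-mom-linux-i686.sh node1:/tmp/
ssh node1
sudo /tmp/torque-package-mom-linux-i686.sh --install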

Once the daemons are installed on the compute nodes, go back to the master node and follow the instructions in Torque's Admin Manual for adding new nodes. In brief:

  1. Add the relevant line to $TORQUE_HOME/server_priv/nodes -- see the Admin Manual for how to format the line, and the example after this list. (I made my own nodes file; my $TORQUE_HOME is /var/spool/torque/ because I specifically configured Torque that way -- see Installing Torque above for more information.)
    1. Note that the name of the node must be the machine's name: it must be affiliated with the machine's IP address in your /etc/hosts file.
  2. Run pbs_server.
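For a cluster shaped like ours, the nodes file contains one line per compute node; np is the number of processors Torque may use on that node (two, in our case, since each VM has two cores). The hostnames are placeholders:

hebrides-node1 np=2
hebrides-node2 np=2
hebrides-node3 np=2
hebrides-node4 np=2
hebrides-node5 np=2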

The machines should connect! You can check this by entering pbsnodes on the master host command line. If your nodes appear as "down" in the output of pbsnodes, check their /etc/hosts files. They may not have the name and IP address of the master host, making them unable to contact it.


Setting up NFS

You will need NFS running in order to run Torque jobs. You must share a directory from your master node with all of your compute nodes -- this directory will contain the code that the compute nodes are to execute. You will need to start by editing your firewall (sudo ufw) to let the NFS daemons into your VMs. See this link https://help.ubuntu.com/6.06/ubuntu/serverguide/C/network-file-system.html for more information.
NFS's portmapper daemon runs most of the NFS daemons on random ports. Unfortunately, the range of ports used is very large, and all of them must be open on both the exporting and receiving machines in order for NFS to work. We are currently using an insecure, stopgap solution on Hebrides: we are allowing all of the exec hosts complete access to the master host's ports, and vice versa. We do not intend to maintain this as the status quo indefinitely, but until we come up with something better, we have executed the following command to allow the nodes through each others' firewalls:

sudo ufw allow from <ip address>


On the exec hosts, you only need to run this once, and the IP address must be that of the master host. On the master host, you will need to do this for every exec host, or your server will not be able to contact the exec hosts and vice versa.
Note that if you do this, you do not need to open specific ports to allow Torque to communicate between nodes.
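Once the firewalls are sorted out, the sharing itself is two steps: export a directory on the master and mount it on every exec host. A minimal sketch, assuming the shared directory is /home/cluster, the master is named hebrides-master, and the LAN is 141.140.167.0/24 -- all placeholders:

# on the master host
sudo apt-get install nfs-kernel-server
echo '/home/cluster 141.140.167.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra

# on each exec host
sudo apt-get install nfs-common
sudo mkdir -p /home/cluster
sudo mount hebrides-master:/home/cluster /home/cluster
# or, to mount at boot, add this line to /etc/fstab:
#   hebrides-master:/home/cluster  /home/cluster  nfs  defaults  0  0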

Running Jobs

We haven't successfully run a job yet. Watch this space.