fighting the lack of good ideas

automatically returning a host to the unprovisioned server pool in hpsa

In conjunction with the customized PXE process I wrote about previously, it can be highly desirable to be able to return a server to the unprovisioned server pool in HP’s Server Automation.

This is a Linux-specific procedure, though I’m sure something similar can be done with Windows*.

Run an ad-hoc script against a target server that contains the following:

# zero the first sector of the boot disk – the MBR and partition table
dd if=/dev/zero of=/dev/sda bs=512 count=1
sleep 1
# nohup lets the reboot fire even after the ad-hoc script’s session closes
nohup reboot

This will erase the MBR and partition table, and then reboot the server.
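If you want to convince yourself of what that dd invocation does before pointing it at a real disk, you can run the same command against a scratch file. This is just an illustration, nothing HPSA-specific; the only difference is the conv=notrunc flag, which a regular file needs (a block device like /dev/sda is never truncated, so the original command doesn’t need it):

```shell
# Create a scratch "disk": 1MB of random data standing in for an MBR plus partitions
scratch=$(mktemp)
dd if=/dev/urandom of="$scratch" bs=512 count=2048 2>/dev/null

# Same invocation as above, aimed at the scratch file; conv=notrunc keeps
# dd from truncating the file down to 512 bytes
dd if=/dev/zero of="$scratch" bs=512 count=1 conv=notrunc 2>/dev/null

# The first sector is now all zeroes – exactly what happens to the MBR
cmp -s -n 512 "$scratch" /dev/zero && wiped=yes && echo "first sector wiped"
rm -f "$scratch"
```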

Before it reboots, however, you need to deactivate and delete the server from SA – otherwise it will not register correctly.

If you’ve already enabled (or not disabled) PXE booting, when it reboots, it will pick the default entry off the PXE menu, skipping the hard drive as there is no valid boot record available to it.

Why would you want to do this?

Well, let’s say you’re doing a lot of build testing (verifying ks.cfg or unattend.xml files, for example) – this could be useful.

Or, maybe you want to get your build process completely streamlined and you’re working with the MBC functionality in SA – again, rapid recycling of machines is highly desirable.

In a later post I’ll discuss freeing the VM from SA in the process (i.e., removing it from the ESXi host to fully release resources).

*In fact, you may be able to run fdisk /mbr on a Windows server – but I haven’t tried.

storage strategies – part 4

Last time I talked about storage robustifiers.

In the context of a couple of applications with which I am familiar, I want to discuss ways to approach balancing storage types and allocations.

Storage Overview

Core requirements of any modern server, from a storage standpoint, are the following:

  • RAM
  • swap
  • Base OS storage
  • OS/application log storage
  • Application storage

Of course, many more elements could be added – but these are the ones I am going to consider today. From a cost perspective, hardware is almost always the least expensive part of an application deployment – licensing, development, maintenance, and other non-“physical” costs will typically far outweigh the expenses of hardware.


RAM

RAM is comparatively cheap. Any modern OS will lap up as much memory as it is given, so I always try to err on the side of generosity.


Swap

After having found an instance where Swap Really Mattered™, I always follow the Red Hat guidelines, which state that swap space should be equal to installed RAM plus 2 gigabytes. For example, if 16GB of RAM is installed, swap should equal at least 18GB. Whether swap should be on a physical disk or in a logical volume is up for debate, but do not chintz on swap! It is vital to the healthy operation of almost all modern systems!
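That guideline is trivial to script. A small sketch (the function name is mine, not anything from Red Hat’s tooling) that reads installed memory and prints the recommended minimum swap:

```shell
# Recommended swap per the RAM-plus-2GB guideline; input and output in KiB
recommended_swap_kb() {
    echo $(( $1 + 2 * 1024 * 1024 ))
}

# On a live box, feed it MemTotal from /proc/meminfo
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "swap should be at least $(recommended_swap_kb "$mem_kb") KiB"
```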

Base OS

This will vary on a per-platform basis, but a common rule-of-thumb is that Linux needs about 10GB for itself. Windows 2008 R2 requests a minimum of 20GB, but 40GB is substantially better.

OS/application logs

Here is another wild variable – though most applications have pretty predictable log storage requirements. For example, HP’s Server Automation (HPSA) tool will rarely exceed 10GB in total log file usage. Some things, like Apache, may have varying log files depending on how busy a website is.


Application storage

Lastly, and most importantly, is the discussion surrounding the actual space needed for an application to be installed and run. On two ends of the spectrum, I will use Apache and HPSA for my examples.

The Apache application only requires a few dozen megabytes to install and run. The content served up by Apache can, of course, be a big variable – a simple, static website might only use a few dozen megabytes, whereas a big, complex website (like StackOverflow) might use a few hundred gigabytes (in total, with any dynamically generated content that might be in a database or similar)*.

The best way to address varying storage needs, in my opinion, is to use a robustifying tool like LVM – take storage presented to the server, and conglomerate it into a single mount point (perhaps /var/www/html) so that as content needs change, it can be grown transparently to the website.
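As a rough sketch of that approach (device names, volume names, and sizes here are invented – adjust to your environment), conglomerating presented storage under /var/www/html with LVM looks like this:

```
# Take the disks presented to the server and make them LVM physical volumes
pvcreate /dev/sdb /dev/sdc

# Pool them into a single volume group
vgcreate vg_www /dev/sdb /dev/sdc

# Carve out a logical volume, leaving headroom in the pool
lvcreate -n lv_html -L 100G vg_www
mkfs.ext3 /dev/vg_www/lv_html
mount /dev/vg_www/lv_html /var/www/html

# Later, as content grows: add another disk and grow transparently
pvcreate /dev/sdd
vgextend vg_www /dev/sdd
lvextend -L +100G /dev/vg_www/lv_html
resize2fs /dev/vg_www/lv_html    # ext3 grows online; the website never notices
```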

Likewise, with HPSA, there are several base storage needs – for Oracle, initial software content, the application itself, etc. Picking up on a previous post on bind mounts, I think it a Very Good Thing™ to present a mass of storage to a single initial mount point, like /apps, and then put several subdirectories in place to hold the “actual” application. Storage usage for the “variables” of HPSA – Software Repository, OS Media, Model Repository (database) – is very hard to predict, but a base guideline is that you need 100GB in total to start.

Choosing Storage Types

My recommendations

This is how I like to approach storage allocation on a physical server (virtual machines are a whole other beast, which I’ll address in a future post) for HP Server Automation:

Base OS

I firmly believe this should be put on local storage – ideally a pair of mirror-RAIDed 73GB drives (these could be SSDs to accelerate boot time, but otherwise the OS, per se, should not be “used” much). You could easily get away with 36GB drives, but since drives are cheap, using slightly more than you “need” is fine.


Swap

Again, following the Red Hat guidelines (plus my patented fudge growth factor), ideally I want swap to be on either a pair of 36GB or a pair of 73GB drives – not RAIDed (neither striping nor mirroring swap makes a great deal of sense). Yes, this means you should create a pair of swap partitions and present the whole shebang to the OS.

OS/application logs

Maybe this is a little paranoid, but I like to have at least 30GB for log space (/var/log). I view logs to be absolutely vital in the monitoring and troubleshooting arenas, so don’t chintz here!


Application storage

HPSA has four main space hogs, so I’ll talk about them as subheadings.


Database

It is important that the database has plenty of space – start at 200GB (if possible), and present it as a logically-managed volume group, preferably made up of one or more [growable] LUNs from a SAN.

Note: Thin provisioning is a perfectly acceptable approach to use, by the way (thin provisioning presents space as “available” but not yet “allocated” from the storage device to the server).
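When the SAN team grows an existing LUN rather than presenting a new one, the volume group can absorb the new space in place. A sketch, with invented device and volume names (run as root, and substitute your own):

```
# Tell the kernel to re-read the (now larger) LUN
echo 1 > /sys/block/sdb/device/rescan

# Grow the physical volume to fill the LUN, then the logical volume,
# then the filesystem on top of it
pvresize /dev/sdb
lvextend -L +50G /dev/vg_oracle/lv_data
resize2fs /dev/vg_oracle/lv_data
```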

Core application

The application really doesn’t grow that much over time (patches and upgrades do cause growth, but they are pretty well-defined).

Since this is the case, carve 50-60GB and present it as a [growable] LUN via LVM to the OS.

OS Media

Depending on data retention policies, the number of distinct OS flavors you need to deploy, and a few other factors, it is a Good Idea™ to allocate a minimum of 40GB for holding OS media (raw-copied content from vendor-supplied installation ISOs). RHEL takes about 3.5GB per copy, and Windows 2008 R2 takes about the same. Whether this space is presented as an NFS share from a NAS, or as a [growable] LUN under an LVM group from a SAN, isn’t vitally important – but having enough space most certainly is.

Software Library

This is truly the biggest wildcard of them all – how many distinct packages do you plan to deploy? How big are they? How many versions need to be kept? How many target OSes will you be managing?

I prefer to start with 50GB available to the Library. But I also expect that usage to grow rapidly once the system is in live use – Software Libraries exceeding 300GB are not uncommon in my field. As with the OS Media discussion, it isn’t vitally important whether this space is allocated from a NAS or a SAN, but it definitely needs to be growable!

Closing comments (on HPSA and storage)

If separate storage options are not available for the big hogs of SA, allocating one big LVM volume (made up of LUNs and/or DAS volumes) and then relying on bind mounts is a great solution – it avoids worrying about any given chunk of the tool exceeding its bounds too badly, especially if other parts aren’t being used as heavily as might have been anticipated.

*Yes, yes, I know – once you hit a certain size, the presentation and content layers should be split into separate systems. For purposes of this example, I’m leaving it all together.

binding your mounts

Over the past several years, I have grown quite fond of the ability to do bind mounts on Linux.

First, a little background. Most applications have specific directory structure requirements. Many have wildly varying space requirements depending on how a given install is utilized. For example, HPSA can use anywhere from 40-400-4000 gigabytes of space for its deployable Software Library, depending on a given customer’s individual needs. Likewise, the database can be quite small (under 15GB), or massive (exceeding 1TB) – again depending on usage patterns and use cases. Because of varying space and directory requirements, an old sysadmin trick is to use symbolic links to create fake directories so that an application can take advantage of the structure it needs, but the admins can keep their file systems structured the way they prefer (sometimes this is a side effect of various corporate IT standards).

This is cool because it allows all logs for all non-standard applications to be housed in a common locale – say, symlinking each application’s log directory under /var/log to a real directory under /apps/logs.

The drawback is that if the application checks on the location of its logs and they’re not *really* at /var/log/appname, it may fail to start. When you look at the details of a symlink, it shows that it is merely a pointer to a different place and is not, in fact, a directory. For example, if you have a symlink at /var/log/appname that really points to a directory at /apps/logs/appname, the symlink does not have its first bit set to ‘d‘, because it is not a directory – it is set to ‘l‘. That can be a problem.
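You can see that distinction directly. Here is a throwaway sketch using a temp directory in place of the real paths:

```shell
# Stand-ins for /apps/logs/appname and /var/log/appname
base=$(mktemp -d)
mkdir -p "$base/apps/logs/appname"
ln -s "$base/apps/logs/appname" "$base/var-log-appname"

# The first character of the mode is 'l', not 'd'
first=$(ls -ld "$base/var-log-appname" | cut -c1)
echo "$first"    # prints: l

# A picky application asking "is this really a directory?" can tell
[ -L "$base/var-log-appname" ] && echo "just a symlink"
rm -rf "$base"
```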

How can the dilemma be solved without creating separate partitions for each application’s logs? (After all, avoiding that is why we have /apps/logs.)

Enter mount --bind. A bind mount takes an existing directory path (which might be a mount point itself) and remounts it to a new location. Like this: mount --bind /apps/logs/appname /var/log/appname.

This also effectively treats the ‘from’ path as a partition.

And, since it *is* a directory, when the application checks on the location of its logs, it will not fail.
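A quick illustration with the paths from the example above (mount requires root, so this is a sketch rather than something to paste blindly):

```
mkdir -p /apps/logs/appname /var/log/appname
mount --bind /apps/logs/appname /var/log/appname

# Unlike the symlink, the bind-mounted path really is a directory –
# the first bit here is 'd', not 'l'
ls -ld /var/log/appname

# ...and it shows up as a mount in its own right
mount | grep '/var/log/appname'
```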

When combined with growable space (the subject of a future post) at /apps, this provides a very flexible approach to varied storage requirements for applications.

The final component of properly utilizing bind mounts is to add the bind to the file system table, fstab (/etc/fstab):

/path/to/orig /path/to/new none bind 0 0

Specifically, in the context of the tool I run on a day-to-day basis, HP’s Server Automation, here is what I tend to do for a simple install, as it allows any given sub-set of the product to use as much space as it needs, without being overly tied to specific partitions and mount points:

/dev/{device-name} /apps/hpsa ext3 defaults 0 0
/apps/hpsa/u01 /u01 none bind 0 0
/apps/hpsa/u02 /u02 none bind 0 0
/apps/hpsa/u03 /u03 none bind 0 0
/apps/hpsa/u04 /u04 none bind 0 0
/apps/hpsa/varoptoracle /var/opt/oracle none bind 0 0
/apps/hpsa/etcoptopsware /etc/opt/opsware none bind 0 0
/apps/hpsa/varlogopsware /var/log/opsware none bind 0 0
/apps/hpsa/varoptopsware /var/opt/opsware none bind 0 0
/apps/hpsa/optopsware /opt/opsware none bind 0 0
/apps/hpsa/media /media/opsware none bind 0 0

Next time I’ll cover strategies for storage allocation.