In follow-up to a friend’s blog post on TrueCrypt, and in conjunction with some previous investigation and interests I have had, I am wondering how difficult it would be to run a tool like MooseFS in conjunction with TrueCrypt to provide a Wuala-like service as a plausibly-deniable data haven a la Cryptonomicon.
I’ve recently been attempting to understand how hard disk cache sizes affect performance (and whether it’s worth shelling out roughly twice as much for a drive with a 128MB cache versus one with just 64MB).
What would be the best way to personally investigate the performance differences to help determine which is better (if there’s even a noticeable difference)?
What benchmarking tools are best suited for such a task?
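For what it’s worth, the tools I would probably reach for first are fio and plain old dd. Here is a rough sketch of the kind of runs that might show a cache-size difference, if there is one – the mount point, file sizes, and run times are placeholders, not recommendations:

```
# Drive under test is mounted at /mnt/test (path is illustrative).
# Sequential writes with the page cache bypassed, so the drive's own cache is what gets exercised:
fio --name=seqwrite --filename=/mnt/test/fio.dat --rw=write --bs=1M --size=4G \
    --ioengine=libaio --direct=1 --runtime=60 --time_based

# Random 4K reads, which tend to show cache-size differences more clearly:
fio --name=randread --filename=/mnt/test/fio.dat --rw=randread --bs=4k --size=4G \
    --ioengine=libaio --direct=1 --iodepth=32 --runtime=60 --time_based

# Quick-and-dirty sequential write check with dd (fdatasync forces the data to actually hit the disk):
dd if=/dev/zero of=/mnt/test/dd.dat bs=1M count=4096 conv=fdatasync
```

Running the same jobs against both drives and comparing the throughput and latency numbers should show whether the bigger cache actually buys anything for a given workload.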
As promised, some follow-up to OLF.
In short, MooseFS provides better configurability than either Ceph or GlusterFS, runs with lower overhead, and offers more flexibility for their environment.
One of the other cool things Chris said was that they adopted the Backblaze Storage Pod design for their “green monster” storage units. Nice to see open source being leveraged not just in software, but in hardware, too 🙂
I have yet to find a peer-to-peer file storage system.
You’d think that with all the p2p and cloud services out there, there’d be a way of dropping files into a virtual folder and having them show up around the network (encrypted, of course) – replicated via some kind of RAID-over-WAN methodology.
I’d use it if it were built.
Recently a new proposal has been made for Digital Preservation. Many of the proposed questions are interesting (including one of mine) – and I would strongly encourage anyone interested in the topic to check it out.
The topic has re-sparked a question I have had for a long time – why is it important to archive data?
Not that I think it’s inherently bad to hold onto digital information for some period of time – but what is the impetus for storing it more-or-less forever?
In tech pop culture we have services like Google’s Gmail, which starts users off with a mind-boggling 7+ gigabytes of storage! For email! Who has 7GB of email that needs to be stored?! For a variety of reasons, I hold onto all of my work email for the duration of my employment with a given company – you never know when it might be useful (and it turns out it’s useful fairly frequently). But personal email? Really? Who needs anywhere near that much, or to hold onto it for that long? And those few people who arguably DO need that much, or to keep it forever, can afford to store it somewhere safely.
I think there is a major failing in modern thinking that says we have to save everything we can just because we can. Is storage “cheap”? Absolutely. But the hoard / “archive” mentality that pervades modern culture needs to be combated heavily. We, as a people, need to learn how to forget – and how to remember properly. Our minds are, more and more, becoming “googlized”. We have decided it’s more important to know how to find what we want than to know it. And for some things, this is good:
If you are a machinist, is it better to know how to reverse-thread the inside of a titanium pipe end-cap, or to go look up what kind of tooling and lathe settings you will need when you get around to making that part? I suppose that if all you ever do in life is mill reverse-threaded titanium pipe end-caps, you should probably commit that piece of information to memory.
But we need to remember to forget, too:
What about when you only need to make two of these things? Ever? In your entire life? In the entire history of every company you ever work for? Well, then I would say it’s better to go look up that particular datum when you need it – and then promptly forget it.
The historical value, interest, and painstaking work contained in the “Domesday Books” are remarkable – they have been of immense value to historians, archivists, politicians, and the general public. Various and sundry public records (census data, property deeds, genealogies, etc) are fantastic pieces to hold onto – and to make as available and accessible as possible.
Making various other archives available publicly is great too (e.g. the NYO&WRHS) – and I applaud each and every one of those efforts; indeed, I contribute to them whenever I can.
I continuously wonder, though, how many of these records and artifacts truly need to be saved – certainly it is true of physical artifacts that preservation is important, but how many copies of the first printing of Moby Dick do we need (to pick an example)?
I don’t know what the best answer is to digital hoarding, but preservation is a topic which needs to be considered carefully.
Last time I talked about storage robustifiers.
In the context of a couple applications with which I am familiar, I want to discuss ways to approach balancing storage types and allocations.
Core requirements of any modern server, from a storage standpoint, are the following:
- Base OS storage
- OS/application log storage
- Application storage
Of course, many more elements could be added – but these are the ones I am going to consider today. From a cost perspective, hardware is almost always the least expensive part of an application deployment – licensing, development, maintenance, and other non-“physical” costs will typically far outweigh the expenses of hardware.
RAM is comparatively cheap. Any modern OS will lap up as much memory as it is given, so I always try to err on the side of generosity.
After having found an instance where Swap Really Mattered™, I always follow the Red Hat guidelines which state that swap space should be equal to installed RAM plus 2 gigabytes. For example, if 16GB of RAM is installed, swap should at least equal 18GB. Whether swap should be on a physical disk or in a logical volume is up for debate, but do not chintz on swap! It is vital to the healthy operation of almost all modern systems!
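As a minimal sketch of what that looks like when swap lives in a logical volume (the RAM size and volume group name below are purely illustrative):

```
# Assuming 16GB of RAM and an existing volume group named vg_sys (name is illustrative):
lvcreate -L 18G -n lv_swap vg_sys     # installed RAM (16GB) + 2GB, per the Red Hat guideline
mkswap /dev/vg_sys/lv_swap
swapon /dev/vg_sys/lv_swap

# Make it persistent across reboots:
echo '/dev/vg_sys/lv_swap  none  swap  defaults  0 0' >> /etc/fstab

# Verify:
swapon -s
```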
This will vary on a per-platform basis, but a common rule-of-thumb is that Linux needs about 10GB for itself. Windows 2008 R2 requests a minimum of 20GB, but 40GB is substantially better.
Here is another wild variable – though most applications have pretty predictable log storage requirements. For example, HP’s Server Automation (HPSA) tool will rarely exceed 10GB in total log file usage. Some things, like Apache, may have varying log files depending on how busy a website is.
Lastly, and definitely most importantly, is the discussion surrounding the actual space needed for an application to be installed and run. On two ends of the spectrum, I will use Apache and HPSA for my examples.
The Apache application itself only requires a few dozen megabytes to install and run. The content served up by Apache can, of course, be a big variable – a simple, static website might only use a few dozen megabytes, whereas a big, complex website (like StackOverflow) might use a few hundred gigabytes (in total, counting any dynamically-generated content that might live in a database or similar)*.
The best way to address varying storage needs, in my opinion, is to use a robustifying tool like LVM – take storage presented to the server, and conglomerate it into a single mount point (perhaps /var/www/html) so that as content needs change, it can be grown transparently to the website.
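As a sketch of that approach (device names, volume names, and sizes below are illustrative, not prescriptive):

```
# A LUN or local disk presented to the server as /dev/sdc (name is illustrative):
pvcreate /dev/sdc
vgcreate vg_web /dev/sdc
lvcreate -L 50G -n lv_www vg_web
mkfs.ext4 /dev/vg_web/lv_www
mount /dev/vg_web/lv_www /var/www/html
# (plus a matching /etc/fstab entry so it survives a reboot)
```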
Likewise, with HPSA, there are several base storage needs – for Oracle, initial software content, the application itself, etc. Picking up on a previous post on bind mounts, I think it a Very Good Thing™ to present a mass of storage to a single initial mount point, like /apps, and then put several subdirectories in place to hold the “actual” application. Storage usage for the “variables” of HPSA – Software Repository, OS Media, Model Repository (database) – is very hard to predict, but a base guideline is that you need 100GB in total to start.
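A rough sketch of that bind-mount layout (the subdirectory names and target paths below are placeholders for illustration, not HPSA’s actual directory structure):

```
# One large volume mounted at /apps, with subdirectories standing in for the
# application's "real" locations (target paths are placeholders):
mkdir -p /apps/oracle /apps/software /apps/osmedia
mkdir -p /u01 /var/opt/app/software /var/opt/app/osmedia

mount --bind /apps/oracle   /u01
mount --bind /apps/software /var/opt/app/software
mount --bind /apps/osmedia  /var/opt/app/osmedia

# Or, persistently, in /etc/fstab:
# /apps/oracle    /u01                    none  bind  0 0
# /apps/software  /var/opt/app/software   none  bind  0 0
# /apps/osmedia   /var/opt/app/osmedia    none  bind  0 0
```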
Choosing Storage Types
This is how I like to approach storage allocation on a physical server (virtual machines are a whole other beast, which I’ll address in a future post) for HP Server Automation:
I firmly believe this should be put on local storage – ideally a pair of mirrored (RAID 1) 73GB drives. These could be SSDs to accelerate boot time, but otherwise the OS, per se, should not be getting “used” much. You could easily get away with 36GB drives, but since drives are cheap, using slightly more than you “need” is fine.
Again, following the Red Hat guidelines (plus my patented fudge growth factor), ideally I want swap to be on either a pair of 36GB or a pair of 73GB drives – not RAIDed (neither striping nor mirroring swap makes a great deal of sense). Yes, this means you should create a pair of swap partitions and present the whole shebang to the OS.
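For illustration, here is what a pair of standalone swap partitions looks like when both are activated at the same priority, which lets the kernel interleave across the two drives (device names are placeholders):

```
# Two swap partitions on separate drives (device names are illustrative):
mkswap /dev/sdb1
mkswap /dev/sdc1

# Equal "pri=" values let the kernel interleave across both drives:
# /etc/fstab
# /dev/sdb1  none  swap  sw,pri=10  0 0
# /dev/sdc1  none  swap  sw,pri=10  0 0

swapon -a     # activate everything listed in fstab
swapon -s     # confirm both are active at the same priority
```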
Maybe this is a little paranoid, but I like to have at least 30GB for log space (/var/log). I view logs as absolutely vital in the monitoring and troubleshooting arenas, so don’t chintz here!
HPSA has four main space hogs, so I’ll talk about them as subheadings.
It is important that the database has plenty of space – start at 200GB (if possible), and present it as a logically-managed volume group, preferably made up of one or more [growable] LUNs from a SAN.
Note: Thin provisioning is a perfectly acceptable approach to use, by the way (thin provisioning presents space as “available” but not yet “allocated” from the storage device to the server).
The application really doesn’t grow that much over time (patches and upgrades do cause growth, but they are pretty well-defined).
Since this is the case, carve 50-60GB and present it as a [growable] LUN via LVM to the OS.
Depending on data retention policies, number of distinct OS flavors you need to deploy, and a few other factors, it is a Good Idea™ to allocate a minimum of 40GB for holding OS media (raw-copied content from vendor-supplied installation ISOs). RHEL takes about 3.5GB per copy, and Windows 2008 R2 takes about the same. Whether this space is presented as an NFS share from a NAS, or as a [growable] LUN under an LVM group from a SAN isn’t vitally-important, but having enough space most certainly is.
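If the NAS route is taken, the mount itself is trivial – something like the following, where the server name and export path are made up for the example:

```
# NFS export from a NAS (host and export path are illustrative):
mkdir -p /apps/osmedia
mount -t nfs nas01:/export/osmedia /apps/osmedia

# /etc/fstab equivalent so it comes back after a reboot:
# nas01:/export/osmedia  /apps/osmedia  nfs  defaults  0 0
```

The SAN/LVM alternative looks just like the growable mount points sketched earlier, only with a different mount point underneath /apps.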
This is truly the biggest wildcard of them all – how many distinct packages do you plan to deploy? How big are they? How many versions need to be kept? How many target OSes will you be managing?
I prefer to start with 50GB available to the Library. But I also expect that usage to grow rapidly once the system is in live use – Software Libraries exceeding 300GB are not uncommon in my field. As with the OS Media discussion, it isn’t vitally-important whether this space is allocated from a NAS or a SAN, but it definitely needs to be growable!
Closing comments (on HPSA and storage)
If separate storage options are not available for the big hogs of SA, allocating one big LVM volume (made up of LUNs and/or DAS volumes) and then relying on bind mounts is a great solution – it avoids having to worry about any given chunk of the tool exceeding its bounds too badly, especially if other parts aren’t being used as heavily as might have been anticipated.
*Yes, yes, I know – once you hit a certain size, the presentation and content layers should be split into separate systems. For purposes of this example, I’m leaving it all together.
Today we’ll look at redundancy and bundling/clustering of storage as the start of a robust storage approach. Before I go any further, please note I am not a “storage admin” – I have a pretty broad exposure to the basic technologies and techniques, but the specifics of individual appliances, vendors, etc are beyond my purview 🙂
One of the oldest robustifiers is RAID – a Redundant Array of Inexpensive Disks. The basic theory is that you have a series of identical disks, and data is mirrored and/or striped across them to accelerate both writes and reads, and to provide a small level of data safety: if one drive in a mirrored set dies, a replacement can be swapped in, and no data will be lost (though performance will be degraded while the array is rebuilt).
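On Linux, a software mirror built with mdadm shows the idea nicely – device names here are illustrative:

```
# Create a two-drive mirror (device names are illustrative):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# If /dev/sdb later dies: mark it failed, pull it out, and add the replacement.
# The array rebuilds in the background (with degraded performance until it finishes):
mdadm --manage /dev/md0 --fail /dev/sdb
mdadm --manage /dev/md0 --remove /dev/sdb
mdadm --manage /dev/md0 --add /dev/sdd

# Watch the rebuild progress:
cat /proc/mdstat
```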
Another robustifier is logical volume management. Under Linux, the tool is called LVM2. A logical volume manager collects varied partitions (whether from LUNs or DAS, standalone or RAID), and presents them as a unified partition (or “volume”) to the OS. One of the benefits of LVM is that as new storage is added, the volumes can be expanded transparently to the system – that also means that any applications that are running on the system can be “fixed” (if, for example, they are running low on space) without the need to stop the application, shut off the server, add storage, start the server, figure out where to mount the new storage, and then start the application again.
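Carrying on the earlier sketch, growing such a volume while it stays online looks roughly like this (device names and sizes are, again, illustrative):

```
# New storage shows up as /dev/sdd; fold it into the existing volume group and
# grow the logical volume and filesystem while everything stays mounted and running:
pvcreate /dev/sdd
vgextend vg_web /dev/sdd
lvextend -L +100G /dev/vg_web/lv_www
resize2fs /dev/vg_web/lv_www    # ext4 supports online growth; no unmount, no application restart
```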
The main goal of robustifiers is to make a system more reliable, reduce downtime, and make it more performant overall. As such, they are one part of an overall strategy to improve application/system responsiveness and reliability.
Next time we’ll look at how to figure out how to balance types of storage, specifically in the context of a couple applications with which I am familiar.