antipaucity

fighting the lack of good ideas

the gold standard

This is going to ramble a bit, and I’m not 100% sure my opinions are even remotely reasonable, but I had a great conversation on the Gold Standard recently, and thought sharing that would be fun. The quoted sections are relevant parts of the conversation from my friend*, and the unquoted segments are my responses.

I’ve seen pundits, or as I call them, “blowhards”, on both sides of the aisle claim that even suggesting we return to the gold standard is madness. Maybe it’s just because I’m not an economist, but I don’t understand that - we had like 3-4 thousand years of experience with the gold standard; we have less than a century with floating currencies. Why is it so crazy to say “we keep getting into trouble this way, maybe we should fall back until we’ve got this figured out better”?

The only issue I can see with returning to the gold standard (or the silver or any other), is that since gold is by nature a finite resource (whereas many others, while finite, are growable (eg crops, industry, etc)), there would be no reason to have an “exchange” between different countries, and that it would make an effective universal currency – to some extent, undervaluing the currency systems of every country that chose to use the standard, and globalize (even more) our economies.

Plus, it makes issuing loans (and receiving them) substantially more difficult – and loans are NOT always a Bad Thing™.

For example, if, say, Canada and the US use the gold standard and decide that $1000 US is one ounce of gold, and $500 Canadian is one ounce of gold, the exchange “rate” is fixed – and it’s fixed to something that is increasable, but only at a fairly fixed rate (how fast you can acquire/generate gold). Whereas if you have floating currencies with no “real” backing, exchange rates can change based on the relative health of each country.

The problem with setting a fixed exchange (which is what the gold standard would do … like what China has been doing to the US for years, but only on paper) is that it can cyclically under- and over-value individual countries’ currencies and economies, ultimately bringing more down at once when a few fail (or, of course, the reverse – bringing more up when a few major ones succeed).

During the early part of the last century, we were still mostly working with gold, and we had both the Great Depression, and some of the greatest economic growth ever seen.

The Great Depression, imho (though, admittedly, a biased and fairly-underinformed opinion), was about 90% perception and 10% reality – a fairly commonplace occurrence in economics, but one that was exacerbated by the [initially] fixed relations between the various national currencies.

Also, imho, basing currency on a fixed standard (like gold) was truly only viable in an era of poor communication (I’d personally argue that shortly after the telegraph became more than a technological marvel, this became more and more true until it was universal) - with poor communication, perceptions take a LONG time to be transferred – which correspondingly means that “news” was A) old, and B) taken with larger grains of salt than we *tend* to take it with now, since communication is [effectively] instantaneous. {I have no research or citations to this point – yet: it is currently only my opinion.}

In my opinion, by floating currencies against each other and the relative strengths of each country involved, crashes are slowed (not eliminated, of course). Of course, again, the reverse is also true – booms are flattened-out. So, I’d view the floating-currency approach as one that will *tend* to flatten local (and global) booms and busts into substantially smaller ripples, rather than major mountains and troughs.

Not being an economist, I’m not sure I understand why that should be. What’s the evidence for that? How can we be certain that’s a good thing? Maybe the cycle of mountains and valleys is important, socially?

[not being an economist either,] I’d *think* that it would be better to keep the mountains and valleys more stable / less high|deep, so that slowdowns aren’t felt by a disproportionately small community/niche of the economy, and so that speed-ups can have a “good neighbor” effect to bringing some of the underperforming sectors ‘along for the ride’.

As to question 2 – I don’t *know* that it’s “a good thing” … but I also can’t say it’s a ‘bad thing’, either.

I’d be happy to hear arguments that either oppose mine, or are different 🙂

I think my strongest one would be that we used the gold standard for like 4000 years, and it seemed to work very well for us nearly globally during that time. As far as any possible good neighbor effect from flattening things out, it may be that social upheaval is an important component of progress: almost all major advances thus far in history have had some component of socio-economic shift; flattening those upheavals could very well have consequences that we can’t foresee (I realize this is a weak, speculative argument, but it’s worth considering).

There’s also part of me that feels like floating currencies are so much handwaving and voodoo. You talked about how they allow countries to create exchanges that are tied to their relative health, instead of some fixed point - but isn’t how much currency they’ve arbitrarily decided to create often used as one of the measures of health? If so, that’s somewhat circular logic: if floating currencies let us control how much inflation we’ve got, and inflation is one of the health metrics, then what prevents nations from trying to hide economic problems, just the way that nearly every EU member has done over the last 15 years?

Isn’t it exactly the fact that their currencies were floating relative to one another beforehand that allowed them to hide so much of their debt before entering the euro zone? Or have I misunderstood that?

Prior to the “euro zone” cluster****, country finances were a LOT less open/transparent, too … kinda like a private vs public company (not that public companies are as transparent as would be helpful, but the comparison stands).

I feel like if they’d been on the gold standard, the euro zone negotiations would have been more along the lines of:

“ok, so how much gold have you got?”
“oh, quite a bit!”
“ok, well, we’re going to need to count all of it, so that we can figure out how many euros to give you.”
“oh, well, we have quite a bit of gold!”
“yeah, still need to count it.”
“oh. um. crap. …fine”

It’s a lot harder to hide how much actual wealth you have, when your wealth can be reduced to a physical object, instead of just assertions on paper.

However, if you’re going to count the gold nuggets (or whatever), it’s also trivially-simple to elect to *NOT* show your whole hand … which would seem to me to be the same economic sleight-of-hand as can be done when asked “how many barrels of oil do you have?” and you reply “we’re recovering 5million bbl/day”

That’s not an answer – it’s interesting information … but not an answer.

In this particular case, there’s no benefit I can immediately think of to hiding your wealth – the whole point was to get your finances into a good enough shape to be able to qualify for the euro zone. The shenanigans were more about hiding debt than hiding wealth.

I think that hiding wealth *could* be beneficial to make yourself *appear* weaker than you are, so that when you “need|want” to be “strong”, you can be. It could also be from a detail-vs-gestalt approach.

So now I ask all of you – is this plausible/reasonable/right? Or am I smoking some serious space crack?


*He gave me permission to quote him as appropriate if desired

storage strategies – part 4

Last time I talked about storage robustifiers.

In the context of a couple applications with which I am familiar, I want to discuss ways to approach balancing storage types and allocations.

Storage Overview

Core requirements of any modern server, from a storage standpoint, are the following:

  • RAM
  • swap
  • Base OS storage
  • OS/application log storage
  • Application storage

Of course, many more elements could be added – but these are the ones I am going to consider today. From a cost perspective, hardware is almost always the least expensive part of an application deployment – licensing, development, maintenance, and other non-“physical” costs will typically far outweigh the expenses of hardware.

RAM

RAM is cheap, comparatively. Any modern OS will lap-up as much memory as is given to it, so I always try to err on the side of generous.

swap

After having found an instance where Swap Really Mattered™, I always follow the Red Hat guidelines which state that swap space should be equal to installed RAM plus 2 gigabytes. For example, if 16GB of RAM is installed, swap should at least equal 18GB. Whether swap should be on a physical disk or in a logical volume is up for debate, but do not chintz on swap! It is vital to the healthy operation of almost all modern systems!
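The guideline reduces to simple arithmetic; here’s a minimal shell sketch (reading installed RAM from /proc/meminfo is one common approach on Linux):

```shell
#!/bin/sh
# Recommended swap per the "RAM + 2GB" guideline.
# MemTotal in /proc/meminfo is reported in kB; round up to whole GB.
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
ram_gb=$(( (ram_kb + 1048575) / 1048576 ))
swap_gb=$(( ram_gb + 2 ))
echo "RAM: ${ram_gb}GB -> recommended swap: ${swap_gb}GB"
```

So a 16GB box comes out to 18GB of swap, as above.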

Base OS

This will vary on a per-platform basis, but a common rule-of-thumb is that Linux needs about 10GB for itself. Windows 2008 R2 requests a minimum of 20GB, but 40GB is substantially better.

OS/application logs

Here is another wild variable - though most applications have pretty predictable log storage requirements. For example, HP’s Server Automation (HPSA) tool will rarely exceed 10GB in total log file usage. Some things, like Apache, may have varying log files depending on how busy a website is.

Application

Lastly, and definitely most importantly, is the discussion surrounding the actual space needed for an application to be installed and run. On two ends of the spectrum, I will use Apache and HPSA for my examples.

The Apache application only requires a few dozen megabytes to install and run. The content served-up by Apache can, of course, be a big variable – a simple, static website might only use a few dozen megabytes. Whereas a big, complex website (like StackOverflow) might be using a few hundred gigabytes (in total with any dynamically-generated content which might be in a database, or similar)*.

The best way to address varying storage needs, in my opinion, is to use a robustifying tool like LVM – take storage presented to the server, and conglomerate it into a single mount point (perhaps /var/www/html) so that as content needs change, it can be grown transparently to the website.
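A sketch of that approach with LVM2 (device names like /dev/sdb and /dev/sdc are placeholders for whatever storage has actually been presented to the server):

```shell
# Pool two presented disks into one volume group, then carve a single
# logical volume out of it for web content.
pvcreate /dev/sdb /dev/sdc
vgcreate webvg /dev/sdb /dev/sdc
lvcreate -n content -l 100%FREE webvg
mkfs -t ext4 /dev/webvg/content
mount /dev/webvg/content /var/www/html
```

When the content outgrows the space, another LUN can be pvcreate’d and vgextend’ed into webvg, and the volume grown without the website ever noticing.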

Likewise, with HPSA, there are several base storage needs – for Oracle, initial software content, the application itself, etc. Picking up on a previous post on bind mounts, I think it a Very Good Thing™ to present a mass of storage to a single initial mount point, like /apps, and then put several subdirectories in place to hold the “actual” application. Storage usage for the “variables” of HPSA – Software Repository, OS Media, Model Repository (database) – is very hard to predict, but a base guideline is that you need 100GB in total to start.
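A hedged sketch of that layout (the subdirectory names and HPSA target paths here are illustrative, not gospel – check the actual paths your installer expects):

```shell
# One large LVM-backed filesystem mounted at /apps, with subdirectories
# bind-mounted to wherever the application expects its pieces to live.
mkdir -p /apps/oracle /apps/swrepo /apps/osmedia
mount --bind /apps/oracle  /u01
mount --bind /apps/swrepo  /var/opt/opsware/word
mount --bind /apps/osmedia /var/opt/opsware/osmedia
```

Because everything draws from one pool, a chunk that grows faster than expected simply eats free space the others aren’t using.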

Choosing Storage Types

My recommendations

This is how I like to approach storage allocation on a physical server (virtual machines are a whole other beast, which I’ll address in a future post) for HP Server Automation:

Base OS

I firmly believe this should be put on local storage – ideally a pair of mirror-RAIDed 73GB drives (these could be SSDs, to accelerate boot time, but otherwise the OS, per se, should not be being “used” much). You could easily get away with 36GB drives, but since drives are cheap, using slightly more than you “need” is fine.

swap

Again, following the Red Hat guidelines (plus my patented fudge growth factor), ideally I want swap to be on either a pair of 36GB or a pair of 73GB drives – not RAIDed (neither striping nor mirroring swap makes a great deal of sense). Yes, this means you should create a pair of swap partitions and present the whole shebang to the OS.
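In fstab terms, that just means activating both partitions as independent swap devices (device names are placeholders):

```shell
# /etc/fstab – two standalone swap partitions, no RAID involved
/dev/sdb1   none   swap   sw   0 0
/dev/sdc1   none   swap   sw   0 0
```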

OS/application logs

Maybe this is a little paranoid, but I like to have at least 30GB for log space (/var/log). I view logs to be absolutely vital in the monitoring and troubleshooting arenas, so don’t chintz here!

Application

HPSA has four main space hogs, so I’ll talk about them as subheadings.

Oracle

It is important that the database has plenty of space – start at 200GB (if possible), and present it as a logically-managed volume group, preferably made up of one or more [growable] LUNs from a SAN.

Note: Thin-provisioning is a perfectly-acceptable approach to use, by the way (thin provisioning presents space as “available” but not yet “allocated” from the storage device to the server).

Core application

The application really doesn’t grow that much over time (patches and upgrades do cause growth, but they are pretty well-defined).

Since this is the case, carve 50-60GB and present it as a [growable] LUN via LVM to the OS.

OS Media

Depending on data retention policies, number of distinct OS flavors you need to deploy, and a few other factors, it is a Good Idea™ to allocate a minimum of 40GB for holding OS media (raw-copied content from vendor-supplied installation ISOs). RHEL takes about 3.5GB per copy, and Windows 2008 R2 takes about the same. Whether this space is presented as an NFS share from a NAS, or as a [growable] LUN under an LVM group from a SAN isn’t vitally-important, but having enough space most certainly is.

Software Library

This is truly the biggest wildcard of them all – how many distinct packages do you plan to deploy? How big are they? How many versions need to be kept? How many target OSes will you be managing?

I prefer to start with 50GB available to the Library. But I also expect that usage to grow rapidly once the system is in live use – Software Libraries exceeding 300GB are not uncommon in my field. As with the OS Media discussion, it isn’t vitally-important whether this space is allocated from a NAS or a SAN, but it definitely needs to be growable!

Closing comments (on HPSA and storage)

If separate storage options are not available for the big hogs of SA, allocating one big LVM volume (made up of LUNs and/or DAS volumes) and then relying on bind mounts is a great solution (and avoids the issue of needing to worry about any given chunk of the tool exceeding its bounds too badly – especially if other parts aren’t being as heavily-used as might have been anticipated).


*Yes, yes, I know – once you hit a certain size, the presentation and content layers should be split into separate systems. For purposes of this example, I’m leaving it all together.

storage strategies – part 3

In part 2, I introduced SAN/NAS devices, and in part 1, I looked at the more basic storage type, DAS.

Today we’ll look at redundancy and bundling/clustering of storage as a start of a robust storage approach. Before I go any further, please note I am not a “storage admin” – I have a pretty broad exposure to the basic technologies and techniques, but the specifics of individual appliances, vendors, etc are beyond my purview 🙂

One of the oldest robustifiers is RAID – a Redundant Array of Inexpensive Disks. The basic theory is that you have a series of identical disks, and data is mirrored and/or striped across them both to accelerate reads and writes, and to provide a small level of data safety: if one drive in a mirrored set dies, a replacement can be swapped-in, and no data will be lost (though performance will be degraded as the array is rebuilt).
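As a concrete sketch using Linux software RAID (mdadm; device names are placeholders):

```shell
# Create a two-disk mirrored (RAID1) array
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# If /dev/sdb later dies: fail it out, pull it, swap in a replacement
mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb
mdadm /dev/md0 --add /dev/sdd
```

Once the replacement is added, the array rebuilds in the background while /dev/md0 stays usable.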

Another robustifier is logical volume management. Under Linux, the tool is called LVM2. A logical volume manager collects varied partitions (whether from LUNs or DAS, standalone or RAID), and presents them as a unified partition (or “volume”) to the OS. One of the benefits of LVM is that as new storage is added, the volumes can be expanded transparently to the system – that also means that any applications that are running on the system can be “fixed” (if, for example, they are running low on space) without the need to stop the application, shut off the server, add storage, start the server, figure out where to mount the new storage, and then start the application again.
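The grow-it-live workflow looks roughly like this (assuming an ext4 filesystem on a volume group named appvg – all names are placeholders):

```shell
# A new LUN shows up as /dev/sdd – fold it into the existing volume group
pvcreate /dev/sdd
vgextend appvg /dev/sdd
# Grow the logical volume and the filesystem on it, while still mounted
lvextend -l +100%FREE /dev/appvg/data
resize2fs /dev/appvg/data
```

The application on top never stops; it just sees more free space appear.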

Robustifiers have as their main goal making a system more reliable, reducing downtime, and improving overall performance. As such, they are one part of an overall strategy to improve application/system responsiveness and reliability.

Next time we’ll look at how to figure out how to balance types of storage, specifically in the context of a couple applications with which I am familiar.