fighting the lack of good ideas

remembering to forget

As a society, we have forgotten how to forget. We are addicted to storing everything forever. Why?

New Atlas recently had an article on the demolition of skyscrapers in favor of new ones, which starts off:

The Great Pyramid of Giza has stood at a height of around 460 feet for 4,500 years, but these days we are ripping down tall structures without even batting an eyelid. A new study looking at the average lifespan of demolished skyscrapers illustrates just how quick we are to pull the trigger, raising the question of how we could reimagine tower design so that they last centuries rather than decades.

I ask, first: why should we design things to “last centuries rather than decades”?

Yes, the future impact of decisions made today must be carefully evaluated (“concrete cannot be recycled, and most of the tallest buildings in the world use concrete for their main structural system”).

But designing for “centuries” is not the answer.

Or, at least, it’s not the only answer.

It’s not a panacea – though there may be some occasional use cases for expecting a structure to last generationally.

But since time immemorial, buildings have mostly been built with at least an unconscious knowledge that they would not exist “forever”.

Sure, there are interesting historical sites (such as these now-destroyed Mayan ruins) that we might have liked to keep. But reuse of old materials is part and parcel of civilizational progress.

fallocate vs dd for swap file creation

I recently ran across this helpful Digital Ocean community answer about creating a swap file at droplet creation time.

So I decided to test how long my old method (using dd) takes to run versus using fallocate.

Here’s how long it takes to run fallocate on a fresh 40GB droplet:

root@ubuntu:/# rm swapfile && time fallocate -l 1G /swapfile
real	0m0.003s
user	0m0.000s
sys	0m0.000s

root@ubuntu:/# rm swapfile && time fallocate -l 2G /swapfile
real	0m0.004s
user	0m0.000s
sys	0m0.000s

root@ubuntu:/# rm swapfile && time fallocate -l 4G /swapfile
real	0m0.006s
user	0m0.000s
sys	0m0.004s

root@ubuntu:/# rm swapfile && time fallocate -l 8G /swapfile
real	0m0.007s
user	0m0.000s
sys	0m0.004s

root@ubuntu:/# rm swapfile && time fallocate -l 16G /swapfile
real	0m0.012s
user	0m0.000s
sys	0m0.008s

root@ubuntu:/# rm swapfile && time fallocate -l 32G /swapfile
real	0m0.029s
user	0m0.000s
sys	0m0.020s

Interestingly, the relationship of size to time is non-linear when running fallocate: a 32x increase in file size (1G to 32G) costs only about a 10x increase in runtime.

Compare that to building a 4GB swap file with dd on the same server (it turned out that either a 16KB or 4KB bs gives the fastest run time):

time dd if=/dev/zero of=/swapfile bs=16384 count=262144 

262144+0 records in
262144+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 4.52602 s, 949 MB/s

real	0m4.528s
user	0m0.048s
sys	0m4.072s

Yes, you read that correctly – using dd with an “optimum” bs of 16KB (after testing many different bs settings) takes roughly 750x as long as fallocate to create a file of the same size!
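For reference, the count argument in that dd invocation is just the target size divided by the block size – a quick sanity check in shell:

```shell
# dd writes bs-sized blocks, count times; for a 4 GiB file with bs=16384:
BS=16384
TARGET=$((4 * 1024 * 1024 * 1024))   # 4 GiB in bytes
COUNT=$((TARGET / BS))
echo "bs=$BS count=$COUNT"           # count comes out to 262144, as used above
```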

How is fallocate so much faster? The details are in the man pages for it (emphasis added):

fallocate is used to manipulate the allocated disk space for a file, either to deallocate or preallocate it. For filesystems which support the fallocate system call, preallocation is done quickly by allocating blocks and marking them as uninitialized, requiring no IO to the data blocks. This is much faster than creating a file by filling it with zeroes.

dd will “always” work. fallocate will work almost all of the time … but if you happen to be using a filesystem which doesn’t support it, you need to know how to use dd.

But: if your filesystem supports fallocate (and it probably does), it is orders of magnitude more efficient to use it for file creation.
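Putting the two together, here is a minimal sketch of swap-file creation that prefers fallocate and falls back to dd when the filesystem doesn’t support it. The path and size below are demo values – on a real server you’d use /swapfile and something like 4096 MB, and finish with mkswap/swapon as root:

```shell
# Demo values; on a real server use SWAPFILE=/swapfile and e.g. SIZE_MB=4096.
SWAPFILE=${SWAPFILE:-/tmp/swapfile-demo}
SIZE_MB=${SIZE_MB:-64}

# Prefer fallocate; fall back to dd on filesystems without fallocate support.
if ! fallocate -l "${SIZE_MB}M" "$SWAPFILE" 2>/dev/null; then
  dd if=/dev/zero of="$SWAPFILE" bs=1M count="$SIZE_MB" status=none
fi
chmod 600 "$SWAPFILE"   # swap files must not be readable by other users

# Then, as root:
#   mkswap "$SWAPFILE" && swapon "$SWAPFILE"
```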

putting owncloud 8 on a subdomain instead of a subdirectory on centos 7

After moving to a new server, I wanted to finally get ownCloud up and running (over SSL, of course) on it.

And I like subdomains for different services, so I wanted to put it at sub.domain.tld. This turns out to be not as straightforward as one might hope, sadly – ownCloud expects to be installed to domain.tld/owncloud, and plops itself into /var/www/owncloud (or sometimes /var/www/html/owncloud) by default.

My server is running CentOS 7, Apache 2.4, and MariaDB (a drop-in replacement for MySQL). This overview is going to presume you’re running the same configuration – feel free to spin one up quickly at Digital Ocean to try this yourself.

Start with the ownCloud installation instructions, which will point you to the openSUSE Build Service page, where you’ll follow the steps to add the ownCloud community repo to your yum repo list and install ownCloud. (In my last how-to, 8.0 was current; 8.2 has rolled out since I installed 8.1 a couple of days ago.)

Here is where you need to go “off the reservation” to get it ready to actually install.

Add a VirtualHost directive to redirect http://sub.domain.tld to https://sub.domain.tld (cipher suite list compiled thusly):

<VirtualHost *:80>
ServerName sub.domain.tld
Redirect permanent / https://sub.domain.tld/
</VirtualHost>

Configure an SSL VirtualHost directive to listen for sub.domain.tld:

<VirtualHost *:443>
ServerName sub.domain.tld
ServerAdmin user@domain.tld
DocumentRoot /var/www/subdomain
ErrorLog logs/subdomain-error_log
CustomLog logs/subdomain-access_log "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
SSLEngine on
SSLProtocol all -SSLv2 -SSLv3
SSLHonorCipherOrder on
SSLCertificateFile /etc/letsencrypt/live/sub.domain.tld/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/sub.domain.tld/privkey.pem
SSLCertificateChainFile /etc/letsencrypt/live/sub.domain.tld/chain.pem
SSLOptions +StdEnvVars
SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown downgrade-1.0 force-response-1.0
<Directory "/var/www/cgi-bin">
SSLOptions +StdEnvVars
</Directory>
# allow .htaccess to change things
<Directory "/var/www/subdomain">
Options +Indexes +FollowSymLinks
AllowOverride All
Require all granted
</Directory>
</VirtualHost>

Comment out every line in (or remove) /etc/httpd/conf.d/owncloud.conf.

Move /var/www/html/owncloud/* to /var/www/subdomain.

Make sure permissions are correct on /var/www/subdomain:

  • chown -R :apache /var/www/subdomain

Run the command-line installer as the web server user: sudo -u apache php /var/www/subdomain/occ maintenance:install

Fix ownership of the config file, /var/www/subdomain/config/config.php, to root:apache.

In config.php,

  • change trusted domains from ‘localhost’ to ‘sub.domain.tld’
  • make sure ‘datadirectory’ is set to /var/www/subdomain/data
  • change ‘overwrite.cli.url’ from ‘localhost’ to ‘https://sub.domain.tld’
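Taken together, the relevant slice of config.php ends up looking something like this (the domain and paths are the example values from this walkthrough; leave the installer-generated settings alone):

```php
<?php
$CONFIG = array (
  'trusted_domains' =>
  array (
    0 => 'sub.domain.tld',
  ),
  'datadirectory' => '/var/www/subdomain/data',
  'overwrite.cli.url' => 'https://sub.domain.tld',
  // ...the rest of the settings stay as the installer wrote them
);
```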

Navigate to https://sub.domain.tld and follow the prompts – you should be a happy camper.

above the cloud storage

Who wants to go into business with me?

I’ve got a super-cool storage company idea.

Load up a metric buttload of cubesats with radiation-hardened SSD storage, solar power, and [relatively] simple communication stacks (secured by SSH or SSL, of course), and launch them into orbit.

You think cloud storage is cool? What about above-the-cloud storage?

Pros:
  • avoid national jurisdictional rules, since the data will never be housed “in” a specific country
  • very hard to attack physically
  • great reason to use IPv6 addressing

Cons:
  • expensive to get the initial devices into orbit
  • software maintenance on the system could be annoying
  • need to continually plop more cubesats into orbit to handle both expanded data needs and loss of existing devices due to orbital degradation

Who’s with me?

dell buys emc

So I missed predicting anything like this one.

If you’ve been under a rock, like apparently I was last week, you’ve missed hearing that Dell is purchasing EMC. For $67 billion. With a “B”.

This seems to be taking lots of people by surprise, but it makes perfect sense: Dell is already a huge supplier of servers into not only the SMB market, but also enterprise and cloud providers. EMC needs to find ways to keep their expensive storage relevant, especially in an era of storage proliferation, do-it-yourself options that are more than merely good enough, and less and less need for “dedicated” storage (though you still need flash in the underlying arrays, contrary to what Todd Mace thinks).

Thin provisioning, on-demand storage expansion and contraction (ok, ok – the “contraction” part is not common), separation of duties via *aaS architectures, and more have been pushing EMC not so much into being a bit player, but into a corner where it is harder and harder to justify their pricing.

Silver Lake & Michael Dell obviously see the benefit of doing what some have called the biggest merger in tech history (the Compaq-HP debacle was ~$25 billion back in 2001; AOL-TimeWarner was ~$106 billion, but not a pure tech merger). But the benefit is not the synergy of storage and servers.

Nor is it the management software, services groups, great corporate management, or anything of the kind.

The benefit will be in having a completely vertically-integrated and holistic offering because EMC is the majority owner of VMware.

That is why Dell et al wanted EMC. And why they’re willing to pay $67 billion in cash, stock, debt, etc to get it.

This move perfectly pivots Dell – already maneuvering away from “just” servers – into a major competitor in the cloud space, especially the enterprise cloud space.

HP and IBM have their own storage and server offerings (IBM’s x86 offerings are all Lenovo now since they sold them off, but whatever) – but they don’t have the virtualization platform to bring it about in a soup-to-nuts way. Of course, HP and IBM will happily put VMware onto servers they sell you (IBM will also happily sell you non-x86 gear with their pSeries and zSeries stuff, but those are discussions for another day).

HP Helion and IBM Bluemix are interesting. But not as interesting, in my opinion, as Amazon’s AWS, OpenStack, and other offerings from !HP and !IBM.

Oracle, via its acquisition of Sun a few years ago (a whole other conversation), is really the only major competition to the hybrid Dell-EMC company which will emerge.

It’ll be interesting to see how the future HPE will try to compete against future Dell.

on-demand, secure, distributed storage – one step closer

In follow-up to a post from 2013, and another from earlier this year, I’ve been working on a point-and-click deployable MooseFS + ownCloud environment atop encrypted file systems, which you can rent/buy as a service from my company.

I’ve also – potentially – kicked off a new project with Bitnami to add MooseFS to their apps list.

owncloud vs pydio – more diy cloud storage

Last week I wrote a how-to on using Pydio as a front-end to a MooseFS distributed data storage cluster.

The big complaint I had while writing that was that I wanted to use ownCloud, but it doesn’t Just Work™ on CentOS 6*.

After finishing the tutorial, I decided to do some more digging – because ownCloud looks cool. And because it bugged me that it didn’t work on CentOS 6.

What I found is that ownCloud 8 doesn’t work on CentOS 6 (at least not easily).

The simple install guide and process really applies to version 8, and the last version that can be speedily installed on CentOS 6 is 7. And as everyone knows, major version releases often make major changes in how they work. This appears to be very much the case with ownCloud going from 7 to 8.

In fact, the two pages needed for installing ownCloud are so easy to follow, I see no reason to copy them here. It’s literally three shell commands followed by a web wizard. It’s almost too easy.

You need to have MySQL/MariaDB installed and ready to accept connections (or use SQLite) – make a database and a user, and give the user permissions on the database. And you need Apache installed and running (along with PHP – but yum will manage that for you).
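The database step can look something like this (database name, user, and password here are placeholders – pick your own):

```sql
CREATE DATABASE owncloud;
CREATE USER 'oc_user'@'localhost' IDENTIFIED BY 'choose-a-strong-password';
GRANT ALL PRIVILEGES ON owncloud.* TO 'oc_user'@'localhost';
FLUSH PRIVILEGES;
```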

If you’re going to use MooseFS (or any other similar tool) for your storage backend to ownCloud, be sure, too, to bind mount your MFS mount point back to the ownCloud data directory (by default it’s /var/www/html/owncloud/data). Note: you could start by using local storage for ownCloud, and only migrate to a distributed setup later.
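The bind mount itself is one command, plus an fstab line to survive reboots. The MooseFS mount point below (/mnt/mfs) is an assumed example, and the data directory is ownCloud’s default mentioned above:

```shell
# One-off (as root), assuming MooseFS is mounted at /mnt/mfs:
mount --bind /mnt/mfs /var/www/html/owncloud/data

# /etc/fstab entry to make it persistent:
# /mnt/mfs  /var/www/html/owncloud/data  none  bind  0 0
```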

Pros of Pydio

  • very little futzing needed to make it work with CentOS 6
  • very clean user management
  • very clean webui
  • light system requirements (doesn’t even require a database)

Pros of ownCloud

  • apps available for major mobile platforms (iOS, Android) and desktop
  • no futzing needed to work with CentOS 7
  • very clean user management
  • clean webui

Cons of Pydio

  • no interface except the webui

Cons of ownCloud

  • needs a database
  • heavier system requirements
  • doesn’t like CentOS 6

What about other cloud environments like Seafile? I like Seafile, too. Have it running, in fact. Would recommend it – though I think there are now better options (including ownCloud & Pydio).

*Why do I keep harping on the CentOS 6 vs 7 support / ease-of-use? Because CentOS / RHEL 7 is different from previous releases. I covered how it’s different in a talk for the Blue Grass Linux User Group a few months ago. Yeah, I know I should be embracing the New Way™ of doing things – but like most people, I can be a technical curmudgeon (especially humorous when you consider I work in a field that is about not being curmudgeonly).

Guess this means I really need to dive into the new means of doing things (mostly the differences in how services are managed) – fortunately, the Fedora Project put together this handy cheatsheet. And Digital Ocean has a slew of tutorials on basic sysadmin things – one I used for this comparison was here.