
automating mysql backups

I want to back up all of the MySQL databases on my server on a routine basis.

As I started to ask on Stack Overflow how to get a list of all the databases in MySQL, I came across this previous SO question, entitled “Drop All Databases in MySQL” (the best answer for which, in turn, republished the kernel of this blog post). Thinking that sounded promising, I opened it and found this little gem:

mysql -uroot -ppassword -e "show databases" | grep -v Database | grep -v mysql | grep -v information_schema | gawk '{print "drop database " $1 ";select sleep(0.1);"}' | mysql -uroot -ppassword

That will drop all databases. No doubt about it. But that’s not what I want to do, so I edited the leading command down to this:

mysql -uroot -e "show databases" | grep -v Database | grep -v mysql | grep -v information_schema | grep -v test | grep -v OLD | grep -v performance_schema

That gives back a list of all the databases created by a user.

Now I need a place to keep the dumps … /tmp sounded good.

And each database should be in its own file, so each mysqldump needs to go to something like $db.identifier.extension.

Made the ‘identifier’ the output of date +%s to get seconds since the Unix epoch (which is plenty unique for my purposes).
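So a dump of a database named mydb (a made-up name for illustration), taken when date +%s returned, say, 1358860800, would land at:

/tmp/mydb.dump.1358860800.sql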

All of which adds up to this one-liner:

for db in `mysql -uroot -e "show databases" | grep -v Database | grep -v mysql | grep -v information_schema | grep -v test | grep -v OLD | grep -v performance_schema`; do mysqldump $db > /tmp/$db.dump.`date +%s`.sql; done

Plop that puppy in root’s crontab on a schedule that works for you, and you have a hands-free method to back up databases.
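For example – assuming the one-liner is saved as an executable script at /root/bin/mysql-backup.sh (a path I’m making up for illustration) – a crontab entry for a nightly 2am run would look like:

# minute hour day-of-month month day-of-week command
0 2 * * * /root/bin/mysql-backup.sh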

Thought about using xargs, but I couldn’t come up with a quick/easy way to uniquely identify each file in the corresponding output.

Might consider adding some compression and/or a better place for dumps to live and/or cleaning up ‘old’ ones (however you want to determine that), but it’s a healthy start.
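If you go that route, a minimal sketch – assuming the dumps live in /tmp as above, and that ‘old’ means more than a week – might be:

# compress the fresh dumps
gzip /tmp/*.dump.*.sql
# remove compressed dumps older than 7 days
find /tmp -name '*.dump.*.sql.gz' -mtime +7 -delete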


You can also do mysqldump --all-databases if you think you want to restore all of them simultaneously … I like the idea of individually dumping them for individual restoration / migration / etc.
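If you do want everything in one file, that looks something like this (same /tmp-and-timestamp convention as above):

mysqldump -uroot --all-databases > /tmp/all-databases.`date +%s`.sql

And the whole thing comes back with mysql -uroot < /tmp/all-databases.<timestamp>.sql.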

The full script I am using (which also archives the previous round of backups, etc):

############################
#!/bin/bash

# make sure the backup directory exists on first run
mkdir -p ~/sqlbackups

date

echo 'Archiving old database backups'

# roll the previous run of dumps into a timestamped tarball, then clear the directory
tar zcf mysql-dbs.`date +%s`.tar.gz ~/sqlbackups
rm -f ~/sqlbackups/*

date

echo 'Backing up MySQL / MariaDB databases'

# dump each user-created database to its own timestamped file
for db in `mysql -uroot -e "show databases" | grep -v Database | grep -v mysql | grep -v information_schema | grep -v test | grep -v OLD | grep -v performance_schema`; do mysqldump $db > ~/sqlbackups/$db.dump.`date +%s`.sql; done

echo 'Done with backups. Files can be found in ~/sqlbackups'

date
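And since the whole point of per-database dumps is individual restoration, bringing a single database back is along these lines (database name and timestamp are, again, made up):

mysql -uroot -e "create database mydb"
mysql -uroot mydb < ~/sqlbackups/mydb.dump.1358860800.sql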

establishing a data haven cloud

In his seminal novel Cryptonomicon, Neal Stephenson describes the creation of a “data haven” in the fictional Sultanate of Kinakuta.

Why has no one started building such a service (or at least not in a public way) on existing cloud providers (e.g. AWS or Rackspace), or created their own global network?

Data backup and replication are not “difficult” – and neither is the concept of distributed (and replicated) storage (LeftHand Networks was doing RAID-over-LAN a while before HP bought them).

So – why is this not available as a service you can subscribe to (or use anonymously)? Incorporate in a ‘friendly’ country, offer anonymized (fully encrypted, etc) connections, and provide a client that works a la Dropbox or Box.com.

There should be lots of companies who would love to offer a service like this – it should be fairly lucrative, and pretty easy to set up.