Tag Archives: automating

automating mysql backups

I want to backup all of the MySQL databases on my server on a routine basis.

As I started asking how to get a list of all databases in MySQL on Stack Overflow, I came across this previous SO question, entitled, “Drop All Databases in MySQL” (the best answer for which, in turn, republished the kernel from this blog post). Thinking that sounded promising, I opened it and found this little gem:

mysql -uroot -ppassword -e "show databases" | grep -v Database | grep -v mysql | grep -v information_schema | gawk '{print "drop database " $1 ";select sleep(0.1);"}' | mysql -uroot -ppassword

That will drop all databases. No doubt about it. But that’s not what I want to do, so I edited the leading command down to this:

mysql -uroot -e "show databases" | grep -v Database | grep -v mysql | grep -v information_schema | grep -v test | grep -v OLD | grep -v performance_schema

Which gives back a list of all the databases created by a user.

Now I need a place to keep the dumps .. /tmp sounded good.

And each database should be in its own file, so I need to mysqldump each one to a file named $db.identifier.extension

Made the ‘identifier’ the output of date +%s to get seconds since the Unix epoch (which is plenty unique enough for me).

All of which adds up to this one-liner:

for db in `mysql -uroot -e "show databases" | grep -v Database | grep -v mysql| grep -v information_schema| grep -v test | grep -v OLD | grep -v performance_schema`; do mysqldump $db > /tmp/$db.dump.`date +%s`.sql; done

Plop that puppy in root’s crontab on a schedule that’s good for you, and you have a hands-free method to back up databases.
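For reference, the crontab entry might look something like this (the path is a hypothetical spot where you saved the one-liner as a script; here it runs nightly at 2:00 AM):

```
# m h  dom mon dow   command
0 2 * * *   /root/bin/mysql-backup.sh
```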

Thought about using xargs, but I couldn’t come up with a quick/easy way to uniquely identify each file in the corresponding output.
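One possible way around that (untested sketch, so treat it as an assumption): let xargs hand each name to an inner sh, which expands $(date +%s) per invocation so each file still gets its own timestamp. The single grep -vE here is my shorthand for the chain of grep -v filters above (note it anchors on whole names, which is slightly stricter):

```shell
# -I{} substitutes each database name; the single-quoted $(date +%s)
# is expanded by the inner sh for every invocation, not just once.
mysql -uroot -e "show databases" \
  | grep -vE '^(Database|mysql|information_schema|performance_schema|test)$|OLD' \
  | xargs -I{} sh -c 'mysqldump {} > /tmp/{}.dump.$(date +%s).sql'
```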

Might consider adding some compression and/or a better place for dumps to live and/or cleaning-up ‘old’ ones (however you want to determine that), but it’s a healthy start.
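A hedged sketch of what that might look like — the directory, the seven-day retention window, and the collapsed grep -vE filter are all my assumptions, so adjust to taste:

```shell
# Dump each database gzipped into a dedicated directory, then prune
# anything older than seven days.
BACKUP_DIR=/tmp/sqlbackups
mkdir -p "$BACKUP_DIR"
for db in $(mysql -uroot -e "show databases" | grep -vE '^(Database|mysql|information_schema|performance_schema|test)$|OLD'); do
  mysqldump "$db" | gzip > "$BACKUP_DIR/$db.dump.$(date +%s).sql.gz"
done
# Retention: delete compressed dumps older than 7 days.
find "$BACKUP_DIR" -name '*.sql.gz' -mtime +7 -delete
```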

You can also do mysqldump --all-databases if you think you want to restore all of them simultaneously … I like the idea of individually dumping them for individual restoration / migration / etc.
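Restoring one of those individual dumps is then just feeding it back to mysql; the database name and timestamp here are hypothetical stand-ins for a file the loop above produced:

```shell
# Hypothetical example: restore a single dump into a (possibly new) database.
dump=/tmp/wp_blog.dump.1385661600.sql        # a file the backup loop wrote
db=$(basename "$dump" | cut -d. -f1)         # recovers "wp_blog" from the name
mysql -uroot -e "create database if not exists $db"
mysql -uroot "$db" < "$dump"
```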

The full script I am using (which also archives the previous round of backups):



#!/bin/bash

echo 'Archiving old database backups'

mkdir -p ~/sqlbackups
tar zcf mysql-dbs.`date +%s`.tar.gz ~/sqlbackups
rm -f ~/sqlbackups/*


echo 'Backing up MySQL / MariaDB databases'

for db in `mysql -uroot -e "show databases" | grep -v Database | grep -v mysql | grep -v information_schema | grep -v test | grep -v OLD | grep -v performance_schema`; do mysqldump $db > ~/sqlbackups/$db.dump.`date +%s`.sql; done

echo 'Done with backups. Files can be found in ~/sqlbackups'


automation {gp}

The way people moved up the ladder in IT during my early days (starting in 1975) was to take on new projects that allowed them time to master the new software and become the local expert. As you became the local expert on many new software products, management grew very comfortable giving you more and more of these new software projects. At this critical point, you needed to train your replacement for some of the software for which you had become the local expert, because you could not maintain forward momentum with the tremendous drag caused by being the local expert for so many software offerings. Of course, you would get management to agree to lighten your workload because your current workload didn’t allow you to work on their next “pet” project (after all, you are now their go-to guy). You would also push to give up the software that was either too time-consuming and/or not part of your future automation plans. If you didn’t give up something (holding onto knowledge is a power trip), you would fail to meet management’s expectations, someone else would get the new projects, and your plans would stall (cut off from new software knowledge and isolated).

When you start to automate, you automate your existing manual processes, usually by using a wrapper, because it is the fastest way of showing progress. However, after demonstrating that you can automate longstanding processes, you have the evidence to convince management to allow you to automate your next project (beginning to end), because you will become the local expert not only on the new software but also on the automation software. After successfully completing that beginning-to-end automation, you are in a position to push for a policy stating that all new software/applications will be automated as part of their installation/configuration steps. Then you push for automation-friendly software/applications, because wrappers will no longer be acceptable automation options.

The philosophy of automation document was compiled during IBM user group meetings from around 1987 through 1989 when automation was a very hot topic.

The Philosophy of Automation

Automation is not a technical problem; it is a people problem.

When you initially automate, you convert your process-flow documents into executable code that runs consistently, whether on a prearranged schedule or triggered through a monitor or an error message. These executable processes enforce your process rules every time.

You cannot automate processes that are all over the place.

When you automate your processes they will be transformed.

Because you automate your processes, automation never ends.

Automate as close to the source as possible.

Problems with automated processes occur infrequently, but are more difficult to solve than problems with manual processes.

The combining of problem, change, and asset management with automated process management and root cause analysis, improves quality and allows you to consistently meet your service levels.

All automation code is throwaway code.

Automation is very exciting.

Automation is very rewarding.

Everyone is on the automation team.

Guest Post from one of my colleagues, Dave B

automating or automation?

I have been working in the realm of “automation” – specifically data center automation – for several years.

Merriam-Webster defines “automating” thusly:

  1. to operate by automation
  2. to convert to largely automatic operation <automate a process>

Notice the subtle difference with M-W’s definition of “automation”:

  1. the technique of making an apparatus, a process, or a system operate automatically
  2. the state of being operated automatically
  3. automatically controlled operation of an apparatus, process, or system by mechanical or electronic devices that take the place of human labor

These words tend to be used interchangeably – but they are different. Most of the customers I have worked with think they are “doing automation”, when in actuality they are only barely starting to “automate”.

What do I mean by this?

Most customers that bring automation tools in-house (whether simple cron jobs or complex tools like HPSA) take their current, manual processes and merely write wrappers around them to make them “automatic”. That is the first step of automation – but it’s only the first step.

Too many people try to take new tools and make them fit their current processes, procedures, and policies – rather than seeing what policies, procedures, and processes are either made redundant by the new tools, or can be improved, shortened, or – wait for it – automated!

Think of a physical example – you’re a Shaker Cabinetmaker in the late 1800s. You’re making end tables. Cutting dovetails with a dovetail saw. Sanding with a block and sandpaper. Cutting pieces to size by hand. Drilling mortises and cutting tenons with a manual auger and small saw. Lathing on a treadle-powered lathe.

Jump forward 100 years. You’re WGBH in Boston wanting to come up with a how-to program to air weekly. You find a fellow named Norm Abram, and pay him to do the show. You put New Yankee Workshop on TV for years. Norm visits places like Hancock Shaker Village for inspiration. But Norm doesn’t use loads of hand tools – he uses radial arm saws, drill presses, table saws, routers, joiners, etc. Why? Because he wants to make a prototype or three in a few days. And then he wants to film a show in two days (or occasionally longer for big projects) and be able to give you the confidence, along with a set of measured drawings, that you could, more-or-less, replicate what he did in your own home.

Which approach is better? Neither – they both yield end tables. Which approach is more repeatable – especially by someone with little experience? That’d be Norm’s method: the automated method. But the New Yankee Workshop is only slightly down the road of automating the entire process.

Automation (in the furniture world, at least) is found at Ikea. Thousands of identical Ingos and Karls roll out the door every year. Produced by automated machinery.

The job of the Information Technologist has a great deal of art to it – but it’s also a science: at the core of everything that a computer does is logic (perhaps poor logic, but logic nonetheless). Everyone in information technology especially (though this applies to myriad other industries) should be striving to make themselves replaceable – because no one is irreplaceable. I’ve seen it come true in scores of settings: the person who makes himself “irreplaceable” never gets promoted, and is eventually replaced by someone else: either management removes him or finds a way around him, or he leaves the organization.

Therefore, preemptively make yourself replaceable. This was Ken Moellman’s campaign when he ran for State Treasurer in KY: to eliminate the very job he was running for.

There’s a secret to making yourself replaceable – and that is that when you can show that you’re replaceable, especially in the world I work in, you tend to be promoted. You also tend to grow because you’re learning more. Because you learn more and grow, you become more valuable. Because you become more valuable, you can move up the chain as you like.

Do artisan works still have a market? Of course – go look at any “artsy” type store that showcases “local craftsmen” in almost any part of the country: they’re offering their hard work for your consideration … at a hard price.

Is the artisan chair fundamentally any better than the Ikea chair? Maybe, maybe not. It sure looks better. But it’s not as repeatable.

And repeatable tasks get automated, because if you have to do it twice, you need to write it down. And if you have to do it more than twice, you need a process anyone can follow. But processes are always open for refinement and replacement. The process to get a piece of lumber from a tree is not completely dissimilar today from how we did it before the advent of the sawmill – but with laser-guided blades, the sawmill of 2013 can optimize lumber out of a log in a way very, very few people ever could … and can cut the log into its constituent boards faster than any person.

The process for manually provisioning a RHEL server is pretty simple – but shortly after introducing Anaconda, Red Hat introduced kickstart (modeled after Sun’s JumpStart). Microsoft, likewise, has unattend.txt and unattend.xml (for either the DOS or WinPE methods of installing). And SuSE, HP-UX, and AIX have their own systems (AutoYaST, Ignite, and NIM).

Why do these tools exist? So administrators can rapidly deploy machines without having to do a lot of extra setup work by hand. The same can be asked of chainsaws, table saws, and circular saws: why do they exist? So you can cut wood more rapidly and more repeatably than by hand. You can fell a tree with an axe. You can fell a tree with a saw. But for a single person, felling a tree with a chainsaw is best. Or you can use a Tiger Cat.

The first step of automating needs to be building wrappers to reliably repeat manual processes.
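As a sketch of that first step, here is the shape such a wrapper might take. The step functions are placeholders for whatever your runbook actually says, not a real procedure:

```shell
#!/bin/sh
# First-pass wrapper: each manual runbook step becomes a function, run
# in order, with timestamped logging and a stop on the first failure.
log() { echo "$(date '+%F %T') $*"; }

step_preflight() { df -P / >/dev/null; }   # placeholder: check disk space
step_do_work()   { true; }                 # placeholder: the manual task itself
step_verify()    { true; }                 # placeholder: post-change checks

for step in step_preflight step_do_work step_verify; do
  log "running $step"
  "$step" || { log "$step FAILED"; exit 1; }
done
log "all steps complete"
```

It is nothing more than the manual process with a skin on it, but it is now repeatable, logged, and schedulable, which is exactly the evidence you need for the second step.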

The second, and far more important, step in the paradigm shift from manual methods to automation, is to systematically go through all of your processes, procedures, and policies and see what can be refined, what can be replaced, and, most importantly, what can be removed.

What legacy activities are you doing that should be eliminated, updated, or cleaned-up?