Category Archives: technical

results from running pi-hole for several weeks

I came across pi-hole recently – an ad-blocking DNS service you can run on a Raspberry Pi under Raspbian, or on any Debian-like system (Debian proper, Ubuntu, etc). Using pi-hole should obviate the need for running ad-blockers in your browser (so long as you’re on a network that is running DNS queries through pi-hole).

I’ve seen some people running it on CentOS – but I’ve had issues with that combination, so am keeping to the .deb-based distros (specifically, I’m running it on the smallest droplet size from Digital Ocean with Ubuntu 16.04).

First, the good – it is truly stupidly-simple to get set up and running. A little too simple – not because tools should have to be hard to use, but because the automated script exposes very little configuration. Also, updating the blacklist and whitelist is easy – though they don’t always update via the web portal as you’d hope.

Second, configuration is almost all manual: if you want to use more than 2 upstream DNS hosts (I personally want to hit both Google and Freenom upstream), for example, there is manual file editing. Or if you want basic auth enabled for the web portal, you not only have to add it manually, you have to re-add it manually after every update.

Third, the bad. This is not a pi-hole issue, per se, but it is still relevant: most devices you would configure to hand out DNS for your home (or maybe even enterprise) – your cable modem, or home wifi router – want at least two entries. You can set only one DNS provider with some devices, but not all. Which shows why pi-hole might not be best run outside your network: if you run both DHCP and DNS off your RPi, rather than off the wireless router you’re probably using, you’re OK. But if your wireless router / cable modem demands multiple DNS entries, you either need to run multiple pi-hole servers somewhere, or you need to accept that not everything will end up going through the hole.

Pi-hole sets up a lighttpd instance (which you don’t have to use) so you can see a pretty admin panel:


I added authentication (lighttpd’s HTTP digest method) to the admin subdirectory by adding the following lines to /etc/lighttpd/lighttpd.conf, after following this tutorial:

server.modules += ( "mod_auth" )   # the auth.* directives need mod_auth loaded
# add HTTP digest auth for the admin panel
auth.backend = "htdigest"
auth.backend.htdigest.userfile = "/etc/lighttpd/.htpasswd/lighttpd-htdigest.user"
auth.require = ( "/admin" =>
  (
    "method"  => "digest",
    "realm"   => "rerss",
    "require" => "valid-user"
  )
)
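The userfile that config points at has to exist first. Here’s a sketch of building one – the htdigest format is user:realm:md5(user:realm:password); the user and password below are placeholders, and on a real server you’d write to /etc/lighttpd/.htpasswd as root (I use a local directory here so the sketch is self-contained):

```shell
# build the htdigest userfile lighttpd reads
# (user/password are placeholders; realm must match lighttpd.conf)
user=admin; realm=rerss; pass='changeme'
hash=$(printf '%s:%s:%s' "$user" "$realm" "$pass" | md5sum | cut -d' ' -f1)
mkdir -p .htpasswd   # on the server: /etc/lighttpd/.htpasswd
printf '%s:%s:%s\n' "$user" "$realm" "$hash" > .htpasswd/lighttpd-htdigest.user
cat .htpasswd/lighttpd-htdigest.user
```

Restart lighttpd after the config and userfile are in place.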

I also have 4 upstream DNS providers in /etc/dnsmasq.d/01-pihole.conf:

server=80.80.80.80
server=8.8.8.8
server=8.8.4.4
server=80.80.81.81
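If you script that edit, something like the following works – the path below is a local stand-in; on the pi-hole box itself the file is /etc/dnsmasq.d/01-pihole.conf and needs root:

```shell
# append the four upstream resolvers, skipping any already present
conf=01-pihole.conf   # on a real install: /etc/dnsmasq.d/01-pihole.conf
for s in 80.80.80.80 8.8.8.8 8.8.4.4 80.80.81.81; do
  grep -qx "server=$s" "$conf" 2>/dev/null || echo "server=$s" >> "$conf"
done
grep -c '^server=' "$conf"   # count of configured upstreams
```

Restart dnsmasq afterward (sudo service dnsmasq restart) so the new upstreams take effect.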

I still need to SSLify the page, but that’s coming.

The 8.8.* addresses are Google’s public DNS. The 80.80.* addresses are Freenom’s. There are myriad more free DNS providers out there – these are just the ones I use.

So what’s my tl;dr on pi-hole? It’s pretty good. It needs a little work to get it more stable between updates – but it’s very close. And I bet if I understood a little more of the setup process, I could probably make a fix to the update script that wouldn’t clobber (or would restore) any custom settings I have in place.

watch your mtu size in openstack

For a variety of reasons related to package versions and support contracts, I was unable to use the Red Hat built KVM image of RHEL 7.2 for a recent project. (The saga of that is worthy of its own post – and maybe I’ll write it at some point. But not today.)

First thing I tried was to build an OpenStack instance off of the RHEL 7.2 media ISO directly – but that didn’t work.

So I built a small VM on another KVM host – with virt-viewer, virt-manager, etc – got it set up and ready to go, then went through the process of converting the qcow image to raw, and plopping it into the OpenStack image inventory.

Then I deployed the two VMs I need for my project (complete with additional disk space, yada yada yada). So far, so good.

Floating IP assigned to the app server, proper network for both, static configs updated. Life is good.

Except I cannot ssh out from the newly-minted servers to anywhere. Or when it does ssh out, it’s super laggy.

I could ssh in, but not out. I could scp out (to some locales, but not others), but was not getting nearly the transfer rates I should have been seeing. Pings worked just fine. So did nslookup.

After a couple hours of fruitless searching, got a hold of one of my coworkers who setup our OpenStack environment: maybe he’d know something.

We spent about another half hour on the phone, when he said, “hey – what’s your MTU set to?” “I dunno – whatever’s default, I guess.” “Try setting it to 1450.”

Why 1450? What’s wrong with the default of 1500? Theoretically, the whole reason defaults are, well, default, is that they should “just work”. In other words, they might not be optimal for your situation, but they should be more-or-less optimalish for most situations.

Unless, that is, you’re in a “layered networking” environment rather than a basically-vanilla one (apologies if “layered networking” is the wrong term; it’s the one my coworker used, and it made sense to me – networking isn’t really my forte). Fortunately, my colleague had seen an almost-identical problem several months earlier playing with Docker containers. The maximum transmission unit is the cap on network packet size, which is important to set in a TCP/IP environment – otherwise devices on the network won’t know how much data they can send each other in a single packet.

1500 bytes is the default for most systems, as I mentioned before, but when you have a container / virtual machine / etc hosted on a parent system whose MTU is set to 1500, the guest cannot have as large an MTU, because the host needs room in each packet to attach whatever extra routing bits it uses to identify which guest gets what data when it comes back. For small network requests, such as ping uses, you’re nowhere near the MTU, so they work without a hitch.

For larger requests, you can (and will) start running into headroom issues – so either the guest MTU needs to shrink, or the host’s needs to grow.

Growing the host’s MTU isn’t a great option in a live environment – because it could disrupt running guests. So shrinking the guest MTU needs to be done instead.
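Checking and shrinking the guest MTU is a couple of iproute2 commands – the interface name (eth0) and the RHEL ifcfg path below are what I’d expect on a setup like this, so adjust for yours:

```shell
# list each interface and its current MTU
# ("ip -o link" puts the interface name in field 2 and the mtu value in field 5)
ip -o link | awk '{print $2, $5}'
# shrink a guest's MTU for the current boot (needs root; interface name will vary):
#   ip link set dev eth0 mtu 1450
# persist it on RHEL 7 by adding MTU=1450 to /etc/sysconfig/network-scripts/ifcfg-eth0
# you can also probe what fits: 1472 bytes of payload + 28 bytes of headers = 1500
#   ping -c 3 -M do -s 1472 <some-host>   # "Message too long" means the path MTU is smaller
```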

Hopefully this helps somebody else.

Now you know, and knowing is half the battle.

automating mysql backups

I want to backup all of the MySQL databases on my server on a routine basis.

As I started asking how to get a list of all databases in MySQL on Stack Overflow, I came across this previous SO question, entitled, “Drop All Databases in MySQL” (the best answer for which, in turn, republished the kernel from this blog post). Thinking that sounded promising, I opened it and found this little gem:

mysql -uroot -ppassword -e "show databases" | grep -v Database | grep -v mysql | grep -v information_schema | gawk '{print "drop database " $1 ";select sleep(0.1);"}' | mysql -uroot -ppassword

That will drop all databases. No doubt about it. But that’s not what I want to do, so I edited the leading command down to this:

mysql -uroot -e "show databases" | grep -v Database | grep -v mysql | grep -v information_schema | grep -v test | grep -v OLD | grep -v performance_schema

Which gives back a list of all the databases created by a user.

Now I need a place to keep the dumps .. /tmp sounded good.

And each database should be in its own file, so I need each mysqldump output named $db.identifier.extension

Made the ‘identifier’ the output of date +%s to get seconds since the Unix epoch (which is plenty unique enough for me).

All of which adds up to this one-liner:

for db in `mysql -uroot -e "show databases" | grep -v Database | grep -v mysql| grep -v information_schema| grep -v test | grep -v OLD | grep -v performance_schema`; do mysqldump $db > /tmp/$db.dump.`date +%s`.sql; done

Plop that puppy in root’s crontab on a schedule that works for you, and you have a hands-free method to backup databases.

Thought about using xargs, but I couldn’t come up with a quick/easy way to uniquely identify each file in the corresponding output.

Might consider adding some compression and/or a better place for dumps to live and/or cleaning-up ‘old’ ones (however you want to determine that), but it’s a healthy start.
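For example, a compress-and-prune pass might look like this – the directory, the 14-day window, and the dummy dump file (a stand-in so the sketch runs without MySQL) are all arbitrary choices, not something from the one-liner above:

```shell
# sketch: compress fresh .sql dumps, then prune archives older than 14 days
backup_dir=sqlbackups            # on the server: /tmp or ~/sqlbackups
mkdir -p "$backup_dir"
touch "$backup_dir/demo.dump.$(date +%s).sql"   # stand-in for a real mysqldump output
gzip -f "$backup_dir"/*.dump.*.sql              # compress each dump in place
find "$backup_dir" -name '*.dump.*.sql.gz' -mtime +14 -delete   # drop old archives
ls "$backup_dir"
```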


You can also do mysqldump --all-databases if you think you want to restore all of them simultaneously … I like the idea of individually dumping them for individual restoration / migration / etc.

The full script I am using (which does include backups, etc):

#!/bin/bash
############################

date

echo 'Archiving old database backups'

tar zcf mysql-dbs.`date +%s`.tar.gz ~/sqlbackups
rm -f ~/sqlbackups/*

date

echo 'Backing up MySQL / MariaDB databases'

for db in `mysql -uroot -e "show databases" | grep -v Database | grep -v mysql| grep -v information_schema| grep -v test | grep -v OLD | grep -v performance_schema`; do mysqldump $db > ~/sqlbackups/$db.dump.`date +%s`.sql; done

echo 'Done with backups. Files can be found in ~/sqlbackups'

date

change your default font in windows 10

Starting from a tutorial I found recently, I want to share how to change your default font in Windows 10 – but in a shorter edition than that long one (and in, I think, a less-confusing way).

Back in the Good Ole Days™, you could easily change system font preferences by right-clicking on your desktop, and going into the themes and personalization tab to set whatever you wanted however you wanted (this is also where you could turn off (or back on) icons on your desktop (like My Documents), set window border widths, colors for everything, etc).

Windows 10 doesn’t let you do that through any form of Control Panel anymore, so you need to break out the Registry Editor*.

0th, Start regedit

WindowsKey-R brings up the Run dialog – type regedit to start the Registry Editor


NOTE: you should back-up any keys you plan to edit, just in case you forget what you did, want to revert, or make a mistake.

1st, Navigate to the right keys

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\FontSubstitutes

and

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Fonts

are where you’ll need to be to make these changes.

2nd, Blank entries for Segoe UI

For all of the “Segoe UI” entries in Fonts, change their Data field to blank (“”)

3rd, Add a Segoe UI substitute font

In FontSubstitutes, click Edit → New → String Value. Name it “Segoe UI” (without the quotes). In the “Value data” field, enter your preferred font name. I used Lucida Console.
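For reference, the two edits above boil down to a merge file roughly like this – the exact list of “Segoe UI” value names varies by Windows build, so treat it as a sketch and back up the keys first:

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Fonts]
"Segoe UI (TrueType)"=""
"Segoe UI Bold (TrueType)"=""
"Segoe UI Bold Italic (TrueType)"=""
"Segoe UI Italic (TrueType)"=""
"Segoe UI Light (TrueType)"=""
"Segoe UI Semibold (TrueType)"=""

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\FontSubstitutes]
"Segoe UI"="Lucida Console"
```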


4th, Logout, or reboot, and login again to see your changes take effect.

* You can also download my registry keys here, which have the substitution already done. And you can pick any font you like instead of Lucida Console – just edit the key file in your favorite text editor (I like TextPad) before merging it into your Registry.

there is no such object on the server

Gee. Thanks, Active Directory.

This is one of the more useless error messages you can get when trying to programmatically access AD.

Feel free to Google (or DuckDuckGo, or Bing, or whomever) that error message. Go ahead, I’ll wait.

Your eyes bleeding, and gray matter leaking from your ears yet? No? Then you obviously didn’t do what I just told you to – go search the error message, I’ll be here when you get back.

Background for how I found this particular gem: I have a customer (same one I was working with on SAP a while back where I had BAPI problems) that is trying to automate Active Directory user provisioning with HP Operations Orchestration. As a part of this, of course, I need to verify I can connect to their AD environment, OUs are reachable, etc etc.

In this scenario, I’m provisioning users into a custom OU (ie not merely Users).

Provisioning into Users doesn’t give this error – only in the custom OU. Which is weird. So we tried making sure there was already a user in the OU, in case the error was being kicked-back by an empty OU (if an OU is empty, does it truly exist?).

That didn’t help.

Finally, after several hours of beard-stroking, diving into deep AD docs, MSDN articles, HP forae, and more … customer’s AD admin says, “hey – how long is the password you’re trying to use; and does it meet 3-of-4?” I reply, “it’s ‘Password!’ – 3-of-4, 9 characters long”. “Make it 14 characters long – for kicks.”

Lo and behold! There is a security policy on that OU that mandates a minimum password length as well as complexity – but that’s not even close to what AD was sending back as an error message. “There is no such object on the server”, as the end result of a failed user create, is 100% useless – all it tells you is the user isn’t there. It doesn’t say anything about why it isn’t there.

Sigh.

Yet another example of [nearly] completely ineffective error messages.

AD should give you something that resembles a why for the what – not merely the ‘what’.

Something like, “object could not be created; security policy violation” – while not 100% of the answer – would put you a lot closer to solving an issue than just “there is no such object on the server”.

Get it together, developers! When other people cannot understand your error messages, regardless of how “smart” they are, what field they work in, etc, you are Doing It Wrong™.