Category Archives: technical


Last week, for the better part of 4.5 days, this site was offline.

Along with, of course, every other domain hosted hereon.

Here’s the timeline of my actions:

  • Tuesday, reboot to update kernel revs
    • system did not come back online
  • over the next several days, tried all kinds of diagnostics, including
    • verified host was pingable, tracerouteable, etc
    • rescue environments to chroot and remove out of date packages, update boot menus, etc
    • remote KVM (which is Java based, and wouldn’t run on my macOS Sierra machine with Java 8 U121)
  • late Friday (or maybe it was Saturday), received a cron-generated email – which meant the server was up
    • had a bolt of inspiration, and thought to check the firewall (but couldn’t for several hours for various reasons)
  • Saturday evening, using a rescue environment from my hosting provider, chroot’ed into my server, and reset firewalld
    • reboot, and bingo bango! server was back

So. What happened? Short version, something enabled firewalld, and set up basic rules to block everything. And I do mean everything – ssh, http, smtp, etc etc.

Not sure exactly how the firewall rules got mucked-up, but that was the fix.
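For the curious, a rough sketch of that rescue-environment dance – /dev/sda1 is a placeholder for wherever your root filesystem actually lives, and disabling firewalld outright is just one way to “reset” it:

# mount the server's root filesystem and chroot into it from the rescue image
mount /dev/sda1 /mnt
mount --bind /dev /mnt/dev
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
chroot /mnt /bin/bash
# keep firewalld from coming back up on the next boot
systemctl disable firewalld
systemctl mask firewalld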


apple tv – how apple can beat amazon and google

In e99 of Exponent, Ben Thompson makes a compelling case for his idea that Amazon Echo (Alexa) is an operating system – and that Amazon has beaten Apple (with Siri) and Google Home (with Assistant) at the very game they both try to play.

And I think he’s onto the start of something (he goes on to elaborate a bit in his note that Apple TV turned 10 this week (along with the little thing most people have never heard of, iPhone)).

But he’s only on the *start* of something. See, Apple TV is cheaper than Amazon Echo – by $30 for the entry model (it’s $20 more for the model with more storage). Echo Dot is cheaper, but also is less interesting (imo). And Alexa doesn’t have any local storage (that I know of).

And neither of them will stream video.

But Apple TV has something going for it – it *already* has Siri enabled. In other words, it has the home assistant features many people want, and does video and audio streaming to boot.

It handles live TV via apps like DIRECTV or Sling. And Netflix and other options for streaming (including, of course, iTunes).

Oh, and it handles AirPlay, so you can plop whatever’s on your iPhone, iMac, etc onto your TV (like a Chromecast).

But Apple doesn’t seem to focus on any of that. They have a device which, by all rights, ought to be at least equal to (and probably superior to) its competition – but they seem to think their competition is Roku or the Fire Stick. From a pricing perspective, those are the wrong folks to consider your competition.

It’s Google and Amazon Apple should have in its sights – because Apple TV *ought* to beat the ever-living pants off both Home and Echo.

If HomeKit exists on Apple TV, and you have Siri on Apple TV, why is it not the center of home automation?

results from running pi-hole for several weeks

I came across pi-hole recently – an ad blocker and DNS service that you can run on a Raspberry Pi under Raspbian (or any Debian or Ubuntu, ie Debian-like, system). Using pi-hole should obviate the need for running ad blockers in your browser (so long as you’re on a network that routes its DNS queries through pi-hole).

I’ve seen some people running it on CentOS – but I’ve had issues with that combination, so am keeping to the .deb-based distros (specifically, I’m running it on the smallest droplet size from Digital Ocean with Ubuntu 16.04).

First, the good – it is truly stupidly simple to get set up and running. A little too simple – not because tools should have to be hard to use, but because there’s not much configuration exposed by the automated setup script. Also, updating the blacklist and whitelist is easy – though they don’t always update via the web portal as you’d hope.
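For reference, the documented install really is a one-liner (yes, it pipes a remote script straight into bash – read it first if that makes you twitchy):

curl -sSL https://install.pi-hole.net | bash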

Second, configuration is almost all manual: if you want to use more than two upstream DNS hosts (I personally want to hit both Google and Freenom upstream), for example, there is manual file editing. Or if you want basic auth enabled for the web portal, you not only need to add it manually, you need to re-add it manually after any update.

Third, the bad. This is not a pi-hole issue, per se, but it is still relevant: most devices that you would configure to use DNS for your home (or maybe even enterprise) network want at least two entries (eg your cable modem or home wifi router). Some devices will let you get away with a single DNS entry, but not all. Which goes towards showing why pi-hole might not be best run outside your network – if you piggy-back both DHCP and DNS off your RPi, and not off the wireless router you’re probably running, then you’re OK. But if your wireless router / cable modem demands multiple DNS entries, you either need to run multiple pi-hole servers somewhere, or you need to accept that not everything will end up going through the hole.

Pi-hole sets up a lighttpd instance (which you don’t have to use) so you can see a pretty admin panel:


I added basic authentication to the admin subdirectory by adding the following lines to /etc/lighttpd/lighttpd.conf after following this tutorial:

# add http basic auth (mod_auth has to be loaded for the auth.* directives to work)
server.modules += ( "mod_auth" )
auth.backend = "htdigest"
auth.backend.htdigest.userfile = "/etc/lighttpd/.htpasswd/lighttpd-htdigest.user"
auth.require = ( "/admin" =>
    (
        "method" => "digest",
        "realm" => "rerss",
        "require" => "valid-user"
    )
)

I also have 4 upstream DNS providers in /etc/dnsmasq.d/01-pihole.conf:
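In dnsmasq terms that is just one server= line per resolver – something like the following (the exact Freenom addresses here are from memory, so treat them as an assumption):

# Google public DNS
server=8.8.8.8
server=8.8.4.4
# Freenom World
server=80.80.80.80
server=80.80.81.81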


I still need to SSLify the page, but that’s coming.

The 8.8.* addresses are Google’s public DNS. The 80.80.* addresses are Freenom’s. There are myriad more free DNS providers out there – these are just the ones I use.

So what’s my tl;dr on pi-hole? It’s pretty good. It needs a little work to get it more stable between updates – but it’s very close. And I bet if I understood a little more of the setup process, I could probably make a fix to the update script that wouldn’t clobber (or would restore) any custom settings I have in place.

watch your mtu size in openstack

For a variety of reasons related to package versions and support contracts, I was unable to use the Red Hat built KVM image of RHEL 7.2 for a recent project. (The saga of that is worthy of its own post – and maybe I’ll write it at some point. But not today.)

First thing I tried was to build an OpenStack instance off of the RHEL 7.2 media ISO directly – but that didn’t work.

So I built a small VM on another KVM host – with virt-viewer, virt-manager, etc – got it set up and ready to go, then went through the process of converting the qcow2 image to raw and plopping it into the OpenStack image inventory.
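The convert-and-upload step boils down to a couple of commands – a sketch, with made-up file and image names (glance image-create works too, depending on your client vintage):

# convert the qcow2 disk to raw, then register it as an OpenStack image
qemu-img convert -f qcow2 -O raw rhel72.qcow2 rhel72.raw
openstack image create --disk-format raw --container-format bare --file rhel72.raw rhel-7.2-base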

Then I deployed the two VMs I need for my project (complete with additional disk space, yada yada yada). So far, so good.

Floating IP assigned to the app server, proper network for both, static configs updated. Life is good.

Except I couldn’t ssh out from the newly-minted servers to anywhere. Or when a connection did go out, it was super laggy.

I could ssh in, but not out. I could scp out (to some places, but not others), but was not getting anywhere near the transfer rates I should have been seeing. Pings worked just fine. So did nslookup.

After a couple hours of fruitless searching, I got hold of one of my coworkers who set up our OpenStack environment: maybe he’d know something.

We spent about another half hour on the phone, when he said, “hey – what’s your MTU set to?” “I dunno – whatever’s default, I guess.” “Try setting it to 1450.”

Why 1450? What’s wrong with the default of 1500? Theoretically, the whole reason defaults are, well, default, is that they should “just work”. In other words, they might not be optimal for your situation, but they should be more-or-less optimalish for most situations.

Unless, that is, you’re in a “layered networking” environment (apologies if “layered networking” is the wrong term; it’s the one my coworker used, and it made sense to me – networking isn’t really my forte). Fortunately, my colleague had seen an almost-identical problem several months earlier playing with Docker containers. The maximum transmission unit is the cap on the network packet size, which is important to set in a TCP/IP environment – otherwise devices on the network won’t know how much data they can send to each other at once.

1500 bytes is the default for most systems, as I mentioned before, but when you have a container / virtual machine / etc hosted on a parent system whose MTU is set to 1500, the guest cannot have as large an MTU, because the host needs room to attach whatever extra routing bits it uses to identify which guest gets what data when it comes back. (In OpenStack’s case those extra bits are typically tunnel encapsulation – a VXLAN or GRE header takes up roughly 50 bytes of every packet, which is where the 1450 figure comes from.) For small network requests, such as ping uses, you’re nowhere near the MTU, so they work without a hitch.

For larger requests, you can (and will) start running into headspace issues – so either the guest MTU needs to shrink, or the host needs to grow.

Growing the host’s MTU isn’t a great option in a live environment – because it could disrupt running guests. So shrinking the guest MTU needs to be done instead.
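On a RHEL 7 guest that’s a quick two-part change – drop it on the live interface, then persist it (eth0 is a placeholder for whatever your interface is actually named):

# check the current value, then lower it on the running interface
ip link show eth0
ip link set dev eth0 mtu 1450
# persist across reboots via the interface config file
echo 'MTU=1450' >> /etc/sysconfig/network-scripts/ifcfg-eth0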

Hopefully this helps somebody else.

Now you know, and knowing is half the battle.