antipaucity

fighting the lack of good ideas

update: keeping your let’s encrypt certs up-to-date

Last year I posted a simple script for keeping your Let’s Encrypt SSL certificates current.

In conjunction with my last post sharing the “best” SSL configs you can use with Apache on CentOS, here is the current state of the cron’d renewal script I use.

#!/bin/bash
# free up ports 80/443 so the standalone authenticator can bind to them
systemctl stop httpd.service
systemctl stop postfix
~/letsencrypt/letsencrypt-auto -t -n --agree-tos --keep --expand --standalone certonly --rsa-key-size 4096 -m user@domain.tld -d domain.tld
# you can append more [sub]domains to a single cert with additional `-d` directives ([-d otherdomain.tld [-d sub.domain.tld...]])
#...repeat for every domain / domain group
# bring the services back up on the [possibly renewed] certs
systemctl start httpd.service
systemctl start postfix

I have this script running @weekly in cron. You could probably get away with running it only every month or two, but I like to err on the side of caution.
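For reference, the crontab entry is just the schedule and the script’s path (the path below is hypothetical – point it at wherever you saved yours):

@weekly /root/bin/renew-le-certs.sh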

I’m stopping and starting Postfix in addition to httpd (Apache, on my system) for two reasons: first, I use some of the LE-issued certs with my Postfix install; second, I’m not certain whether Dovecot and my webmail system need Postfix to be restarted when the underlying certs change.

results from running pi-hole for several weeks

I came across pi-hole recently – an ad blocker and DNS service you can run on a Raspberry Pi under Raspbian (or any other Debian-like system, e.g. Debian itself or Ubuntu). Using pi-hole should obviate the need for running ad blockers in your browser (so long as you’re on a network that routes its DNS queries through pi-hole).

I’ve seen some people running it on CentOS – but I’ve had issues with that combination, so am keeping to the .deb-based distros (specifically, I’m running it on the smallest droplet size from Digital Ocean with Ubuntu 16.04).
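For reference, the whole “install” is the project’s documented one-step script (the usual curl-pipe-to-bash – read it first if that makes you twitchy):

curl -sSL https://install.pi-hole.net | bash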

First, the good: it is truly stupidly simple to get set up and running. Maybe a little too simple – not because tools should have to be hard to use, but because not much configuration goes into the automated setup script. Updating the blacklist and whitelist is easy, too – though they don’t always update via the web portal as you’d hope.

Second, configuration is almost all manual: if you want to use more than two upstream DNS hosts (I personally want to hit both Google and Freenom upstream), for example, there is manual file editing. Or if you want basic auth enabled for the web portal, you not only need to add it manually, you need to re-add it manually after any update.

Third, the bad. This is not a pi-hole issue, per se, but it is still relevant: most devices you would configure to hand out DNS for your home (or maybe even enterprise) network – your cable modem or home wifi router, for example – want at least two entries. Some devices will let you set only one DNS provider, but not all. Which goes toward showing how pi-hole might not be best run outside your network: if you piggy-back both DHCP and DNS off your RPi, rather than off the wireless router you’re probably running, then you’re OK. But if your wireless router / cable modem demands multiple DNS entries, you either need to run multiple pi-hole servers somewhere, or realize that not everything will end up going through the hole.

Pi-hole sets up a lighttpd instance (which you don’t have to use) so you can see a pretty admin panel:

[screenshot: the pi-hole admin panel]

I added basic authentication to the admin subdirectory by adding the following lines to /etc/lighttpd/lighttpd.conf after following this tutorial:

#add http basic auth
auth.backend = "htdigest"
auth.backend.htdigest.userfile = "/etc/lighttpd/.htpasswd/lighttpd-htdigest.user"
auth.require = ( "/admin" =>
    (
        "method"  => "digest",
        "realm"   => "rerss",
        "require" => "valid-user"
    )
)
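To create the user file referenced above, note that lighttpd’s htdigest format is user:realm:MD5(user:realm:password). A sketch (the username is up to you, the realm must match “rerss” from the config, and mod_auth must be enabled in server.modules):

# run as root
mkdir -p /etc/lighttpd/.htpasswd
user=admin; realm=rerss
read -s -p "password: " pass; echo
# lighttpd htdigest entries are user:realm:md5(user:realm:password)
hash=$(printf '%s:%s:%s' "$user" "$realm" "$pass" | md5sum | cut -d' ' -f1)
echo "$user:$realm:$hash" >> /etc/lighttpd/.htpasswd/lighttpd-htdigest.user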

I also have 4 upstream DNS providers in /etc/dnsmasq.d/01-pihole.conf:

server=80.80.80.80
server=8.8.8.8
server=8.8.4.4
server=80.80.81.81

I still need to SSLify the page, but that’s coming.

The 8.8.* addresses are Google’s public DNS. The 80.80.* addresses are Freenom’s. There are myriad more free DNS providers out there – these are just the ones I use.

So what’s my tl;dr on pi-hole? It’s pretty good. It needs a little work to get it more stable between updates – but it’s very close. And I bet if I understood a little more of the setup process, I could probably make a fix to the update script that wouldn’t clobber (or would restore) any custom settings I have in place.

improve your entropy pool in linux

A few years ago, I ran into a known issue with one of the products I use that manifests when the Red Hat Linux server it’s running on has a low entropy pool. And, as highlighted in that question, the steps I found five years ago didn’t work for me. (It turned out changing the -t parameter from ‘1’ to ‘.1’ did work – rngd -r /dev/urandom -o /dev/random -f -t .1 – but I digress; the -t option is no longer valid in CentOS 7 anyway.)

In playing around with the Mozilla-provided SSL configurator, I noticed a line in the example SSL config that referenced “truerand”. After a little Googling, I found an open-source implementation called “twuewand“.

A little more Googling about adding entropy turned up this interesting tutorial from Digital Ocean for “haveged” (which, interestingly enough, allowed me to answer a 6-month-old question on Server Fault about CloudLinux).

Haveged “is an attempt to provide an easy-to-use, unpredictable random number generator based upon an adaptation of the HAVEGE algorithm. Haveged was created to remedy low-entropy conditions in the Linux random device that can occur under some workloads, especially on headless servers.”

And twuewand “is software that creates hardware-generated random data. It accomplishes this by exploiting the fact that the CPU clock and the RTC (real-time clock) are physically separate, and that time and work are not linked.”

For workloads that require lots of entropy – generating SSL keys, SSH keys, PGP keys, and pretty much anything else that wants lots of random (or strong pseudorandom) seeding – the very real problem of running out of entropy (especially on headless boxes or virtual machines) is something you can face quite easily and quite frequently.

Enter solutions like OneRNG, a hardware entropy generator (that one is a USB dongle – see also this skh-tec post). Those are awesome – unless you’re running in cloud space somewhere, or even just on a “traditional” virtual machine.

One of the funny things about “random” data is that it’s actually very, very hard to get. It’s easy to describe, but generating “truly” random data is incredibly difficult. (If you want to have an aneurysm – or you’re like me and think this stuff is unendingly fascinating – go read the Wikipedia entry on “Cryptographically Secure Pseudo Random Number Generator“.)

If you’re in a situation, though, like I was (and still am), where you need to maintain a relatively high quantity of fairly decent entropy (probably close to CSPRNG level), use haveged. And run twuewand occasionally – at the very least when starting Apache (at least if you’re running HTTPS – which you should be, since it’s so easy now).
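If you want to see where you stand before and after, the kernel exposes the pool size – and on CentOS 7, haveged comes from EPEL. A sketch, following the Digital Ocean tutorial above:

# how much entropy is available right now? (a few hundred bits is thin; ~3000+ is healthy)
cat /proc/sys/kernel/random/entropy_avail
# install, enable, and start haveged (EPEL provides the package on CentOS 7)
yum install -y epel-release
yum install -y haveged
systemctl enable haveged
systemctl start haveged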

on ads

My colleague Sheila wrote a great, short piece on LinkedIn about ads recently.

And this is what I commented:

I held off for years on installing ad blockers/reducers.

But I have finally had to cave – I’ve been running Flash in “ask-only” mode for months now, and just added a couple of blocker/reducer extensions to Chrome (in addition to the ones on my iPhone for Safari).

I like supporting a site as much as the next guy (I even run a few highly unobtrusive ones on my sites) – but I agree: when I can’t tell whether it’s your content or an ad, or even get through all the popovers, splashes, etc, I’m leaving and not coming back.

I hate the idea of ad blockers/reducers. But it has come to the point where you can’t read much of what is on the web because of the inundation of ads.

And mailing list offers. Oh my goodness the mailing list offers. Sadly, the only way to block those seems to be to disable javascript … which then also breaks lots of sites I need it to work on – and whitelisting becomes problematic with something like javascript, since it’s usefully ubiquitous (in addition to being uselessly ubiquitous).

For Safari on iOS 9, I have three blocker/reducer apps installed (they’re free, too: AdBlock Pro, AdBlock Plus, & Refine (App Store links)). It’d be nice if they worked for Firefox, Opera Mini, and Chrome, too – but alas they do not (yet).

I also run two blocking/reducing extensions in Chrome (my primary web browser) on my desktop – Adblock Plus & AdBlock.

Shame the web has come to this. Schneier’s written about it recently. As have Brad Jones and Phil Barrett.

Wired and Forbes even go so far as to tell you you’re running an ad blocker and ask you either to whitelist them or to pay for a subscription.

Forbes’ message:

Hi again. Looks like you’re still using an ad blocker. Please turn it off in order to continue into Forbes’ ad-light experience.

And from Wired:

Here’s The Thing With Ad Blockers
We get it: Ads aren’t what you’re here for. But ads help us keep the lights on.
So, add us to your ad blocker’s whitelist or pay $1 per week for an ad-free version of WIRED. Either way, you are supporting our journalism. We’d really appreciate it.

If you’re detecting my ad blocker, maybe instead of telling me you won’t do anything until I whitelist you (or subscribe), you should think about the problem with ads first.

Just a thought.

putting owncloud 8 on a subdomain instead of a subdirectory on centos 7

After moving to a new server, I wanted to finally get ownCloud up and running (over SSL, of course) on it.

And I like subdomains for different services, so I wanted to put it at sub.domain.tld. This turns out to be not as straightforward as one might hope, sadly – ownCloud expects to be installed to domain.tld/owncloud (and plops itself into /var/www/owncloud by default (or sometimes /var/www/html/owncloud)).

My server is running CentOS 7, Apache 2.4, and MariaDB (a drop-in replacement for MySQL). This overview is going to presume you’re running the same configuration – feel free to spin one up quickly at Digital Ocean to try this yourself.

Start with the ownCloud installation instructions, which will point you to the openSUSE build service page, where you’ll follow the steps to add the ownCloud community repo to your yum repo list and install ownCloud. (In my last how-to, 8.0 was current – 8.2 has rolled out since I installed 8.1 a couple days ago.)
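The short version of those steps looks something like this (the repo URL comes from the openSUSE build service page and changes between releases – copy the current one from there, not from here):

# repo URL is illustrative; grab the current one from the build service page
wget -O /etc/yum.repos.d/owncloud.repo http://download.opensuse.org/repositories/isv:/ownCloud:/community/CentOS_7/isv:ownCloud:community.repo
yum install -y owncloud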

Here is where you need to go “off the reservation” to get it ready to actually install.

Add a VirtualHost directive to redirect http://sub.domain.tld to https://sub.domain.tld (cipher suite list compiled thusly):


<VirtualHost *:80>
    ServerName sub.domain.tld
    Redirect permanent / https://sub.domain.tld/
</VirtualHost>

Configure an SSL VirtualHost directive to listen for sub.domain.tld:


<VirtualHost *:443>
    SSLCertificateFile /etc/letsencrypt/live/sub.domain.tld/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/sub.domain.tld/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/sub.domain.tld/fullchain.pem
    DocumentRoot /var/www/subdomain
    ServerName sub.domain.tld
    ErrorLog logs/subdomain-error_log
    CustomLog logs/subdomain-access_log "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
    ServerAdmin user@domain.tld
    SSLEngine on
    SSLProtocol all -SSLv2 -SSLv3
    SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
    SSLHonorCipherOrder on
    SSLOptions +StdEnvVars
    <Directory "/var/www/cgi-bin">
        SSLOptions +StdEnvVars
        SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown downgrade-1.0 force-response-1.0
    </Directory>
    # allow .htaccess to change things
    <Directory "/var/www/subdomain">
        Options All +Indexes +FollowSymLinks
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>

Comment-out every line in (or remove) /etc/httpd/conf.d/owncloud.conf.

Move /var/www/html/owncloud/* to /var/www/subdomain.

Make sure permissions are correct on /var/www/subdomain:

  • chown -R :apache /var/www/subdomain
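Taken together, those last three steps are roughly (run as root):

sed -i 's/^/#/' /etc/httpd/conf.d/owncloud.conf   # comment-out every line
mkdir -p /var/www/subdomain
mv /var/www/html/owncloud/* /var/www/subdomain/
chown -R :apache /var/www/subdomain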

Run the command-line installer: /var/www/subdomain/occ maintenance:install
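Since occ is a PHP script, it should be run as the web-server user. A sketch with placeholder values (these are occ’s standard maintenance:install options – swap in your own MariaDB database name, user, and passwords):

cd /var/www/subdomain
sudo -u apache php occ maintenance:install \
  --database "mysql" --database-name "owncloud" \
  --database-user "owncloud" --database-pass "CHANGEME" \
  --admin-user "admin" --admin-pass "CHANGEME"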

Fix ownership of the config file, /var/www/subdomain/config/config.php to root:apache.

In config.php,

  • change trusted domains from ‘localhost’ to ‘sub.domain.tld’
  • make sure ‘datadirectory’ is equal to /var/www/subdomain/data
  • change ‘overwrite.cli.url’ from ‘localhost’ to ‘https://sub.domain.tld’
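The relevant entries in config.php end up looking roughly like this (format per ownCloud’s sample config; only these three keys change from the defaults):

'trusted_domains' =>
  array (
    0 => 'sub.domain.tld',
  ),
'datadirectory' => '/var/www/subdomain/data',
'overwrite.cli.url' => 'https://sub.domain.tld',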

Navigate to http://sub.domain.tld, and follow the prompts – and you should be a happy camper.

above the cloud storage

Who wants to go into business with me?

I’ve got a super-cool storage company idea.

Load up a metric buttload of cubesats with radiation-hardened SSD storage, solar power, and [relatively] simple communication stacks (secured by SSH or SSL, of course), and launch them into orbit.

You think cloud storage is cool? What about above-the-cloud storage?

Pros:

  • avoid national jurisdictional rules, since the data will never be housed “in” a specific country
  • very hard to attack physically
  • great reason to use IPv6 addressing

Cons:

  • expensive to get the initial devices into orbit
  • software maintenance on the system could be annoying
  • need to continually plop more cubesats into orbit to handle both expanded data needs and loss of existing devices due to orbital degradation

Who’s with me?

dave winer is wrong

Or maybe he’s right. But for the wrong reason.

Over on Medium, which is where I saw his post, Dave said:

“The problem of requiring HTTPs in less than 140 chars: 1. Few benefits for blog-like sites, and 2. The costs are prohibitive.

There’s actually a #3 (sorry) — 3. For sites where the owner is gone the costs are more than prohibitive. There’s no one to do the work.”

While this was more-or-less true in times gone by, with the advent of truly free SSL from Let’s Encrypt (and not merely the manual free offering you could get from StartSSL), automated, hands-off maintenance of your SSL-iness is possible (and encouraged) – see my how-to.

There are, potentially, good reasons for saying SSL won’t be required. But cost, upkeep, and “few benefits” are not among them. If anything, SSL-ifying your blog will help mitigate some (not all) attacks launched against self-hosted/-managed services, where login data could otherwise be captured in plaintext.

Dave, I like you. But you’re wrong on this one.