fighting the lack of good ideas

a-frame coopettes for raising chicks

We raise chickens.

For the last few years, we’ve only had layers – and they’ve all been full-grown by the time they arrived at our home.

This year, we decided to buy some chicks because our layers are starting to age-out of being able to lay, and we’re interested in trying our hand at raising a few birds for butchering ourselves.

Since you need to wait until new birds are 6+ weeks old before adding them to your flock, we need a place for them to grow (they were ~8 days old when I bought them).

Here are some pictures of the first collapsible coopette for your viewing pleasure – after which I’ll describe how I put these things together.

The first one (shown above) was the initial implementation of my idea…from which we decided hinging the access door on the top is less than ideal, and we discovered we need 3 hasps to hold the ends on rather than 2.

Materials used:

  • Pressure treated 1x6x8 fence pickets (bought 29 for both coopettes, ended-up with about 3.5 left over – the second coopette is sturdier, and a little prettier)
  • Half-inch opening, 36″ wide hardware cloth (need ~22′ per coopette; ~30′ if you choose to make bottoms – I opted to not make coopette bottoms this time around)
  • Quarter-inch opening, 24″ wide hardware cloth (happened to have a perfectly-sized piece left from another project I could use on the second coopette door)
  • Staples
  • 1 1/4″ ceramic-coated deck screws
  • 2.5″ hinges (5 per coopette … though I wish I’d gone with 3″ hinges instead)
  • 3″ hasps (7 per coopette)

When folded-up, the sides collapse to ~3″ thick. The ends are about 2″ thick, too.

Total space needed against the side of your garage/shed/etc to store the coopette when you aren’t actively using it is ~3′ x 8′ x 6″, or slightly more than a folding table.

Construction was very simple – I made the sides a smidge over 36″ wide so that I could attach the hardware cloth without trimming for more than length.

The ends have a pair of 36″ long boards cut into trapezoids with 30° ends, and a butted ~30″ trapezoid, again with 30° ends (see photo for detail). The butt joint is secured via stapled hardware cloth wrapped around from the outside to the inside (see photo), and a small covering inside screwed into both upright pieces. I used various pieces of scrap for those butt joint covers.

Wrapping the hardware cloth around the ends was the single most time-consuming (and painful!) aspect of construction. Start with a 36″x36″ piece, laid-out square to the bottom of the end. Clamp in place (these 3″ spring clamps from Harbor Freight were a true godsend), and staple as desired … I may have gone a little overboard on the stapling front. On the second coopette, I relied more on sandwiching a little extra fence picket material to capture the hardware cloth, and a little less on staples.

Lessons Learned

Prototype 1 was quick-and-dirty – too much stapling, shouldn’t have had the door hinge at the top, needed to be more stable (sandwich the hardware cloth better)

And two hasps holding the ends on is not sufficient – you need three (one more-or-less at each corner) to really keep the end locked well, and to enable easy movement

Prototype 2 was not as dirty … but moving from fence pickets to 5/4 would be preferable

Likewise, I wish I had put enough support at the bottom to be able to put some casters on at least one end to facilitate moving around the yard (to prevent killing-out the grass underneath)

What would I do differently in the future?

  • Make them longer than 8 feet (if you use 5/4 deck boards, buy the 10, 12, or 16 foot variety)
  • Make the sides slightly higher than 36″ to reduce the need for cutting hardware cloth (a very time-consuming task!)
  • Add wheels to one end for easy movement
  • Plan for a suspended waterer (the gap at the top happened to be wide enough to sling one up using a little rope and a couple carabiners – but it easily could not have been)
  • Hard-roof one end instead of using a tarp … or use a slightly larger tarp that would cover multiple coopettes at once instead of small ones that cover one at a time

determining the ‘legitimacy’/’reliability’ of a domain

I’ve recently been asked by several people to investigate websites (especially e-commerce ones) for reliability/legitimacy.

Thought someone else may find my process useful, and/or have some ideas on how to improve it.

So here goes:

  1. Pop a terminal window (I’m on a Mac, so I open Terminal – feel free to use your terminal emulator of choice (on Windows, you’ll need to have the Subsystem for Linux or Cygwin installed))
    1. Type whois <domain.tld> | less
    2. Look at all of the following:
      • Creation (Creation Date: 2006-02-22T01:12:10Z)
      • Expiration (Registry Expiry Date: 2023-02-22T01:12:10Z)
      • Name server(s) (NS3.PAIRNIC.COM)
      • Registrar URL
      • Registrar (Pair Domains)
      • Contact info (should [generally] be anonymized in some manner)
    3. Possible flags:
      • If the domain’s under 2 years old, and/or the registration period is less than a year (we can talk about when short registrations may make sense in the comments)
      • If the name servers are “out of the country” (which, of course, will vary based on where you are)
      • If the contact info isn’t anonymized
  2. Load the website in question in a browser (use an incognito and/or proxied tab, if you like) and review the following types of pages:
    • Contact Us
      • Where are they located?
      • Does the location stated match what you expect based on the whois response?
    • About Us
      • Does it read “naturally” in the language it purports to be written in?
        • Ie, does it sound like a native speaker wrote it, or does it sound stilted or mechanically translated?
    • Does it match what is in the whois record and the Contact Us page?
    • Do they provide social media links (Twitter, Facebook, LinkedIn, Instagram, etc)?
      • What do their social media presence(s) say about them?
    • Return/Refund Policy (for ecommerce sites only)
      • What is the return window?
      • How much will be charged to send it back and/or restock it?
    • Shipping Policy (for ecommerce sites only)
      • How long from submitting an order to when it ships to when it arrives?
      • Where is it shipping from?
    • Privacy Policy (only applies if you may be sharing data with them – ecommerce, creating accounts, etc)
      • What do they claim they will (and will not) do with your private information?
  3. Is the site running over TLS/SSL?
    • You should see a little padlock icon in your browser’s address bar
    • Click that icon, and read what the browser reports about the SSL certificate used
    • Given that running over TLS is 100% free, there is absolutely NO reason for a site to NOT use SSL (double especially if they’re purporting to be an ecommerce site)
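The date checks in step 1 lend themselves to a quick script. Here’s a minimal sketch of pulling the creation date out of whois output and flagging young domains – the sample text is the example from above, and the grep/sed pattern is an assumption, since real whois output formats vary by registry:

```shell
# Flag a domain whose whois record shows it to be under 2 years old.
# Sample whois text is baked in so the sketch is self-contained; in
# practice you would feed it from: whois_text=$(whois domain.tld)
whois_text='Creation Date: 2006-02-22T01:12:10Z
Registry Expiry Date: 2023-02-22T01:12:10Z'

created=$(printf '%s\n' "$whois_text" | sed -n 's/.*Creation Date: //p')
# GNU date; on a Mac use: date -j -f '%Y-%m-%dT%H:%M:%SZ' "$created" +%s
created_epoch=$(date -d "$created" +%s)
age_days=$(( ( $(date +%s) - created_epoch ) / 86400 ))

if [ "$age_days" -lt 730 ]; then
  echo "FLAG: domain is only $age_days days old"
else
  echo "OK: domain is $age_days days old"
fi
```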

Reviewing these items usually takes me about 2-3 minutes.

It’s not foolproof (after all, better fools are invented every day), but it can give you a good overview and relative confidence level in the site in question.

chelsea troy – designing a course

Via the rands-leadership Slack (in the #i-wrote-something channel), I found an article by Chelsea Troy that was [the last?] in her series on course design.

While I found part 9 interesting, I was bummed there were no internal links to the other parts of the series (at least to previous parts (even if there may be future parts not linked in a given post)).

To rectify that for my 6 readers, and as a resource for myself, here is a table of contents for her series:
  1. What will students learn?
  2. How will the sessions go?
  3. What will we do in a session?
  4. Teaching methods for remoteness
  5. Why use group work?
  6. Dividing students into groups
  7. Planning collaborative activities
  8. Use of surveys
  9. Iterating on the course
She also has some other related, though not part of the “series”, posts I found interesting:
  1. Learning to teach a course
  2. Planning and surviving a 3-hour lecture
  3. Resources for programming instructors
  4. Syllabus design

If you notice future entries to this series (before I do), please comment below so I can add them 🤓

sshuttle – a simple transparent proxy vpn over ssh

I found out about sshuttle from a random tweet that happened to catch my eye.

Here’s the skinny (from the readme):

  • Your client machine (or router) is Linux, FreeBSD, or MacOS.
  • You have access to a remote network via ssh.
  • You don’t necessarily have admin access on the remote network.
  • The remote network has no VPN, or only stupid/complex VPN protocols (IPsec, PPTP, etc). Or maybe you are the admin and you just got frustrated with the awful state of VPN tools.
  • You don’t want to create an ssh port forward for every single host/port on the remote network.
  • You hate openssh’s port forwarding because it’s randomly slow and/or stupid.
  • You can’t use openssh’s PermitTunnel feature because it’s disabled by default on openssh servers; plus it does TCP-over-TCP, which has terrible performance.

Here’s how I set it up on my Mac

Install homebrew:

/bin/bash -c "$(curl -fsSL"

Install sshuttle (as a regular user):

brew install sshuttle

Test the connection to a server you have:

sudo sshuttle -r <user>@host.tld -x host.tld 0/0 -vv

I also made sure that my target server could be connected-to via certificate for my local root user – but you can use a password if you prefer.

Check your IP address:


Once you make sure the connection works, Ctrl-C to end the session.

Then setup an alias in your shell’s .profile (for me, it’s .bash_profile):

alias vpn='sudo sshuttle -r <user>@domain.tld -x domain.tld 0/0'

Other things you can do

According to the full docs, there are a lot more things you can do with sshuttle – including running it on your router, thereby VPN’ing your whole LAN through an endpoint! You can also run it in server mode.
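One of those "more things": per the docs, sshuttle takes subnet arguments, so you don’t have to tunnel everything with 0/0 – you can route just a remote LAN range instead. A sketch (the host and the 192.168.1.0/24 range are placeholders):

```shell
# Route only a specific remote subnet through the tunnel, instead of
# all traffic (0/0) as in the vpn alias above.
alias vpn-lan='sudo sshuttle -r <user>@host.tld 192.168.1.0/24'

# The everything-else variant for comparison, excluding the ssh endpoint:
alias vpn-all='sudo sshuttle -r <user>@host.tld -x host.tld 0/0'
```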

This is a super useful little utility!

basic dockerized jitsi deployment with an apache reverse proxy on centos

After a friend of mine told me he wanted to deploy Jitsi on my main webserver, and me saying “sure”, I decided I wanted to get it up and running on a new server both so I knew how to do it, and to avoid the latency issues of videoconferencing from central North America to Germany and back.

Before I go into how I got it working, let me say that the official Quick Start guide is good – but it doesn’t cover anything but itself.

Here’s the basic setup:

What To Do:

Once you have your new CentOS instance up and running (I used Vultr), here’s everything you need to install:

yum -y install epel-release && yum -y upgrade && yum -y install httpd docker docker-compose screen bind-utils certbot git haveged net-tools mod_ssl

I also installed a few other things, but that’s because I’m multi-purposing this server for Squid, and other things, too.

Enable Apache, firewalld, & Docker:

systemctl enable httpd && systemctl enable docker && systemctl enable firewalld

Now get your swap space setup:

fallocate -l 4G /swapfile && chmod 0600 /swapfile && mkswap /swapfile && swapon /swapfile

Add the following line to the bottom of your /etc/fstab:

/swapfile swap swap defaults 0 0
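If you want to sanity-check the swap before (or instead of) rebooting, a quick look – assuming the usual util-linux and procps tools:

```shell
# Confirm the swap file is active and sized as expected
swapon --show            # lists /swapfile with its size when active
free -h | grep -i swap   # shows total/used/free swap
```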

Restart your VPS:

shutdown -r now

Get your cert from Let’s Encrypt (make sure you’ve already setup appropriate CAA & A records for your domain and any subdomains you want to use):

certbot -t -n --agree-tos --keep --expand --standalone certonly --must-staple --rsa-key-size 4096 --preferred-challenges dns-01,http-01 -m <user>@<domain.tld> -d <jitsi.yourdomain.tld>

Create a root crontab entry to run certbot frequently (I do @weekly ~/
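(My exact script path got eaten above, but a generic root crontab entry along these lines would do it – certbot’s own renew subcommand re-issues anything nearing expiry:)

```
# run as root via: crontab -e
@weekly certbot renew --quiet
```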

Go to the home directory of whatever user you plan to run Jitsi as:

su - <jitsi-user>

Begin the Quick Start directions:

  • git clone && cd docker-jitsi-meet
  • mv env.example .env
  • Change the timezone in .env from Europe/Amsterdam if you want it to show up in a sane timezone (like Etc/UTC)
  • mkdir -p ~/.jitsi-meet-cfg/{web/letsencrypt,transcripts,prosody,jicofo,jvb}
  • docker-compose up -d
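The timezone step above is a one-line sed. A sketch – the fallback printf just makes it runnable outside the docker-jitsi-meet checkout, and assumes the stock value is Europe/Amsterdam as noted above:

```shell
# Swap the stock Europe/Amsterdam timezone in .env for Etc/UTC
[ -f .env ] || printf 'TZ=Europe/Amsterdam\n' > .env
sed -i 's|Europe/Amsterdam|Etc/UTC|' .env
grep 'Etc/UTC' .env
```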

Now configure Apache for SSL. Start with this reference I posted.

But in the [sub]domain-specific conf file z-[sub]domain-tld.conf, add proxy and authentication lines (so that only people you allow to use your video conference can actually use it):

ProxyPreserveHost on
ProxyPass / http://localhost:8000/ nocanon
ProxyPassReverse / http://localhost:8000/
ProxyRequests       off
AllowEncodedSlashes NoDecode
<Proxy http://localhost:8000/*>
    Order deny,allow
    Allow from all
    AuthType Basic
    AuthName "Password Required"
    AuthUserFile /etc/httpd/.htpasswd
    Require valid-user
</Proxy>
RewriteEngine       on
RewriteRule        ^/meetwith/(.*)$ http://%{HTTP_HOST}/$1 [P]
ProxyPassReverseCookiePath /meetwith /

Reload your configs, and make sure they’re happy, fixing any errors that may exist:

apachectl graceful

Setup at least one user who’ll be able to access the site:

htpasswd -B -c /etc/httpd/.htpasswd <user>

You should also configure firewalld to allow only what you want (http, https, ssh):

firewall-cmd --zone=public --add-service=http && firewall-cmd --zone=public --add-service=https && firewall-cmd --zone=public --add-service=ssh

With any luck, when you now navigate to https://[sub.]domain.tld in your web browser, and enter your username and password you created with htpasswd, you’ll get the Jitsi welcome page!

Other Resources:

a fairly comprehensive squid configuration for proxying all the http things

After combing through the docs and several howtos on deploying the Squid proxy server – none of which really did everything I wanted, of course – I’ve finally gotten to the format below.

Installing Squid is easy-peasy – it’s in the standard package repos for the major platforms (CentOS/Fedora/RHEL, Ubuntu/Debian, etc) – so just run yum install squid or apt install squid on your platform of choice (my exact install command on Ubuntu 18.04 was apt -y install squid net-tools apache2-utils).

What I wanted was an “open” (password-protected) proxy server with disk-based caching enabled that would cover all of the ports I could reasonably expect to run into.

Why “open”? Because I want to be able to turn it on and off on various mobile devices which may (or may not) have stable-ish public IPs.

Here is the config as I have it deployed, minus sensitive/site-specific items (usernames, passwords, port, etc), of course:

A working /etc/squid/squid.conf

acl SSL_ports port 443
acl SSL_ports port 8443
acl Safe_ports port 80		# http
acl Safe_ports port 21		# ftp
acl Safe_ports port 443		# https
acl Safe_ports port 1025-65535	# unregistered ports
acl Safe_ports port 280		# http-mgmt
acl Safe_ports port 488		# gss-http
acl Safe_ports port 777		# multiling http
acl Safe_ports port 8080

auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/.htpasswd
auth_param basic children 15
# after "realm", put some descriptive, clever, or otherwise-identifying string that will appear when you login
auth_param basic realm Insert Incredibly Witty Title Here
auth_param basic credentialsttl 5 hours
acl password proxy_auth REQUIRED
http_access allow password

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

#http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
# commented-out to allow "open" use (ie password authenticated)
#http_access deny all

# Squid normally listens to port 3128
# change this line if you want it to listen on something other port
http_port 3128

# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /var/spool/squid 100 16 256
# format is: cache_dir <storage-type> <directory> <size-in-MB> <L1-dirs> <L2-dirs>
cache_dir ufs /etc/squid/squid-cache 768 16 256

# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:		1440	20%	10080
refresh_pattern ^gopher:	1440	0%	1440
refresh_pattern -i (/cgi-bin/|\?) 0	0%	0
refresh_pattern (Release|Packages(.gz)*)$      0       20%     2880
refresh_pattern .		0	20%	4320

via off
forwarded_for off

request_header_access Allow allow all 
request_header_access Authorization allow all 
request_header_access WWW-Authenticate allow all 
request_header_access Proxy-Authorization allow all 
request_header_access Proxy-Authenticate allow all 
request_header_access Cache-Control allow all 
request_header_access Content-Encoding allow all 
request_header_access Content-Length allow all 
request_header_access Content-Type allow all 
request_header_access Date allow all 
request_header_access Expires allow all 
request_header_access Host allow all 
request_header_access If-Modified-Since allow all 
request_header_access Last-Modified allow all 
request_header_access Location allow all 
request_header_access Pragma allow all 
request_header_access Accept allow all 
request_header_access Accept-Charset allow all 
request_header_access Accept-Encoding allow all 
request_header_access Accept-Language allow all 
request_header_access Content-Language allow all 
request_header_access Mime-Version allow all 
request_header_access Retry-After allow all 
request_header_access Title allow all 
request_header_access Connection allow all 
request_header_access Proxy-Connection allow all 
request_header_access User-Agent allow all 
request_header_access Cookie allow all 
request_header_access All deny all

Finalize your Squid server system settings

Things you need to do once you do the above (prepend sudo to each command below if you’re not logged-in as root):

  1. Enable Squid to start at boot: systemctl enable squid
  2. Create the cache directories: squid -z
  3. Create a DNS entry for your proxy host (if you want it usable outside your home network, and don’t want to reference it by IP address only)
  4. Create the authentication file (/etc/squid/.htpasswd in this example): touch /etc/squid/.htpasswd
  5. Create a username and password: htpasswd -c /etc/squid/.htpasswd <username> (don’t forget this username/password combination!)
  6. Start Squid: systemctl start squid
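If you’d rather create that auth-file entry non-interactively (or htpasswd isn’t handy), basic_ncsa_auth understands Apache’s MD5 ($apr1$) hashes, which openssl can generate. A sketch with placeholder credentials:

```shell
# Build a Squid auth-file line without htpasswd prompting.
# "proxyuser" / "changeme" are placeholders - use your own.
user=proxyuser
pass=changeme
entry="$user:$(openssl passwd -apr1 "$pass")"
echo "$entry"
# echo "$entry" >> /etc/squid/.htpasswd   # uncomment to actually install it
```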

Configure your browser to use your new proxy

Here’s where you need to go and what you need to change in Firefox:

  1. Navigate to about:preferences
  2. Click on Settings… under Network Proxy
  3. Enter your proxy host details:

To verify your proxy settings are correct, visit an IP-reporting site with the proxy off, and then again with it on.

If your reported IP address changes between visits (with the second check being your Squid server IP) – congratulations! You have successfully deployed a Squid proxy caching server.

results from running pi-hole for several weeks

I came across pi-hole recently – an ad blocker and DNS service that you can run on a Raspberry Pi in Raspbian (or any Debian or Ubuntu (ie Debian-like)) system. Using pi-hole should obviate the need for running ad-blockers in your browser (so long as you’re on a network that is running DNS queries through pi-hole).

I’ve seen some people running it on CentOS – but I’ve had issues with that combination, so am keeping to the .deb-based distros (specifically, I’m running it on the smallest droplet size from Digital Ocean with Ubuntu 16.04).

First the good – it is truly stupidly-simple to get setup and running. A little too simple – not because tools should have to be hard to use, but because there’s not much configuration that goes in the automated script. Also, updating the blacklist and whitelist is easy – though they don’t always update via the web portal as you’d hope.

Second, configuration is almost all manual: so, if you want to use more than 2 upstream DNS hosts (I personally want to hit both Google and Freenom upstream), for example, there is manual file editing. Or if you want to have basic auth enabled for the web portal, you need to not only add it manually, but you need to re-add it manually after any updates.

Third, the bad. This is not a pi-hole issue, per se, but it is still relevant: most devices that you would configure to use DNS for your home (or maybe even enterprise) want at least two entries (eg your cable modem, or home wifi router). You can set only one DNS provider with some devices, but not all. Which goes towards showing how pi-hole might not be best run outside your network – if you piggy-back both DHCP and DNS off your RPi, and not off the wireless router you’re probably running, then you’re OK. But if your wireless router / cable modem demands multiple DNS entries, you either need to run multiple pi-hole servers somewhere, or you need to realize not everything will end up going through the hole.

Pi-hole sets up a lighttpd instance (which you don’t have to use) so you can see a pretty admin panel:


I added basic authentication to the admin subdirectory by adding the following lines to /etc/lighttpd/lighttpd.conf after following this tutorial:

#add http basic auth
auth.backend = "htdigest"
auth.backend.htdigest.userfile = "/etc/lighttpd/.htpasswd/lighttpd-htdigest.user"
auth.require = ("/admin" =>
( "method" => "digest",
"realm" => "rerss",
"require" => "valid-user" )
)
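The htdigest backend expects the userfile to hold lines of user:realm:md5(user:realm:password) – Apache’s htdigest utility writes these, or you can build one by hand. A sketch with placeholder credentials (the realm must match the "rerss" realm configured above):

```shell
# Build an htdigest-format line for the lighttpd userfile by hand.
# "admin" / "secret" are placeholders - use your own.
user=admin
realm=rerss
pass=secret
hash=$(printf '%s:%s:%s' "$user" "$realm" "$pass" | md5sum | cut -d' ' -f1)
printf '%s:%s:%s\n' "$user" "$realm" "$hash"   # append this line to the userfile
```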

I also have 4 upstream DNS providers in /etc/dnsmasq.d/01-pihole.conf:
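(The file contents didn’t survive here, but based on the addresses called out below, the four server lines would look like this – dnsmasq syntax; the specific Freenom resolver addresses are my assumption:)

```
# /etc/dnsmasq.d/01-pihole.conf - upstream resolvers
server=8.8.8.8
server=8.8.4.4
server=80.80.80.80
server=80.80.81.81
```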


I still need to SSLify the page, but that’s coming.

The 8.8.* addresses are Google’s public DNS. The 80.80.* addresses are Freenom’s. There are myriad more free DNS providers out there – these are just the ones I use.

So what’s my tl;dr on pi-hole? It’s pretty good. It needs a little work to get it more stable between updates – but it’s very close. And I bet if I understood a little more of the setup process, I could probably make a fix to the update script that wouldn’t clobber (or would restore) any custom settings I have in place.