antipaucity

fighting the lack of good ideas

4 places to check your website’s ssl/tls security settings

Qualys – https://www.ssllabs.com/ssltest

High-Tech Bridge – https://www.htbridge.com/ssl

Comodo – https://sslanalyzer.comodoca.com

SSL Checker – https://www.sslchecker.com/sslchecker

ssl configuration for apache 2.4 on centos 7 with let’s encrypt

In follow-up to my previous posts about SSL (specifically with Let’s Encrypt), here is the set of SSL configurations I use with all my sites. Used correctly, these should score you an “A+” with no warnings from ssllabs.com. Note: I have an improved-entropy package (twuewand) installed. This is adapted from the Mozilla config generator, with specific options added for individual sites and/or to match Let’s Encrypt’s recommendations.

Please note: you will need to modify the config files to represent your own domains, if you choose to use these as models.

[/etc/httpd/conf.d/defaults.conf]

#SSL options for all sites
Listen 443
SSLPassPhraseDialog  builtin
SSLSessionCache         shmcb:/var/cache/mod_ssl/scache(512000)
SSLSessionCacheTimeout  300
Mutex sysvsem default
SSLRandomSeed startup builtin
SSLRandomSeed startup file:/dev/urandom  1024
# requires twuewand to be installed
SSLRandomSeed startup exec:/bin/twuewand 64
SSLRandomSeed connect builtin
SSLRandomSeed connect file:/dev/urandom 1024
SSLCryptoDevice builtin
# the SSLSessionTickets directive was only added in httpd 2.4.11 - it does not work on Apache 2.4.6-45
#SSLSessionTickets       off
SSLCompression          off
SSLHonorCipherOrder	on
# there may be an unusual use case for enabling TLSv1.1 or TLSv1.0 - but I don't know what it would be
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
SSLOptions +StrictRequire
SSLUseStapling on
SSLStaplingResponderTimeout 5
SSLStaplingReturnResponderErrors off
SSLStaplingCache        shmcb:/var/run/ocsp(128000)

#all unknown requests get domain.tld (over http)
<VirtualHost *:80>
    DocumentRoot /var/html
    ServerName domain.tld
    ServerAlias domain.tld *.domain.tld
    ErrorLog logs/domain-error_log
    CustomLog logs/domain-access_log combined
    ServerAdmin user@domain.tld
    <Directory "/var/html">
         Options All +Indexes +FollowSymLinks
         AllowOverride All
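         # note: Order/Allow is 2.2-style access control; it works on CentOS 7's Apache 2.4
         # because mod_access_compat is loaded by default - 'Require all granted' is the 2.4-native equivalent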
         Order allow,deny
         Allow from all
    </Directory>
</VirtualHost>

SetOutputFilter DEFLATE
AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript text/css text/php
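Once a site is live with these settings, you can sanity-check the protocol restrictions from any machine with the openssl CLI (domain.tld is a placeholder, as in the configs below, and this assumes your openssl build still supports the -tls1 flag):

openssl s_client -connect domain.tld:443 -tls1 < /dev/null     # should be rejected - TLSv1.0 is disabled
openssl s_client -connect domain.tld:443 -tls1_2 < /dev/null   # should connect - TLSv1.2 is enabled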

[/etc/httpd/conf.d/z-[sub-]domain-tld.conf]

<VirtualHost *:80>
    ServerName domain.tld
# could use * instead of www if you don't use subdomains for anything special/separate
    ServerAlias domain.tld www.domain.tld
    Redirect permanent / https://domain.tld/
</VirtualHost>

<VirtualHost *:443>
    SSLCertificateFile /etc/letsencrypt/live/domain.tld/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/domain.tld/privkey.pem
# if you put "fullchain.pem" here, you will get an error from ssllabs
    SSLCertificateChainFile /etc/letsencrypt/live/domain.tld/chain.pem
    DocumentRoot /var/www/domain
    ServerName domain.tld
    ErrorLog logs/domain-error_log
    CustomLog logs/domain-access_log \
          "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
    ServerAdmin user@domain.tld

# could put this in defaults.conf - I prefer it in each site config
    SSLEngine on

<Files ~ "\.(cgi|shtml|phtml|php3?)$">
    SSLOptions +StdEnvVars
</Files>
<Directory "/var/www/cgi-bin">
    SSLOptions +StdEnvVars
</Directory>

SetEnvIf User-Agent ".*MSIE.*" \
         nokeepalive ssl-unclean-shutdown \
         downgrade-1.0 force-response-1.0

    <Directory "/var/www/domain">
         Options All +Indexes +FollowSymLinks
         AllowOverride All
         Order allow,deny
         Allow from all
    </Directory>

</VirtualHost>

I use the z-….conf naming to ensure all site-specific configs are loaded after everything else (httpd pulls in conf.d/*.conf alphabetically). That conveniently breaks every site into its own config file, too.
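For example, a hypothetical conf.d listing – the z- files always sort, and therefore load, last:

ls /etc/httpd/conf.d
autoindex.conf  defaults.conf  ssl.conf  z-domain-tld.conf  z-sub-domain-tld.conf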

The config file for a non-https site is much simpler:

<VirtualHost *:80>
    DocumentRoot /var/www/domain
    ServerName domain.tld
    ServerAlias domain.tld *.domain.tld
    ErrorLog logs/domain-error_log
    CustomLog logs/domain-access_log combined
    ServerAdmin user@domain.tld
    <Directory "/var/www/domain">
         Options All +Indexes +FollowSymLinks
         AllowOverride All
         Order allow,deny
         Allow from all
    </Directory>
</VirtualHost>

If you’re running something like Nextcloud, you may want to turn on Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains" in the <VirtualHost> directive for the site. I haven’t decided yet if I should put this in every SSL-enabled site’s configs or not.
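A sketch of what that looks like in place (requires mod_headers; the Header line is the only addition to the 443 VirtualHost shown earlier):

<VirtualHost *:443>
    # ... cert, log, and directory directives as above ...
    Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains"
</VirtualHost>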

automated let’s encrypt ssl certificate renewal on centos 7

In my how-to for Let’s Encrypt, I gave an example script that can be called via cron (or manually) which will renew Let’s Encrypt SSL certificates under CentOS 6.

If you want to do it on CentOS 7 (which is what I am now running), use the following:

cd ~/letsencrypt
git pull
systemctl stop httpd.service
~/letsencrypt/letsencrypt-auto --agree-tos --keep --rsa-key-size 2048 --standalone certonly -m user@domain.tld -d domain.tld [-d sub.domain.tld [-d ...]]
systemctl start httpd.service

Now, what does this script do? Step by step:

  1. change into the Let’s Encrypt repo clone in root’s home (~/letsencrypt)
  2. pull down the latest updates to the Let’s Encrypt toolset
  3. stop httpd (Apache in my case, though you might be running nginx or something else)
  4. run the cert tool in automated form:
    1. agree to terms of service
    2. keep current cert if it doesn’t need to be updated
    3. key size of 2048 bits
    4. run the standalone webserver to verify “ownership” of the domain
    5. generate just the cert
    6. administrative email (optional, but “encouraged”)
    7. domain(s) to issue cert for (must be individually identified with successive -d flags; LE does not support wildcard certs)
  5. restart httpd
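Wrapped up as the renew-le-ssl.sh script referenced in the crontab entry below, it might look like this (a sketch – the email and domains are placeholders):

#!/bin/bash
# refresh the Let's Encrypt toolset, then renew certs (CentOS 7)
cd /root/letsencrypt || exit 1
git pull
systemctl stop httpd.service
./letsencrypt-auto --agree-tos --keep --rsa-key-size 2048 \
    --standalone certonly -m user@domain.tld \
    -d domain.tld -d www.domain.tld
systemctl start httpd.service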

I set mine to run @weekly in cron. @monthly is likely good enough, but since it’s “free” to run, running slightly more often than necessary seems good to me. Plus, if you’re getting SSL certs for many domains all served from the same server, they may have different expiration dates, so running more often is better.

My crontab entry for renewing certs:

@weekly /root/renew-le-ssl.sh

let’s encrypt centos 6 – truly free ssl

There’s been quite a bit of excitement surrounding Let’s Encrypt recently – a truly 100% free SSL issuer.

Last week I helped a friend of mine get his first Let’s Encrypt certificate generated and configured for his website. One of the things I found incredibly frustrating is that Let’s Encrypt does not have a package for Red Hat/CentOS/Fedora! Ignoring such a massive installed base seems monumentally dumb – so I hope that they correct it soon. Until they do, however, here’s a tutorial that should cover the gotchas for getting Let’s Encrypt to work on a CentOS 6 server with Apache 2.

The documentation (as of 06 Jan 2016) on the Let’s Encrypt website is in error in a few places (or, at least, not as correct as it could/should be). One big thing to note, for example, is that it says Python 2.6 is supported (the current release for RHEL/CentOS 6). If you run the certificate generator without the --debug flag, though, it will error-out saying Python 2.6 is not supported.

While I used an existing CentOS 6 server, I’ll start this tutorial as I have many others – by telling you to go get a CentOS 6 server from Digital Ocean or Chunk Host.

Preliminaries

Login as root (or a sudo-privileged account – but root is easier), and install Apache, Python, and SSL support: yum install httpd python mod_ssl.

Also enable the EPEL repository: yum install epel-release (the package is available from the CentOS Extras repository). I’m going to assume you are familiar with configuring Apache, and will only provide the relevant snippets from ssl.conf herein.

Now that the basics are done, let’s move to Let’s Encrypt. I ran the tool in interactive mode (which requires ncurses – it’s probably already installed on your system). But since Let’s Encrypt certs expire after 90 days, you’ll want a crontab entry for renewals, so I’ll compact the interactive session into a single command-line call at the end – which you’ll need to “know” how to do, since the --help argument doesn’t do anything yet (that I could find).

Initial Certificate Creation

First, grab the latest Let’s Encrypt from GitHub:
git clone https://github.com/letsencrypt/letsencrypt && cd letsencrypt

Stop Apache: service httpd stop. Let’s Encrypt is going to try to bind to ports 80 and 443 to ensure you have control of the domain.

Now run the letsencrypt-auto tool – in debug mode so it’ll work with Python 2.6: ./letsencrypt-auto --debug certonly.

Use certonly because the plugins to automate installing for Apache and Nginx don’t work on CentOS yet.

Enter your domain name(s) for which you want to issue a certificate. If you accept incoming connections to www.domain.tld and domain.tld, be sure to put both in the list (likewise, if you have, say, blog.domain.tld that you want included).

Enter an administrative email address.

When the tool finishes, it’ll put symlinks in /etc/letsencrypt/live/domain.tld, with the “actual” certs in /etc/letsencrypt/archive/domain.tld. We’re going to reference the symlinks in /etc/letsencrypt/live/domain.tld next.
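A hypothetical listing of what the tool leaves behind (the 1 suffix increments each time the cert is reissued):

ls -l /etc/letsencrypt/live/domain.tld/
cert.pem -> ../../archive/domain.tld/cert1.pem
chain.pem -> ../../archive/domain.tld/chain1.pem
fullchain.pem -> ../../archive/domain.tld/fullchain1.pem
privkey.pem -> ../../archive/domain.tld/privkey1.pem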

Edit /etc/httpd/conf.d/ssl.conf (I prefer emacs – but use whatever you prefer), and add the following lines in your VirtualHost directive:
SSLCertificateFile /etc/letsencrypt/live/domain.tld/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/domain.tld/privkey.pem
SSLCertificateChainFile /etc/letsencrypt/live/domain.tld/chain.pem

Restart Apache: service httpd start.

Try hitting https://domain.tld in your web browser – and you should be golden!

Automating Renewal

Create a small shell script called renew-LE-certs.sh somewhere you’ll remember where it is – like /root:
service httpd stop
# add additional '-d' entries for more subdomains
/path/to/letsencrypt/letsencrypt-auto --debug --keep --agree-tos --rsa-key-size 2048 certonly -m ssladmin@domain.tld -d domain.tld -d www.domain.tld
service httpd start

For your crontab entry, do the following to setup monthly cert renewal:
@monthly /path/to/renew-LE-certs.sh

rethinking pi-hole (again)

About 2 years ago, I started running Pi-hole as a DNS resolver and ad-blocker. Then last year, I ditched it.

After seeing a recent post by Troy Hunt, though, I thought it might be worth revisiting… but I needed a better way to control how it worked.

Enter OpenVPN – a service I already run on three endpoints. Here’s what I did:

Install Pi-hole per the usual (curl -sSL https://install.pi-hole.net | bash if you’re feeling brave; curl -sSL https://install.pi-hole.net, then inspect and run, if you’re feeling a little more wary).

This time, though, I set my upstream DNS providers to Cloudflare (1.1.1.1) and Quad9 (9.9.9.9) instead of Freenom and Google.

I also did a two-step install – once with Pi-hole listening on the primary network interface on my OpenVPN endpoint (ie the public IP), and then, once I made sure all was happy, I flipped it to listen on tun0 – the OpenVPN-provided interface. This means Pi-hole can only hear DNS queries if you’re connected to the VPN.

Why the change from how I’d done it before? Two reasons (at least):

First, if you leave Pi-hole open to the world, you can get involved in DNS amplification attacks. That is muy no bueno.

Second, sometimes I don’t care about ads – sometimes I do. I don’t care, for example, most of the time when I’m home. But when I’m traveling or on my iPhone? I care a lot more then.

Bonus – since it’s only “working” when connected to my VPN, it’s super easy to check if a site isn’t working because of Pi-hole, or because it just doesn’t like my browser (hop off the VPN, refresh, and see if all is well that wasn’t when on the VPN).

Changes you need to make to your OpenVPN’s server.conf:


push "dhcp-option DNS 10.8.0.1"

This ensures clients use the OpenVPN server as their DNS resolver. (Note: 10.8.0.1 might not be your OpenVPN parent IP address; adjust as necessary.) Restart OpenVPN after making this change.
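On a systemd-based distro that restart is something like the following (the unit name depends on what your server config file is called – server.conf here):

systemctl restart openvpn@server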

My setupVars.conf for Pi-hole:


PIHOLE_INTERFACE=tun0
IPV4_ADDRESS=10.8.0.1/24
IPV6_ADDRESS=
QUERY_LOGGING=true
INSTALL_WEB_SERVER=true
INSTALL_WEB_INTERFACE=true
LIGHTTPD_ENABLED=false
WEBPASSWORD=01f3217c12bcdf8aa0ca08cdf737f99cd68a46dbdc92ce35fd75f39ce2faaf81
DNSMASQ_LISTENING=single
PIHOLE_DNS_1=1.1.1.1
PIHOLE_DNS_2=1.0.0.1
PIHOLE_DNS_3=9.9.9.9
DNS_FQDN_REQUIRED=true
DNS_BOGUS_PRIV=true
DNSSEC=false
CONDITIONAL_FORWARDING=false

I tried getting lighttpd to listen only on port 443 so I could use Let’s Encrypt’s SSL certs, following a handful of tutorials and walk-throughs, but was unsuccessful. So I disabled lighttpd, and only start it by hand if I want to check on my Pi-hole’s status.
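That manual check is just a start/stop pair (assuming the stock lighttpd service unit):

systemctl start lighttpd
# ... look at the admin console ...
systemctl stop lighttpd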

Speaking of which, as I write this, here is what the admin console looks like:

admin console screenshot

Hope this helps you.

a fairly comprehensive squid configuration for proxying all the http things

After combing through the docs and several howtos on deploying the Squid proxy server – none of which really did everything I wanted, of course – I’ve finally gotten to the format below.

Installing Squid is easy-peasy – it’s in the standard package repos for the major platforms (CentOS/Fedora/RHEL, Ubuntu/Debian, etc) – so just run yum install squid or apt install squid on your platform of choice (my exact install command on Ubuntu 18.04 was apt -y install squid net-tools apache2-utils).

What I wanted was an “open” (password-protected) proxy server with disk-based caching enabled that would cover all of the ports I could reasonably expect to run into.

Why “open”? Because I want to be able to turn it on and off on various mobile devices which may (or may not) have stable-ish public IPs.

Here is the config as I have it deployed, minus sensitive/site-specific items (usernames, passwords, port, etc), of course:


A working /etc/squid/squid.conf

acl SSL_ports port 443
acl SSL_ports port 8443
acl Safe_ports port 80		# http
acl Safe_ports port 21		# ftp
acl Safe_ports port 443		# https
acl Safe_ports port 1025-65535	# unregistered ports
acl Safe_ports port 280		# http-mgmt
acl Safe_ports port 488		# gss-http
acl Safe_ports port 777		# multiling http
acl Safe_ports port 8080
acl CONNECT method CONNECT

auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/.htpasswd
auth_param basic children 15
# after "realm", put some descriptive, clever, or otherwise-identifying string that will appear when you login
auth_param basic realm Insert Incredibly Witty Title Here
auth_param basic credentialsttl 5 hours
acl password proxy_auth REQUIRED
http_access allow password

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

#http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
# commented-out to allow "open" use (ie password authenticated)
#http_access deny all

# Squid normally listens to port 3128
# change this line if you want it to listen on some other port
# http_port 

# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /var/spool/squid 100 16 256
# format is: cache_dir <type> <directory> <size in MB> <L1 subdirs> <L2 subdirs>
cache_dir ufs /tmp/squid-cache 768 16 256

# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid

# Add any of your own refresh_pattern entries above these.
#
refresh_pattern ^ftp:		1440	20%	10080
refresh_pattern ^gopher:	1440	0%	1440
refresh_pattern -i (/cgi-bin/|\?) 0	0%	0
refresh_pattern (Release|Packages(.gz)*)$      0       20%     2880
refresh_pattern .		0	20%	4320

via off
forwarded_for off

request_header_access Allow allow all 
request_header_access Authorization allow all 
request_header_access WWW-Authenticate allow all 
request_header_access Proxy-Authorization allow all 
request_header_access Proxy-Authenticate allow all 
request_header_access Cache-Control allow all 
request_header_access Content-Encoding allow all 
request_header_access Content-Length allow all 
request_header_access Content-Type allow all 
request_header_access Date allow all 
request_header_access Expires allow all 
request_header_access Host allow all 
request_header_access If-Modified-Since allow all 
request_header_access Last-Modified allow all 
request_header_access Location allow all 
request_header_access Pragma allow all 
request_header_access Accept allow all 
request_header_access Accept-Charset allow all 
request_header_access Accept-Encoding allow all 
request_header_access Accept-Language allow all 
request_header_access Content-Language allow all 
request_header_access Mime-Version allow all 
request_header_access Retry-After allow all 
request_header_access Title allow all 
request_header_access Connection allow all 
request_header_access Proxy-Connection allow all 
request_header_access User-Agent allow all 
request_header_access Cookie allow all 
request_header_access All deny all

Finalize your Squid server system settings

Things you need to do once you do the above (prepend sudo to each command below if you’re not logged-in as root):

  1. Enable Squid to start at boot: systemctl enable squid
  2. Create the cache directories: squid -z
  3. Create a DNS entry for your proxy host (if you want it usable outside your home network, and don’t want to reference it by IP address only)
  4. Create the authentication file (located at /etc/squid/.htpasswd in this example): touch /etc/squid/.htpasswd
  5. Create a username and password: htpasswd -c /etc/squid/.htpasswd <username> (don’t forget this username/password combination!)
  6. Start Squid: systemctl start squid

Configure your browser to use your new proxy

Here’s where you need to go and what you need to change in Firefox:

  1. Navigate to about:preferences
  2. Click on Settings… under Network Proxy
  3. Enter your proxy host details (hostname and port), and apply them to all protocols

To verify your proxy settings are correct, visit IPv4.cf with both the proxy off, and then again with it on.

If your reported IP address changes between visits (with the second check being your Squid server IP) – congratulations! You have successfully deployed a Squid proxy caching server.
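You can run the same check from a shell, too (host, port, and credentials here are placeholders for your own):

curl https://ipv4.cf/                                                # direct - your normal public IP
curl -x http://user:password@proxy.domain.tld:3128 https://ipv4.cf/  # proxied - should report the Squid server's IP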

but, i got them on sale!

Back in August 2008, I had a one-week “quick start” professional services engagement in Nutley, New Jersey. It was supposed to be a super-simple week: install HP Server Automation at BT Global.

Another ProServe engineer was onsite to setup HP Network Automation.

Life was gonna be easy-peasy – the only deliverable was to setup and verify a vanilla HPSA installation.

Except, like every Professional Services engagement in history, all was not as it seemed.

First monkey wrench: our primary technical contact / champion was an old-hat Sun Solaris fan (to the near-exclusion of any other OS for any purpose – he even wanted to run SunOS on his laptop).

Second monkey wrench: expanding on the first, our technical contact was super excited about the servers he’d gotten just the weekend before from Sun because they were “on sale”.

It’s time for a short background digression. Because technical intricacies matter.

HP Server Automation was written on Red Hat Linux. It worked great on RHEL. But, due to some [large] customer requests, it also supported running on Sun Solaris.

In 2005, Sun introduced a novel architecture dubbed “Niagara” – the UltraSPARC T1 – which they offered in their T1000 and T2000 series servers. Niagara did several clever things: it ran multiple threads per core, with as many as 32 simultaneous threads across its 8 cores.

According to AnandTech, the UltraSPARC T1 was a “72 W, 1.2 GHz chip almost 3 times (in SpecWeb2005) as fast as four Xeon cores at 2.8 GHz”.

But there is always a tradeoff. The tradeoff Sun chose for the first CPU in the product line was to share a single FPU (floating point unit) between the integer cores and pipelines. For workloads that mostly involve static / simple data (ie, not much in the way of calculation), they were blazingly fast.

But sharing an FPU brings problems when you need to actually do floating-point math – which cryptographic algorithms and protocols all end up relying on when gathering entropy for their random-value generation. Why does this matter? Well, in the case of HPSA, not only is all interprocess, intraserver, and interserver communication secured with HTTPS certificates, but because large swaths of it are written in Java, each JVM needs to emulate its own FPU – so the single FPU is not only shared between all of the integer cores of the T1 CPU, it is further time-sliced and shared amongst every JRE instance.

At the time, the “standard” reboot time for a server running in an SA Core was generally benchmarked at ~15-20 minutes. That time encompassed all of the following:

  • stop all SA processes (in the proper order)
  • stop Oracle
  • restart the server
  • start Oracle
  • start all SA components (in the proper order)

As you’ll recall from my article on the Sun JRE 1.4.x from 6.5 years ago, there is a Java component (the Twist) that already takes a long time to start as it seeds its entropy pool.

So when it is sharing the single FPU not only between other JVMs, but between every other process which might end up needing it, the total start time is extended dramatically.

How dramatically? Shutdown alone was taking upwards of 20 minutes. Startup was north of 35 minutes.

That’s right – instead of ~15-20 minutes for a full restart cycle, if you ran HPSA on a T1-powered server, you were looking at ~60+ minutes to restart.

Full restarts, while not incredibly common, are not all that unusual, either.

At the time, it was not unusual to want to fully restart an HPSA Core 2-3 times per month. And during initial installation and configuration, restarts need to happen 4-5 times in addition to the number of times various components are restarted during installation as configuration files are updated, new processes and services are started, etc.

What should have been about a one-day setup, with 2-3 days of knowledge transfer – turned into nearly 3 days just to install and initially configure the software.

And why were we stuck on this “revolutionary” hardware? Because of what I noted earlier: our main technical contact was a die-hard Solaris fanboi who’d gotten these servers “on sale” (because their Sun rep “liked them”).

How big a “sale” did he get? Well, his sales rep told him they were getting these last-model-year boxes for 20% off list plus an additional 15% off! That sounds pretty good – depending on how you do the math, he was getting somewhere between 32% and 35% off the list price – for a little over $14,000 a piece (they’d bought two servers – one to run Oracle RDBMS (which Oracle themselves recommended not running on the T1 CPU family), and the other to run HPSA proper).

Except his sales rep lied. Flat-out lied. How do I know? Because I used Sun’s own server configurator site and was able to configure two identical servers for just a smidge over $15,000 each – with no discounts. That means they got 7% off list… tops.
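For the curious, the quick math behind those competing discount claims:

1 - (0.80 * 0.85)   = 0.32  -> 32% off if the two discounts compound
0.20 + 0.15         = 0.35  -> 35% off if they simply add
1 - (14000 / 15000) ≈ 0.07  -> the ~7% they actually got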

So not only were they running hardware barely discounted off list (and, interestingly, only slightly cheaper – by less than $2000 – than the next-generation T2-powered servers, which had an FPU per core rather than per CPU, and which still had some performance issues but at least weren’t dog-vomit slow), but they were running it on Solaris – which had always been a second-class citizen when it came to HPSA performance: all things being roughly equal, x86 hardware running RHEL would always smack the pants off SPARC hardware running Solaris under Server Automation.

For kicks, I configured a pair of servers from Dell (because their online server configurator worked a lot better than any other I knew of, and because I wanted to demonstrate that just because SA was an HP product didn’t mean you had to run HP servers), and was able to massively out-spec two x86 servers for less than $14,000 a pop (more CPU cores, more RAM, more storage, etc) and present my findings as part of our write-up of the week.

Also for kicks, I demoed SA running in a 2-CPU, 4GB VM on my laptop rebooting faster than either T1000 server they had purchased could restart.

What’s the moral of this story? There are two (at least):

  1. Always always always find out from your vendor if they have a preferred or suggested architecture before namby-pamby buying hardware from your favorite sales rep, and
  2. Be ever ready and willing to kick your preconceived notions to the sidelines when presented with evidence that they are not merely ill thought out, but out and out, objectively wrong

These are fundamental tenets of automation:

“Too many people try to take new tools and make them fit their current processes, procedures, and policies – rather than seeing what policies, procedures, and processes are either made redundant by the new tools, or can be improved, shortened, or – wait for it – automated!”

You must always be reviewing and rethinking your preconceived notions, what policies you’re currently following, etc. As I heard recently, you need to reverse your benchmarks: don’t ask, “why are we doing X?”; ask, “what would happen if we didn’t do X?”

That was a question never asked by anyone prior to our arrival to implement what sales had sold them.