
hey, virtualbox – don’t be retarded

Ran across this error recently in an Ubuntu guest on my VirtualBox install: VBoxClient: (seamless): failed to start, Stage: Setting guest IRQ filter mask Error: VERR_INTERNAL_ERROR

Gee, isn’t that a useful message.

Fortunately, there was a forums.virtualbox thread on just this error.

The upshot is that this error is actually caused by a failure during the initial install of the VirtualBox Guest Additions.

In the middle of what looks like, at a quick glance, a successful GA installation, is this nugget: Please install the gcc make perl packages from your distribution.

The GA installer can’t compile kernel modules without a compiler.

And that makes sense.

What doesn’t make sense is that this error is even possible to get! The GA installer must run as root (or via sudo).

If those packages are missing, the installer should stop what it’s doing, ask the user if they want to install these packages (because without them the GA installer won’t install everything), and then when the user invariably answers “yes” (because – duh! – why wouldn’t they want this to work?), go run an apt -y install gcc make perl.

But is that what Oracle in their infinite wisdom decided to do?

No. They decided it’s better to just quietly report in the middle of a bunch of success statements that “oh, by the way – couldn’t actually do what you wanted, but if you don’t notice, you’re going to spend hours on Google trying to figure it out”.

Morons.

It really isn’t that hard to make human-friendly error messages … nor to even try to pre-solve the error condition you found!
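For anyone else who hits this: the actual fix is just to install the build tools and then re-run the Guest Additions installer. A minimal sketch, assuming an Ubuntu guest with the Guest Additions CD already mounted (the mount point under /media will vary by username and GA version):

sudo apt install -y gcc make perl
# re-run the installer from wherever the Guest Additions CD is mounted
sudo sh /media/$USER/VBOXADDITIONS*/VBoxLinuxAdditions.run
sudo reboot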

fallocate vs dd for swap file creation

I recently ran across this helpful Digital Ocean community answer about creating a swap file at droplet creation time.

So I decided to test how long my old method (dd) takes to run versus fallocate.

Here’s how long it takes to run fallocate on a fresh 40GB droplet:

root@ubuntu:/# rm swapfile && time fallocate -l 1G /swapfile
real	0m0.003s
user	0m0.000s
sys	0m0.000s

root@ubuntu:/# rm swapfile && time fallocate -l 2G /swapfile
real	0m0.004s
user	0m0.000s
sys	0m0.000s

root@ubuntu:/# rm swapfile && time fallocate -l 4G /swapfile
real	0m0.006s
user	0m0.000s
sys	0m0.004s

root@ubuntu:/# rm swapfile && time fallocate -l 8G /swapfile
real	0m0.007s
user	0m0.000s
sys	0m0.004s

root@ubuntu:/# rm swapfile && time fallocate -l 16G /swapfile
real	0m0.012s
user	0m0.000s
sys	0m0.008s

root@ubuntu:/# rm swapfile && time fallocate -l 32G /swapfile
real	0m0.029s
user	0m0.000s
sys	0m0.020s

Interestingly, the relationship of size to time is non-linear when running fallocate.

Compare that to building a 4GB swap file with dd on the same server (it turned out that either a 16KB or a 4KB bs gives the fastest run time):

time dd if=/dev/zero of=/swapfile bs=16384 count=262144 

262144+0 records in
262144+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 4.52602 s, 949 MB/s

real	0m4.528s
user	0m0.048s
sys	0m4.072s

Yes, you read that correctly – using dd with an “optimum” bs of 16KB (after testing many different bs settings) takes roughly 750 times as long as fallocate does to create the same “size” file (4.5 seconds vs 6 milliseconds)!

How is fallocate so much faster? The details are in the man pages for it (emphasis added):

fallocate is used to manipulate the allocated disk space for a file, either to deallocate or preallocate it. For filesystems which support the fallocate system call, preallocation is done quickly by allocating blocks and marking them as uninitialized, requiring no IO to the data blocks. This is much faster than creating a file by filling it with zeroes.

dd will “always” work. fallocate will work almost all of the time … but if you happen to be using a filesystem which doesn’t support it, you need to know how to use dd.

But: if your filesystem supports fallocate (and it probably does), it is orders of magnitude more efficient to use it for file creation.
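If you want belt-and-suspenders file creation, here’s a minimal sketch (the path and size are examples, matching the 4GB case above) that prefers fallocate and falls back to dd when the filesystem doesn’t support it, then finishes the usual swap-file steps:

# prefer fallocate; fall back to dd on filesystems without fallocate support
SWAPFILE=/swapfile
if ! fallocate -l 4G "$SWAPFILE" 2>/dev/null; then
    dd if=/dev/zero of="$SWAPFILE" bs=16384 count=262144
fi
# the usual swap-file steps
chmod 600 "$SWAPFILE"
mkswap "$SWAPFILE"
swapon "$SWAPFILE"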

ssl configuration for apache 2.4 on centos 7 with let’s encrypt

As a follow-up to previous posts I’ve written about SSL (specifically with Let’s Encrypt), here is the set of SSL configurations I use with all my sites. Used correctly, these should score you an “A+” with no warnings from ssllabs.com. Note: I have an improved entropy package (twuewand) installed. This is adapted from the Mozilla config generator, with specific options added for individual sites and/or to match Let’s Encrypt’s recommendations.

Please note: you will need to modify the config files to represent your own domains, if you choose to use these as models.

[/etc/httpd/conf.d/defaults.conf]

#SSL options for all sites
Listen 443
SSLPassPhraseDialog  builtin
SSLSessionCache         shmcb:/var/cache/mod_ssl/scache(512000)
SSLSessionCacheTimeout  300
Mutex sysvsem default
SSLRandomSeed startup builtin
SSLRandomSeed startup file:/dev/urandom  1024
# requires twuewand to be installed
SSLRandomSeed startup exec:/bin/twuewand 64
SSLRandomSeed connect builtin
SSLRandomSeed connect file:/dev/urandom 1024
SSLCryptoDevice builtin
# the SSLSessionTickets directive should work - but on Apache 2.4.6-45, it does not
#SSLSessionTickets       off
SSLCompression          off
SSLHonorCipherOrder	on
# there may be an unusual use case for enabling TLS v1.1 or 1 - but I don't know what that would be
SSLProtocol all -SSLv2 -SSLv3 -TLSv1 -TLSv1.1
SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
SSLOptions +StrictRequire
SSLUseStapling on
SSLStaplingResponderTimeout 5
SSLStaplingReturnResponderErrors off
SSLStaplingCache        shmcb:/var/run/ocsp(128000)

#all unknown requests get domain.tld (over http)
<VirtualHost *:80>
    DocumentRoot /var/html
    ServerName domain.tld
    ServerAlias domain.tld *.domain.tld
    ErrorLog logs/domain-error_log
    CustomLog logs/domain-access_log combined
    ServerAdmin user@domain.tld
    <Directory "/var/html">
         Options All
         AllowOverride All
         Order allow,deny
         Allow from all
    </Directory>
</VirtualHost>

SetOutputFilter DEFLATE
AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript text/css text/php

[/etc/httpd/conf.d/z-[sub-]domain-tld.conf]

<VirtualHost *:80>
    ServerName domain.tld
# could use * instead of www if you don't use subdomains for anything special/separate
    ServerAlias domain.tld www.domain.tld
    Redirect permanent / https://domain.tld/
</VirtualHost>

<VirtualHost *:443>
    SSLCertificateFile /etc/letsencrypt/live/domain.tld/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/domain.tld/privkey.pem
# if you put "fullchain.pem" here, you will get an error from ssllabs
    SSLCertificateChainFile /etc/letsencrypt/live/domain.tld/chain.pem
    DocumentRoot /var/www/domain
    ServerName domain.tld
    ErrorLog logs/domain-error_log
    CustomLog logs/domain-access_log \
          "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"
    ServerAdmin user@domain.tld

# could put this in defaults.conf - I prefer it in each site config
    SSLEngine on

<Files ~ "\.(cgi|shtml|phtml|php3?)$">
    SSLOptions +StdEnvVars
</Files>
<Directory "/var/www/cgi-bin">
    SSLOptions +StdEnvVars
</Directory>

SetEnvIf User-Agent ".*MSIE.*" \
         nokeepalive ssl-unclean-shutdown \
         downgrade-1.0 force-response-1.0

    <Directory "/var/www/domain">
         Options All
         AllowOverride All
         Order allow,deny
         Allow from all
    </Directory>

</VirtualHost>

I use the z-....conf naming to ensure all site-specific configs are loaded after everything else (Apache reads the conf.d includes in alphabetical order). That conveniently breaks every site out into its own config file, too.

The config file for a non-https site is much simpler:

<VirtualHost *:80>
    DocumentRoot /var/www/domain
    ServerName domain.tld
    ServerAlias domain.tld *.domain.tld
    ErrorLog logs/domain-error_log
    CustomLog logs/domain-access_log combined
    ServerAdmin user@domain.tld
    <Directory "/var/www/domain">
         Options All
         AllowOverride All
         Order allow,deny
         Allow from all
    </Directory>
</VirtualHost>

If you’re running something like Nextcloud, you may want to turn on Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains" in the <VirtualHost> directive for the site. I haven’t decided yet if I should put this in every SSL-enabled site’s configs or not.
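For reference, that ends up looking something like this inside the site’s SSL vhost (the Header directive requires mod_headers to be loaded):

<VirtualHost *:443>
    # ... existing SSL directives from above ...
    Header always set Strict-Transport-Security "max-age=15552000; includeSubDomains"
</VirtualHost>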

results from running pi-hole for several weeks

I came across pi-hole recently – an ad blocker and DNS service that you can run on a Raspberry Pi under Raspbian (or on any Debian-like system, such as Debian or Ubuntu proper). Using pi-hole should obviate the need for running ad blockers in your browser (so long as you’re on a network that is running its DNS queries through pi-hole).

I’ve seen some people running it on CentOS – but I’ve had issues with that combination, so am keeping to the .deb-based distros (specifically, I’m running it on the smallest droplet size from Digital Ocean with Ubuntu 16.04).

First, the good – it is truly stupidly simple to get set up and running. A little too simple – not because tools should have to be hard to use, but because there’s not much configuration that goes into the automated script. Also, updating the blacklist and whitelist is easy – though they don’t always update via the web portal as you’d hope.

Second, configuration is almost all manual: if you want to use more than two upstream DNS hosts (I personally want to hit both Google and Freenom upstream), for example, there is manual file editing. Or if you want auth enabled for the web portal, you not only need to add it manually, you need to re-add it manually after any update.

Third, the bad. This is not a pi-hole issue per se, but it is still relevant: most devices you would configure to hand out DNS for your home (or maybe even enterprise) network – your cable modem or home wifi router, say – want at least two DNS entries. Some devices will let you set only one DNS provider, but not all will. Which goes toward showing why pi-hole might not be best run outside your network: if you piggy-back both DHCP and DNS off your RPi, rather than off the wireless router you’re probably running, you’re OK. But if your wireless router / cable modem demands multiple DNS entries, you either need to run multiple pi-hole servers somewhere, or you need to accept that not everything will end up going through the hole.

Pi-hole sets up a lighttpd instance (which you don’t have to use) so you can see a pretty admin panel:

[screenshot: the pi-hole admin panel]

I added HTTP (digest) authentication to the admin subdirectory with the following lines in /etc/lighttpd/lighttpd.conf, after following this tutorial:

# add HTTP auth (digest) for the admin pages
auth.backend = "htdigest"
auth.backend.htdigest.userfile = "/etc/lighttpd/.htpasswd/lighttpd-htdigest.user"
auth.require = ( "/admin" =>
    (
        "method"  => "digest",
        "realm"   => "rerss",
        "require" => "valid-user"
    )
)

I also have 4 upstream DNS servers configured in /etc/dnsmasq.d/01-pihole.conf:

server=80.80.80.80
server=8.8.8.8
server=8.8.4.4
server=80.80.81.81

I still need to SSLify the page, but that’s coming.

The 8.8.* addresses are Google’s public DNS. The 80.80.* addresses are Freenom’s. There are myriad more free DNS providers out there – these are just the ones I use.

So what’s my tl;dr on pi-hole? It’s pretty good. It needs a little work to get it more stable between updates – but it’s very close. And I bet if I understood a little more of the setup process, I could probably make a fix to the update script that wouldn’t clobber (or would restore) any custom settings I have in place.

watch your mtu size in openstack

For a variety of reasons related to package versions and support contracts, I was unable to use the Red Hat built KVM image of RHEL 7.2 for a recent project. (The saga of that is worthy of its own post – and maybe I’ll write it at some point. But not today.)

First thing I tried was to build an OpenStack instance off of the RHEL 7.2 media ISO directly – but that didn’t work.

So I built a small VM on another KVM host – with virt-viewer, virt-manager, etc – got it set up and ready to go, then went through the process of converting the qcow image to raw and plopping it into the OpenStack image inventory.

Then I deployed the two VMs I need for my project (complete with additional disk space, yada yada yada). So far, so good.

Floating IP assigned to the app server, proper network for both, static configs updated. Life is good.

Except I couldn’t ssh out from the newly-minted servers to anywhere. Or when ssh did go out, it was super laggy.

I could ssh in, but not out. I could scp out (to some destinations, but not others), but was not getting anywhere near the transfer rates I should have been seeing. Pings worked just fine. So did nslookup.

After a couple hours of fruitless searching, I got hold of one of my coworkers who set up our OpenStack environment: maybe he’d know something.

We spent about another half hour on the phone, when he said, “Hey – what’s your MTU set to?” “I dunno – whatever’s default, I guess.” “Try setting it to 1450.”

Why 1450? What’s wrong with the default of 1500? Theoretically, the whole reason defaults are, well, default, is that they should “just work”. In other words, they might not be optimal for your situation, but they should be more-or-less optimalish for most situations.

Unless, that is, you’re in a “layered networking” environment, like an OpenStack overlay network (apologies if “layered networking” is the wrong term – it’s the one my coworker used, and it made sense to me; networking isn’t really my forte). Fortunately, my colleague had seen an almost-identical problem several months earlier playing with Docker containers. The maximum transmission unit is the cap on the network packet size, which is important to set in a TCP/IP environment – otherwise devices on the network won’t know how much data they can send each other at once.

1500 bytes is the default for most systems, as I mentioned before, but when you have a container / virtual machine / etc hosted on a parent system whose MTU is 1500, the guest can’t use an MTU that large, because the host needs room to attach the extra encapsulation headers it uses to identify which guest gets which data when it comes back (a VXLAN tunnel header, for example, eats about 50 bytes – hence 1450). For small network requests, like the ones ping makes, you’re nowhere near the MTU, so they work without a hitch.

For larger requests, you can (and will) start running into headroom issues – so either the guest MTU needs to shrink, or the host’s needs to grow.

Growing the host’s MTU isn’t a great option in a live environment – because it could disrupt running guests. So shrinking the guest MTU needs to be done instead.
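In case it saves someone the search, here’s roughly what shrinking the guest MTU looks like on a RHEL 7 guest (eth0 is just an example interface name; use whatever ip link shows on your system):

# check the current MTU
ip link show eth0
# change it for the running session
ip link set dev eth0 mtu 1450
# make it persistent across reboots via the interface's ifcfg file
echo 'MTU=1450' >> /etc/sysconfig/network-scripts/ifcfg-eth0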

Hopefully this helps somebody else.

Now you know, and knowing is half the battle.

helping a magpierss-powered site perform better

I rely on MagpieRSS to run one of my websites. (If you'd like to see the basic code for the site, see my GitHub profile.)

One of the drawbacks to Magpie, and to dynamic websites in general, is that they can be bottlenecked by external sources – in the case of Magpie, those sources are the myriad RSS feeds that Datente draws from.

To overcome some of this sluggishness, and to take better advantage of Magpie's caching feature, I recently started a simple cron job to load every page on the site every X minutes – this refreshes the cache and helps keep the reader experience snappy. By scheduling a background refresh of every page, I cut average page load times by nearly a factor of 10! While this is quite dramatic, my worst-performing page was still taking upwards of 10 seconds to load a not-insignificant percentage of the time (sometimes more than a minute!) 🙁

Enter last week's epiphany – since RSS content doesn't change all that often (even crazy-frequent-updating feeds rarely exceed 4 updates per hour), I could take advantage of a "trick", and change the displayed pages to be nearly static (I still have an Amazon sidebar that's dynamically-loaded) – with this stupidly-simple hack, I cut the slowest page load time from ~10-12 seconds to <1: or another 10x improvement!

"What is the 'trick'," you ask? Simple – I copied every page and prefixed it with a short character sequence, and then modified my cron job to still run every X minutes, but now call the "build" pages, redirecting the response (which is a web page, of course) into the "display" pages. In other words, make the display pages static by building them in the background every so often.

If you'd like to see exactly how I'm doing this for one page (the rest are similar), check out this stupidly-short shell script:

(time (/bin/curl -f http://datente.com/genindex.php > ~/temp.out)) 2>&1 | grep real

(The time is in there for my cron reports.)

Compare the run time to the [nearly] static version:

(time (/bin/curl -f http://datente.com/index.php > ~/temp.out)) 2>&1 | grep real
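For completeness, the cron wiring looks something like this (the schedule, temp file, and docroot path are illustrative; yours will differ). Writing to a temp file first and only moving it into place if curl succeeds keeps a failed fetch from clobbering the live page with an empty one:

*/15 * * * * /bin/curl -sf http://datente.com/genindex.php > /tmp/index.build && mv /tmp/index.build /var/www/html/index.php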

how did i never know about .ssh/config?

I’m sure folks have tried to explain this to me before, but it wasn’t until today that it finally clicked – using .ssh/config will save you a world of hurt when managing various systems from a Linux host (I imagine it works on other platforms, too – but I’ve only started using it on CentOS).

Following directions I found here, I started a config file on a server I use as a jump box. In it I have an entry for my web server, and I’ll be adding other frequently-accessed servers to it as time goes on.
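For the curious, an entry looks something like this (the alias, hostname, and key path are made-up examples):

# ~/.ssh/config
Host webserver
    HostName web01.example.com
    User myuser
    Port 22
    IdentityFile ~/.ssh/id_rsa

After that, ssh webserver does the right thing: no more remembering the full hostname, user, and key every time.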

Thanks, nerderati, man pages … and whoever else tried to explain this to me before and I just didn’t grok it.