antipaucity

fighting the lack of good ideas

sshuttle – a simple transparent proxy vpn over ssh

I found out about sshuttle from a random tweet that happened to catch my eye.

Here’s the skinny (from the readme):

  • Your client machine (or router) is Linux, FreeBSD, or MacOS.
  • You have access to a remote network via ssh.
  • You don’t necessarily have admin access on the remote network.
  • The remote network has no VPN, or only stupid/complex VPN protocols (IPsec, PPTP, etc). Or maybe you are the admin and you just got frustrated with the awful state of VPN tools.
  • You don’t want to create an ssh port forward for every single host/port on the remote network.
  • You hate openssh’s port forwarding because it’s randomly slow and/or stupid.
  • You can’t use openssh’s PermitTunnel feature because it’s disabled by default on openssh servers; plus it does TCP-over-TCP, which has terrible performance.

Here’s how I set it up on my Mac

Install homebrew:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"

Install sshuttle (as a regular user):

brew install sshuttle

Test the connection to a server you have:

sudo sshuttle -r <user>@host.tld -x host.tld 0/0 -vv

I also made sure my target server could be connected to via certificate for my local root user – but you can use a password if you prefer.

Check your IP address:

curl https://ipv4.cf

Once you make sure the connection works, Ctrl-C to end the session.

Then set up an alias in your shell’s .profile (for me, it’s .bash_profile):

alias vpn='sudo sshuttle -r <user>@domain.tld -x domain.tld 0/0'

Other things you can do

According to the full docs, there are a lot more things you can do with sshuttle – including running it on your router, thereby VPN’ing your whole LAN through an endpoint! You can also run it in server mode.
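As a smaller-scale example of the same idea, you can tunnel just the remote LAN (and, optionally, DNS lookups) instead of all your traffic – a sketch, assuming the remote network is 10.0.0.0/8 (swap in your own range):

sudo sshuttle --dns -r <user>@host.tld 10.0.0.0/8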

This is a super useful little utility!

basic dockerized jitsi deployment with an apache reverse proxy on centos

After a friend of mine told me he wanted to deploy Jitsi on my main webserver (and I said “sure”), I decided to get it up and running on a new server – both so I knew how to do it, and to avoid the latency of videoconferencing from central North America to Germany and back.

Before I go into how I got it working, let me say that the official Quick Start guide is good – but it doesn’t cover anything but itself.

Here’s the basic setup:

What To Do:

Once you have your new CentOS instance up and running (I used Vultr), here’s everything you need to install:

yum -y install epel-release && yum -y upgrade && yum -y install httpd docker docker-compose screen bind-utils certbot git haveged net-tools mod_ssl

I also installed a few other packages, but that’s because I’m multi-purposing this server for Squid and other things, too.

Enable Apache, firewalld, & Docker:

systemctl enable httpd && systemctl enable docker && systemctl enable firewalld

Now get your swap space set up:

fallocate -l 4G /swapfile && chmod 0600 /swapfile && mkswap /swapfile && swapon /swapfile

Add the following line to the bottom of your /etc/fstab:

/swapfile swap swap defaults 0 0

Restart your VPS:

shutdown -r now

Get your cert from Let’s Encrypt (make sure you’ve already setup appropriate CAA & A records for your domain and any subdomains you want to use):

certbot -t -n --agree-tos --keep --expand --standalone certonly --must-staple --rsa-key-size 4096 --preferred-challenges dns-01,http-01 -m <user>@<domain.tld> -d <jitsi.yourdomain.tld>

Create a root crontab entry to run certbot frequently (I do @weekly ~/renew-le.sh)
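For reference, a minimal sketch of what a renew-le.sh like that could contain – the pre/post hooks are my assumption, since the cert was issued with --standalone and Apache will be holding ports 80/443 by then:

#!/bin/bash
# renew any certs nearing expiry; free ports 80/443 for the standalone challenge
certbot renew --quiet --pre-hook "systemctl stop httpd" --post-hook "systemctl start httpd"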

Go to the home directory of whatever user you plan to run Jitsi as:

su - <jitsi-user>

Begin the Quick Start directions:

  • git clone https://github.com/jitsi/docker-jitsi-meet && cd docker-jitsi-meet
  • mv env.example .env
  • Change the timezone in .env from Europe/Amsterdam if you want it to show up in a sane timezone (like Etc/UTC)
  • mkdir -p ~/.jitsi-meet-cfg/{web/letsencrypt,transcripts,prosody,jicofo,jvb}
  • docker-compose up -d
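Before moving on to Apache, it’s worth confirming the containers actually came up – you should see the web, prosody, jicofo, and jvb services listed:

docker-compose ps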

Now configure Apache for SSL. Start with this reference I posted.

Then, in the [sub]domain-specific conf file z-[sub]domain-tld.conf, add proxy and authentication lines (so that only people you allow to use your video conference can actually use it):

# hand everything off to the Jitsi web container listening on localhost:8000
ProxyPreserveHost On
ProxyPass / http://localhost:8000/ nocanon
ProxyPassReverse / http://localhost:8000/
ProxyRequests Off
ServerAdmin warren@warrenmyers.com
AllowEncodedSlashes NoDecode
# require a valid login before anything gets proxied
<Proxy http://localhost:8000/*>
    Order deny,allow
    Allow from all
    AuthType Basic
    AuthName "Password Required"
    AuthUserFile /etc/httpd/.htpasswd
    Require valid-user
</Proxy>
RewriteEngine On
RewriteRule ^/meetwith/(.*)$ http://%{HTTP_HOST}/$1 [P]
ProxyPassReverseCookiePath /meetwith /

Reload your configs, and make sure they’re happy, fixing any errors that may exist:

apachectl graceful

Set up at least one user who’ll be able to access the site:

htpasswd -B -c /etc/httpd/.htpasswd <user>
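To add more users later, drop the -c flag so the existing file isn’t overwritten:

htpasswd -B /etc/httpd/.htpasswd <anotheruser>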

You should also configure firewalld to allow only what you want (http, https, ssh):

firewall-cmd --permanent --zone=public --add-service=http && firewall-cmd --permanent --zone=public --add-service=https && firewall-cmd --permanent --zone=public --add-service=ssh && firewall-cmd --reload

With any luck, when you now navigate to https://[sub.]domain.tld in your web browser, and enter your username and password you created with htpasswd, you’ll get the Jitsi welcome page!


automating mysql backups

I want to backup all of the MySQL databases on my server on a routine basis.

As I started asking how to get a list of all databases in MySQL on Stack Overflow, I came across this previous SO question, entitled “Drop All Databases in MySQL” (the best answer for which, in turn, republished the kernel from this blog post). Thinking that sounded promising, I opened it and found this little gem:

mysql -uroot -ppassword -e "show databases" | grep -v Database | grep -v mysql | grep -v information_schema | gawk '{print "drop database " $1 ";select sleep(0.1);"}' | mysql -uroot -ppassword

That will drop all databases. No doubt about it. But that’s not what I want to do, so I edited the leading command down to this:

mysql -uroot -e "show databases" | grep -v Database | grep -v mysql | grep -v information_schema | grep -v test | grep -v OLD | grep -v performance_schema

Which gives back a list of all the databases created by a user.

Now I need a place to keep the dumps – /tmp sounded good.

And each database should be in its own file, so I need something like mysqldump $db > $db.identifier.extension.

Made the ‘identifier’ the output of date +%s to get seconds since the Unix epoch (which is plenty unique enough for me).
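So a dump of a hypothetical database named wordpress, taken at Unix time 1480618722, would land in something like:

/tmp/wordpress.dump.1480618722.sql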

All of which adds up to this one-liner:

for db in `mysql -uroot -e "show databases" | grep -v Database | grep -v mysql| grep -v information_schema| grep -v test | grep -v OLD | grep -v performance_schema`; do mysqldump $db > /tmp/$db.dump.`date +%s`.sql; done

Plop that puppy in root’s crontab on a schedule that works for you, and you have a hands-free method to back up databases.

Thought about using xargs, but I couldn’t come up with a quick/easy way to uniquely identify each file in the corresponding output.

Might consider adding some compression and/or a better place for dumps to live and/or cleaning-up ‘old’ ones (however you want to determine that), but it’s a healthy start.
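Compression, for example, is a one-pipe change inside the loop – a sketch, assuming gzip is available, replacing the mysqldump redirection with:

mysqldump $db | gzip > /tmp/$db.dump.`date +%s`.sql.gz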


You can also do mysqldump --all-databases if you think you want to restore all of them simultaneously … I like the idea of individually dumping them for individual restoration / migration / etc.
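For reference, those two styles look roughly like this (the file names are just illustrative; an individual restore also assumes the target database already exists – mysqladmin create it first if it doesn’t):

mysqldump --all-databases > /tmp/all-dbs.`date +%s`.sql
mysql somedb < /tmp/somedb.dump.1480618722.sql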

The full script I am using (which does include backups, etc):

#!/bin/bash
############################

date

echo 'Archiving old database backups'

# roll the previous run's dumps into a timestamped tarball (in the current directory), then clear them out
tar zcf mysql-dbs.`date +%s`.tar.gz ~/sqlbackups
rm -f ~/sqlbackups/*

date

echo 'Backing up MySQL / MariaDB databases'

# dump every user-created database to its own timestamped file
for db in `mysql -uroot -e "show databases" | grep -v Database | grep -v mysql | grep -v information_schema | grep -v test | grep -v OLD | grep -v performance_schema`; do mysqldump $db > ~/sqlbackups/$db.dump.`date +%s`.sql; done

echo 'Done with backups. Files can be found in ~/sqlbackups'

date

change your default font in windows 10

Starting from a tutorial I found recently, I want to share how to change your default font in Windows 10 – but in a shorter edition than that long one (and in, I think, a less-confusing way).

Back in the Good Ole Days™, you could easily change system font preferences by right-clicking on your desktop, and going into the themes and personalization tab to set whatever you wanted however you wanted (this is also where you could turn off (or back on) icons on your desktop (like My Documents), set window border widths, colors for everything, etc).

Windows 10 doesn’t let you do that through any form of Control Panel anymore, so you need to break out the Registry Editor*.

0th, Start regedit

WindowsKey-R brings up the Run dialog – type regedit to start the Registry Editor


NOTE: you should back-up any keys you plan to edit, just in case you forget what you did, want to revert, or make a mistake.

1st, Navigate to the right key area

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\FontSubstitutes

and

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Fonts

are where you’ll need to be to make these changes.

2nd, Blank entries for Segoe UI

For all of the “Segoe UI” entries in Fonts, change their Data field to blank (“”)

3rd, Add a Segoe UI substitute font

In FontSubstitutes, click Edit -> New -> String Value. Name it “Segoe UI” (without the quotes). In the “Value data” field, enter your preferred font name. I used Lucida Console.


4th, Log out, or reboot, and log in again to see your changes take effect.

* You can also download my registry keys, which have the substitution already done here. And you can pick any other font instead of Lucida Console you like – just edit the key file in your favorite text editor (I like TextPad) before merging into your Registry.
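If you’d rather build the .reg file by hand instead of downloading mine, a rough sketch of what it might look like is below (the exact set of “Segoe UI” entries under Fonts varies by Windows build – blank whichever ones you have):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Fonts]
"Segoe UI (TrueType)"=""
"Segoe UI Bold (TrueType)"=""
"Segoe UI Italic (TrueType)"=""

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\FontSubstitutes]
"Segoe UI"="Lucida Console"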

turn on spf filtering with postfix and centos 7

After running my new server for a while, I was noticing an unusually high level of bogus email arriving in my inbox – mail that was being spoofed to look like it was coming from myself (to myself).

After a great deal of research, I learned there is a component of the DNS specification that allows for TEXT or SPF records. Sender Policy Framework was developed to help mail servers identify whether or not messages are being sent by authorized servers for their representative domains.

While there is a huge amount of stuff that could be added into an SPF record, what I am using for my domains is:

"v=spf1 mx -all"

Note: some DNS providers (like Digital Ocean) will make you use a TEXT record instead of a dedicated SPF record (which my registrar / DNS provider Pairnic supports).

If they require it be via TEXT record, it’ll look something like this: TXT @ "v=spf1 a include:_spf.google.com ~all"
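Whichever record type your DNS provider requires, you can sanity-check what’s actually being published with dig (substituting your own domain):

dig +short TXT domain.tld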

Starting with this old how-to I found for CentOS 6, I added the policy daemon for Postfix (though it’s now in Python and not Perl) thusly:

yum install pypolicyd-spf

(I already had the EPEL yum repository installed – to get it setup, follow their directions, found here.)

Then I edited the master.cf config file for Postfix, adding the following at the bottom:

policy unix - n n - 0 spawn user=nobody argv=/bin/python /usr/libexec/postfix/policyd-spf

Note: those are actually tabs in my config file – but spaces work, too.
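Note that Postfix also has to be told to consult that service, or it will never be called – a minimal sketch of the main.cf side, assuming the service name “policy” used above and an otherwise-default restriction list:

policy_time_limit = 3600
smtpd_recipient_restrictions =
    permit_mynetworks,
    permit_sasl_authenticated,
    reject_unauth_destination,
    check_policy_service unix:private/policy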

When you’re done with your edits and record additions, restart Postfix:

systemctl restart postfix

Then you’ll see messages like this in your /var/log/maillog file:

Apr 23 18:58:59 khopesh postfix/smtpd[18199]: NOQUEUE: reject: RCPT from unknown[197.27.40.169]: 550 5.7.1 <warren@datente.com>: Recipient address rejected: Message rejected due to: SPF fail - not authorized. Please see http://www.openspf.net/Why?s=mfrom;id=warren@datente.com;ip=197.27.40.169;r=warren@datente.com; from=<warren@datente.com> to=<warren@datente.com> proto=ESMTP helo=<[197.27.40.169]>

And if you follow the directive to go visit the “Why” page on OpenSPF, you’ll see something like this explanation:


Why did SPF cause my mail to be rejected?

What is SPF?

SPF is an extension to Internet e-mail. It prevents unauthorized people from forging your e-mail address (see the introduction). But for it to work, your own or your e-mail service provider’s setup may need to be adjusted. Otherwise, the system may mistake you for an unauthorized sender.

Note that there is no central institution that enforces SPF. If a message of yours gets blocked due to SPF, this is because (1) your domain has declared an SPF policy that forbids you to send through the mail server through which you sent the message, and (2) the recipient’s mail server detected this and blocked the message.

warren@datente.com rejected a message that claimed an envelope sender address of warren@datente.com. warren@datente.com received a message from 197.27.40.169 that claimed an envelope sender address of warren@datente.com.

However, the domain datente.com has declared using SPF that it does not send mail through 197.27.40.169. That is why the message was rejected.


improve your entropy pool in linux

A few years ago, I ran into a known issue with one of the products I use that manifests when the Red Hat Linux server it’s running on has a low entropy pool. And, as highlighted in that question, the steps I found 5 years ago didn’t work for me. (It turns out modifying the t parameter from ‘1’ to ‘.1’ did work – rngd -r /dev/urandom -o /dev/random -f -t .1 – but I digress; besides, the ‘t’ option is no longer valid in CentOS 7.)

In playing around with the Mozilla-provided SSL configurator, I noticed a line in the example SSL config that referenced “truerand”. After a little Googling, I found an open-source implementation called “twuewand”.

A little more Googling about adding entropy led me to this interesting tutorial from Digital Ocean for “haveged” (which, interestingly enough, allowed me to answer a 6-month-old question on Server Fault about CloudLinux).

Haveged “is an attempt to provide an easy-to-use, unpredictable random number generator based upon an adaptation of the HAVEGE algorithm. Haveged was created to remedy low-entropy conditions in the Linux random device that can occur under some workloads, especially on headless servers.”

And twuewand “is software that creates hardware-generated random data. It accomplishes this by exploiting the fact that the CPU clock and the RTC (real-time clock) are physically separate, and that time and work are not linked.”

For workloads that require lots of entropy (generating SSL keys, SSH keys, PGP keys, and pretty much anything else that wants lots of random (or strong pseudorandom) seeding), the very real problem of running out of entropy (especially on headless boxes or virtual machines) is something you can face quite easily / frequently.

Enter solutions like OpenRNG, which are hardware entropy generators (that one is a USB dongle – see also this skh-tec post). Those are awesome – unless you’re running in cloud space somewhere, or even just a “traditional” virtual machine.

One of the funny things about getting “random” data is that it’s actually very very hard to get. It’s easy to describe, but generating “truly” random data is incredibly difficult. (If you want to have an aneurysm (or you’re like me and think this stuff is unendingly fascinating), go read the Wikipedia entry on “Cryptographically Secure Pseudo Random Number Generator”.)

If you’re in a situation, though, like I was (and still am), where you need to maintain a relatively high quantity of fairly decent entropy (probably close to CSPRNG level), use haveged. And run twuewand occasionally – at the very least when starting Apache (at least if you’re running HTTPS – which you should be, since it’s so easy now).
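On CentOS 7 (with EPEL already enabled), getting haveged running and seeing the effect on the pool is roughly:

yum -y install haveged
systemctl enable haveged && systemctl start haveged
cat /proc/sys/kernel/random/entropy_avail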

how to turn a google+ community into a quasi “mailing list”

Spurred by a recent question from an acquaintance in town, I asked on Google+ whether or not you can enable emailed notifications for a Community. This led to the elaborate Settings page for G+.

It turns out that if you combine enabling a Community’s “Community notifications” (under the specific Community’s settings, which you find by clicking the vertical ellipsis button on the Community page) with the following tree in your general Google+ settings – Notifications -> Email -> Communities -> “Shares something with a community you get notifications from” – you get a “mailing list” of sorts from your Community, which, niftily enough, also allows you to comment on the post via email (at least on the first notification of said post)!