antipaucity

fighting the lack of good ideas

do you leak?

It would seem I have configured OpenVPN, Squid proxy, and, to a lesser extent, Pi-hole well – none of the major sites that report IP, DNS, and other connection-related security issues find anything out of the ordinary when I’m either running “just” proxied, or VPN, or VPN+proxy.

You should check yourself here:

  1. https://ipleak.net
  2. http://ip-check.info/?lang=en (ironic this site isn’t serving itself over https)
  3. https://doileak.com
  4. https://whatismyip.com
  5. https://browserleaks.com/ip

And, of course, if you just want to see what your public IP address is, go hit my service – IPv4.cf

rethinking pi-hole (again)

About 2 years ago, I started running Pi-hole as a DNS resolver and ad-blocker. Then last year, I ditched it.

After seeing a recent post by Troy Hunt, though, I thought it might be worth revisiting – but I needed a better way to control how it worked.

Enter OpenVPN – a service I already run on three endpoints. Here’s what I did:

Install Pi-hole per the usual (curl -sSL https://install.pi-hole.net | bash if you’re feeling brave; download the script with curl -sSL https://install.pi-hole.net, inspect it, then run it if you’re feeling a little more wary).

This time, though, I set my upstream DNS providers to Cloudflare (1.1.1.1) and Quad9 (9.9.9.9) instead of Freenom and Google.

I also did a two-step install – once with Pi-hole listening on the primary network interface on my OpenVPN endpoint (ie the public IP), and then, once I made sure all was happy, I flipped it to listen on tun0 – the OpenVPN-provided interface. This means Pi-hole can only hear DNS queries if you’re connected to the VPN.

Why the change from how I’d done it before? Two reasons (at least):

First, if you leave Pi-hole open to the world, it can be abused as a reflector in DNS amplification attacks. That is muy no bueno.

Second, sometimes I don’t care about ads – sometimes I do. I don’t care, for example, most of the time when I’m home. But when I’m traveling or on my iPhone? I care a lot more then.

Bonus – since it’s only “working” when connected to my VPN, it’s super easy to check if a site isn’t working because of Pi-hole, or because it just doesn’t like my browser (hop off the VPN, refresh, and see if all is well that wasn’t when on the VPN).

Changes you need to make to your OpenVPN’s server.conf:


push "dhcp-option DNS 10.8.0.1"

This ensures clients use the OpenVPN server as their DNS resolver. (Note: 10.8.0.1 might not be your OpenVPN server’s tunnel address; adjust as necessary.) Restart OpenVPN after making this change.
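For context, here’s a minimal sketch of the relevant server.conf lines, assuming the common 10.8.0.0/24 tunnel subnet (yours may differ):

```
# OpenVPN hands out client addresses from this subnet; the server takes 10.8.0.1
server 10.8.0.0 255.255.255.0
# Tell connecting clients to use the server's tunnel address for DNS
push "dhcp-option DNS 10.8.0.1"
```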

My setupVars.conf for Pi-hole:


PIHOLE_INTERFACE=tun0
IPV4_ADDRESS=10.8.0.1/24
IPV6_ADDRESS=
QUERY_LOGGING=true
INSTALL_WEB_SERVER=true
INSTALL_WEB_INTERFACE=true
LIGHTTPD_ENABLED=false
WEBPASSWORD=01f3217c12bcdf8aa0ca08cdf737f99cd68a46dbdc92ce35fd75f39ce2faaf81
DNSMASQ_LISTENING=single
PIHOLE_DNS_1=1.1.1.1
PIHOLE_DNS_2=1.0.0.1
PIHOLE_DNS_3=9.9.9.9
DNS_FQDN_REQUIRED=true
DNS_BOGUS_PRIV=true
DNSSEC=false
CONDITIONAL_FORWARDING=false

Following a handful of tutorials and walk-throughs, I tried getting lighttpd to listen only on port 443 so I could use Let’s Encrypt’s SSL certs, but was unsuccessful. So I disabled lighttpd, and only start it by hand when I want to check on my Pi-hole’s status.

Speaking of which, as I write this, here is what the admin console looks like:

admin console screenshot

Hope this helps you.

finally starting to get some good docs amassed

I had a decent library of documentation, templates, hand-offs, slide decks, etc in my pre-Splunk consulting life (technically, I still have them).

It’s nice to be finally getting a decent collection to draw from for my customers in my post-automation consulting life.

you can’t disaggregate

Had a customer recently ask how to disaggregate a Splunk search that had aggregated fields, because they export to CSV horribly.

Here’s the thing.

You can’t disaggregate aggregated fields.

And there’s a Good Reason™, too: aggregation, by definition, is a one-way street.

You can’t un-average something.

Average is an aggregation function.

So why would you think you could disaggregate any other Splunk aggregation operation (like values or list)?

You can’t.

And you shouldn’t be able to (as nice as the theoretical use case for it might be).

So what is a body to do when you have a use case for a clean-to-export report that looks as if it had been aggregated, but every field in each row cleanly plunks-out to a single comma-separated value?

Here’s what I did:

{parent search}
| join {some field that'll exist in the subsearch}
[ search {parent search}
 | stats {some stats functions here} ]
| fields - {whatever you don't want}
| sort - {fieldname}

What does that end up doing?

The subsearch is identical to the outer search, plus whatever filtering/where/|stats you might want/need to do.

Using the resultant, filtered set, join on a field you know will be unique [enough].

Then sort however you’d like, and remove whatever fields you don’t want in the final display.
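To make that concrete, here’s a hypothetical filled-in version of the pattern (the sourcetype and every field name here are made up for illustration):

```
sourcetype=auth action=login
| join user
    [ search sourcetype=auth action=login
      | stats count as login_count by user ]
| fields - _raw
| sort - login_count
```

Each raw login event now carries the per-user aggregate alongside it, so every field in every row exports as a plain single value.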


Of course, be sure your subsearch will complete in under 60 seconds and/or return fewer than 10,000 lines (unless you’ve modified your Splunk limits.conf).
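For reference, those limits live in the [subsearch] stanza of limits.conf on the search head – a sketch with what I believe are the stock values:

```
[subsearch]
# maximum number of results a subsearch returns
maxout = 10000
# maximum runtime of a subsearch, in seconds
maxtime = 60
```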

stats values vs stats list in splunk

Splunk’s | stats functions are incredibly useful and powerful.

There are two, list and values, that look identical…at first blush.

But they are subtly different. Here’s how they’re not the same.

values is an aggregating, uniquifying function.

list is an aggregating, not uniquifying function.

“Whahhuh?!” I hear you ask.

Here’s a prime example – say you’re aggregating all user values by the field IP_addr.

Your search might contain the following chunk: | stats values(user) as user by IP_addr. So for each unique IP address, you will collate a uniquified list of users. Maybe you have the following two IP addresses: 10.10.10.10 & 10.10.20.10. And you have the following user–IP address pairings: kingpin11 10.10.10.10, fergus97 10.10.20.10, gerfluggle 10.10.10.10, kingpin11 10.10.10.10 (again), jbobgorry 10.10.10.10.

values will aggregate the users associated with each IP address, deduplicated: 10.10.10.10 gets gerfluggle, jbobgorry, kingpin11; 10.10.20.10 gets fergus97.

That’s nice – it’s pretty.

But it exports in lousy form if you need to further process the data in another tool (eg Microsoft Excel).

When Splunk exports those results in a CSV, instead of getting a nice, processable file, you get tabs separating what would otherwise be individual items that have all been grouped into one field.

Enter list.

list doesn’t uniquify the values given to it, so while you still only get one row per IP address (since that was our by clause in the snippet above), you get as many user entries listed as there were events (in this example).

This makes for an exportable, more processable set of results that a tool like Excel can ingest to perform further analysis with relatively little reformatting needed.
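If it helps to see the two behaviors side by side, here’s a small Python sketch that mimics them (the user/IP pairs are the hypothetical ones modeled on the example above; Splunk does this natively – this is only to illustrate the dedup difference):

```python
# Hypothetical (user, IP_addr) event pairs from the example above
events = [
    ("kingpin11",  "10.10.10.10"),
    ("fergus97",   "10.10.20.10"),
    ("gerfluggle", "10.10.10.10"),
    ("kingpin11",  "10.10.10.10"),  # duplicate event for the same user
    ("jbobgorry",  "10.10.10.10"),
]

def stats_list(pairs):
    """Mimic `| stats list(user) by IP_addr`: keep every occurrence, in order."""
    out = {}
    for user, ip in pairs:
        out.setdefault(ip, []).append(user)
    return out

def stats_values(pairs):
    """Mimic `| stats values(user) by IP_addr`: deduplicate and sort."""
    return {ip: sorted(set(users)) for ip, users in stats_list(pairs).items()}

print(stats_list(events)["10.10.10.10"])    # four entries, duplicate kept
print(stats_values(events)["10.10.10.10"])  # three entries, deduped
```

The list version is what exports cleanly row-by-row; the values version is the prettier, deduplicated view.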

Come back tomorrow for how to get the export to work “out of the box”.

about burning bridges

While you should never be the one to burn the bridge of a relationship, sometimes you need to be aware that the other person has placed dynamite around the joints, soaked the whole shebang in gasoline, and is walking around on top with a lit road flare – and you don’t want to be around when the conflagration begins*.


* Though – sitting far enough away that the shards and embers won’t hit you while you chant, “burn, baby, burn” can be quite entertaining

manning is doing something similar to my bucket proposal

Manning Publishers has a liveBook offering.

And it allows for the type of mini transactions (through their self-hosted “token” system) that I proposed when writing about how I’d dumped Pi-hole last year.

Quoting from their recent announcement email

Book publishers follow a simple rule: put your content behind a solid paywall. At Manning, we believe you should be able to see before you buy. liveBook search and Manning Tokens make the paywall porous. Our new timed unlock feature moves the whole wall further back!

That’s pretty dang cool, Manning.