antipaucity

fighting the lack of good ideas

never be the one to burn the bridge

But always carry a can of gasoline and some matches – because sometimes you do need to be the one to break the relationship.

a fairly comprehensive squid configuration for proxying all the http things

After combing through the docs and several howtos on deploying the Squid proxy server – none of which really did everything I wanted, of course – I’ve finally arrived at the configuration below.

Installing Squid is easy-peasy – it’s in the standard package repos for the major platforms (CentOS/Fedora/RHEL, Ubuntu/Debian, etc) – so just run yum install squid or apt install squid on your platform of choice (my exact install command on Ubuntu 18.04 was apt -y install squid net-tools apache2-utils).
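
For reference, those install commands boil down to one of the following (package names per the distros mentioned above; on yum-based systems you may also want httpd-tools, which provides htpasswd):

yum install squid                                # CentOS / Fedora / RHEL
apt -y install squid net-tools apache2-utils     # Debian / Ubuntu (my exact command on 18.04)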

What I wanted was an “open” (password-protected) proxy server with disk-based caching enabled that would cover all of the ports I could reasonably expect to run into.

Why “open”? Because I want to be able to turn it on and off on various mobile devices which may (or may not) have stable-ish public IPs.

Here is the config as I have it deployed, minus sensitive/site-specific items (usernames, passwords, port, etc), of course:


A working /etc/squid/squid.conf

acl SSL_ports port 443
acl SSL_ports port 8443
acl Safe_ports port 80		# http
acl Safe_ports port 21		# ftp
acl Safe_ports port 443		# https
acl Safe_ports port 1025-65535	# unregistered ports
acl Safe_ports port 280		# http-mgmt
acl Safe_ports port 488		# gss-http
acl Safe_ports port 777		# multiling http
acl Safe_ports port 8080
acl CONNECT method CONNECT

auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/.htpasswd
auth_param basic children 15
# after "realm", put some descriptive, clever, or otherwise-identifying string that will appear when you login
auth_param basic realm Insert Incredibly Witty Title Here
auth_param basic credentialsttl 5 hours
acl password proxy_auth REQUIRED
http_access allow password

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

#http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
# commented-out to allow "open" use (ie password authenticated)
#http_access deny all

# Squid normally listens to port 3128
# change this line if you want it to listen on some other port
# http_port 

# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /var/spool/squid 100 16 256
# format is <scheme> <directory> <size in MB> <L1 dirs> <L2 dirs>
cache_dir ufs /tmp/squid-cache 768 16 256

# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid

# Add any of your own refresh_pattern entries above these.
#
refresh_pattern ^ftp:		1440	20%	10080
refresh_pattern ^gopher:	1440	0%	1440
refresh_pattern -i (/cgi-bin/|\?) 0	0%	0
refresh_pattern (Release|Packages(.gz)*)$      0       20%     2880
refresh_pattern .		0	20%	4320

via off
forwarded_for off

request_header_access Allow allow all 
request_header_access Authorization allow all 
request_header_access WWW-Authenticate allow all 
request_header_access Proxy-Authorization allow all 
request_header_access Proxy-Authenticate allow all 
request_header_access Cache-Control allow all 
request_header_access Content-Encoding allow all 
request_header_access Content-Length allow all 
request_header_access Content-Type allow all 
request_header_access Date allow all 
request_header_access Expires allow all 
request_header_access Host allow all 
request_header_access If-Modified-Since allow all 
request_header_access Last-Modified allow all 
request_header_access Location allow all 
request_header_access Pragma allow all 
request_header_access Accept allow all 
request_header_access Accept-Charset allow all 
request_header_access Accept-Encoding allow all 
request_header_access Accept-Language allow all 
request_header_access Content-Language allow all 
request_header_access Mime-Version allow all 
request_header_access Retry-After allow all 
request_header_access Title allow all 
request_header_access Connection allow all 
request_header_access Proxy-Connection allow all 
request_header_access User-Agent allow all 
request_header_access Cookie allow all 
request_header_access All deny all
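
Before going any further, it’s worth sanity-checking the file – Squid can parse the config and report errors without (re)starting the service:

# parse /etc/squid/squid.conf and report any errors
squid -k parse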

Finalize your Squid server system settings

Things you need to do once you’ve done the above (prepend sudo to each command below if you’re not logged-in as root); a consolidated snippet follows the list:

  1. Enable Squid to start at boot: systemctl enable squid
  2. Create the cache directories: squid -z
  3. Create a DNS entry for your proxy host (if you want it usable outside your home network, and don’t want to reference it by IP address only)
  4. Create the authentication file (located at /etc/squid/.htpasswd in this example): touch /etc/squid/.htpasswd
  5. Create a username and password: htpasswd -c /etc/squid/.htpasswd <username> (don’t forget this username/password combination!)
  6. Start Squid: systemctl start squid
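
For convenience, here are the same steps as one copy-and-pasteable sequence (run as root, or prefix each line with sudo; the username below is only a placeholder – substitute your own, and add the DNS entry separately if you need one):

systemctl enable squid                     # start Squid at boot
squid -z                                   # create the cache directories
touch /etc/squid/.htpasswd                 # create the authentication file
htpasswd -c /etc/squid/.htpasswd myuser    # set a username & password (placeholder username)
systemctl start squid                      # start Squid now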

Configure your browser to use your new proxy

Here’s where you need to go and what you need to change in Firefox:

  1. Navigate to about:preferences
  2. Click on Settings… under Network Proxy
  3. Enter your proxy host details (hostname or IP address and the port you chose)

To verify your proxy settings are correct, visit IPv4.cf first with the proxy off, and then again with it on.
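
If you’d rather check from a terminal first, curl can do the same comparison – a minimal sketch, with a hypothetical proxy hostname, port, and credentials (substitute your own):

curl -s https://ipv4.cf/                                                                     # direct
curl -s -x http://proxy.example.com:3128 --proxy-user 'myuser:mypassword' https://ipv4.cf/   # via the proxy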

If your reported IP address changes between visits (with the second check being your Squid server IP) – congratulations! You have successfully deployed a Squid proxy caching server.

6 movies

I want you to watch these 6 movies (in this order):

Watch them with a notebook and pen or pencil handy (yes: use physical writing and recording tools; don’t use your laptop, tablet, or phone).

Write things you see that jump out at you about the story – how it is told, what is explicitly stated, what is implicitly hinted-at, what is alluded-to, etc.

Start with the very first thing you see after the title credits finish.

I’ll give you a hint, and highlight a small handful of items from the first ~15 minutes of The Wizard of Oz:

  1. Everything is gray (well, sepia)
  2. Dorothy is whiny
  3. Dorothy is obsessed with her dog
    1. Why is Toto going anywhere with Dorothy off the farm grounds?
    2. Especially, why is Dorothy taking Toto near Ms Gulch’s cats?
  4. Dorothy is an orphan
    1. Auntie Em and Uncle Henry are on the farm, but
    2. no parents anywhere in sight
  5. When Dorothy wakes up in Oz, everything is in eye-popping, brilliant Technicolor

Tell me what you find – I’m intensely intrigued to see how your lists compare to mine.

ben thompson missed *a lot* in his microsoft-github article

Ben Thompson is generally spot-on in his analysis of industry goings-on. But he missed a lot in The Cost of Developers this week.

Here’s what he got right about this acquisition:

  • Developers can be quite expensive (though, $7.5B (in equity) is only ~$265 per user (which is pretty cheap))
  • Microsoft is betting that a future of open-source, cloud-based applications that exist independent of platforms will be a large-and-increasing share of the future
  • That there is room in that future for a company to win by offering a superior user experience for developers directly, not simply exerting leverage on them
  • Microsoft is the best possible acquirer for GitHub
  • GitHub, having raised $350 million in venture capital, was not going to make it as an independent entity
  • Purely enterprise-focused companies like IBM or Oracle would be tempted to wring every possible bit of profit out of the company
  • What Microsoft wants is much fuzzier: it wants to be developers’ friend
  • [Microsoft] will be ever more invested in a world with no gatekeepers, where developer tools and clouds win by being better on the merits, not by being able to leverage users

And here’s what he missed and/or got wrong:

  • [Microsoft] is in second place in the cloud. Moreover, that second place is largely predicated on shepherding existing corporate customers to cloud computing; it is not clear why any new company — or developer — would choose Microsoft
  • It is very hard to imagine GitHub ever generating the sort of revenue that justifies this purchase price

Some of the below I commented on Google+ yesterday. The rest is in response to more idiocy & paranoia I’ve seen on some technical community mailing lists (bet you didn’t know those still existed) in the last 24 hours, or in response to specific items in Ben’s essay that are shortsighted, misguided, or incredibly wrong.

  • If you cannot see why new users, developers, and companies would go to Microsoft Azure offerings, you don’t understand what they’re doing
    • AWS is huge – but Azure and Google Cloud Platform (GCP) have huge technical (and economic) advantages
    • Amazon likes to throw new cloud features at the wall like spaghetti to see what sticks; Google and Microsoft have clearly thought through this whole cloud business, and their offerings make incredibly solid business & technical sense over AWS in most use cases (the only [occasional] real exception being “but we already use AWS”). Have you not seen the Azure IoT offerings?
  • GitHub has not yet been profitable, and would probably have IPO’d (poorly) in the next year to keep from running out of cash
    • Arguably, GitHub would never become profitable on their own
  • Microsoft has a long history of contributing to OSS projects (most-to-all of which are on GitHub)
    • If they were going to acquire anyone in this space, GitHub is the only one that makes any sense
  • (This was tangentially-mentioned in Ben’s essay by linking to his analysis of the Microsoft-LinkedIn acquisition in 2016.) Alongside the LinkedIn acquisition a couple years back (which has an obvious play for an eventual IDaaS (fully-and-forever integration with Office365 regardless of where you work, everything follows automagically)), offering better integrations with their existing tools (Visual Studio already had git integrations – they should only get better with this acquisition) is a Good Thing™ for devs and end users alike (because making those excellent developer tools even better means they’ll be better whether they’re using GitHub, Bitbucket, GitLab, etc)
  • The more-or-less instantaneous expansion of offered items in the Windows Store (some kind of cloud-based/distributed build-on-demand for software when you want it (and which fork you want)) to “everything” on GitHub is a brilliant possibility
    • In light of Apple’s announcement yesterday about enabling iOS apps to come to macOS over the next releases of iOS and macOS, this should have been at the forefront of most people’s thought processes (after the keynote was done, of course)
    • Through this acquisition, it’s [probably] likely more developers will use Microsoft APIs (.NET, etc) in their projects
  • Echoing Ballmer’s chant, “Developers! Developers! Developers!”, while Microsoft doesn’t really care about Windows anymore (just look at the recent reorg), it is still THE most widespread end-user platform in the world – and bringing millions more developers “into the fold” is genius
    • Even if some small percentage will opt to go elsewhere, most won’t change because, well, change is hard
    • All the developers Microsoft had that weren’t yet using GitHub will have a huge reason to start
  • Microsoft has typically been a buy-don’t-build shop (there are exceptions, but look at the original DOS, PowerPoint, SQL Server, Skype, their failed attempt at Yahoo!, etc): they could have spent 5-10x as much building something “as good as” GitHub, or they could buy it; they opted for the “buy” – via equity, note, and not cash. That’s smart from several business viewpoints, not least of which is the “enforced” interest the GitHub subsidiary (with its new CEO, etc) will have in continuing to ensure it is The place for developers to put their projects – after all, if that drops considerably, the value of the equity GitHub got in the deal is going to drop, too

remembering to forget

As a society, we have forgotten how to forget. We are addicted to storing everything forever. Why?

New Atlas had an article recently on the demise of skyscrapers in favor of new ones which starts off,

The Great Pyramid of Giza has stood at a height of around 460 feet for 4,500 years, but these days we are ripping down tall structures without even batting an eyelid. A new study looking at the average lifespan of demolished skyscrapers illustrates just how quick we are to pull the trigger, raising the question of how we could reimagine tower design so that they last centuries rather than decades.

I ask, first: why should we design things to “last centuries rather than decades”?

Yes, the future impact of decisions made today must be carefully evaluated (“concrete cannot be recycled, and most of the tallest buildings in the world use concrete for their main structural system”).

But designing for “centuries” is not the answer.

Or, at least, it’s not *the* answer.

It’s not a panacea – though there may be some occasional use cases for expecting a structure to last generationally.

But since time immemorial, buildings have mostly been built with at least an unconscious knowledge they would not exist “forever”.

Sure, there are interesting historical sites (such as these now-destroyed Mayan ruins) that we might have liked to keep. But reuse of old materials is part and parcel of civilizational progress.

volvo moving towards waze-like functionality

I recently shared a TechCrunch story in a G+ group I’m in about Volvo debuting a new service that uploads live traffic data from its vehicles and sends it to other Volvos so they can avoid problem areas.

Or…a self-built and -hosted Waze.

You may recall that I wrote some about these kinds of things starting back in 2010.

Why, exactly, Volvo not only thinks it should do this, but also expects its customers to want the manufacturer doing it, is something I don’t understand right now.

Sure, it’s an interesting technical challenge – but it’s not exactly novel…except in the sense of the vehicle doing this for you instead of you doing it via some mobile app or other.

If this is something drivers must opt in to use, that’s OK. If you can opt out, it’s OK, too.

But if it’s not optional – this is a major privacy concern: one I’m sure many people would wave off without even stopping to take a breath, but a major privacy concern nonetheless.

don’t use symlinks unless you *know* you can

I first ran into this on Solaris in the context of [then] Opsware SAS (later HP SA, now owned by Micro Focus). Bind mounts might be OK … but unless the tarball has symlinks included, don’t use them – symlinks get traversed differently than “real” directories.

In short, when directory traversals are done, some processes look at the permissions bits – and if the first character is not a d (for a symlink, it’s always an l), they can fail.

Symlinking files is [possibly] a different story: though permissions are usually wonky on symlinks (most often lrwxrwxrwx vs -rw-r--r--, for example), since you cannot traverse into a file (whereas you can into a directory), it’s generally OK.

Also – sometimes when directory listings are pulled, the symlink is fully dereferenced, so something that appears to be in, say, $SPLUNK_HOME/etc/deployment_apps but is really in, say, /some/other/place may not get deployed, because Splunk decides it’s not where it “belongs”.

Also – checksums can be computed on the symlink and not the actual file in some (perhaps all) instances: so if, for example, you have the same outputs.conf in several apps by way of a symlink, and you change it in one, the checksums for all the others may (and typically do) not get updated … which can leave your configs in an inconsistent state, because not all the locations that should have received the updated outputs.conf actually did – they’re symlinks rather than real files, and the checksum may not update for those particular apps.
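
If you want to find out whether symlinks have already crept into a deployment, a quick audit is cheap – a minimal sketch using the directory referenced above (adjust the path to match your actual install):

# list any symlinks lurking under the deployment apps directory
find "$SPLUNK_HOME/etc/deployment_apps" -type l -ls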

Moral of the story?

Unless you really know what you’re doing, never use symlinks with Splunk.