antipaucity

fighting the lack of good ideas

vampires *can* coexist with zombies

I made a mistake 4 years ago.

I said vampires and zombies couldn’t [long] coexist. Because they’d be competing for the same – dwindling – food source: the living (vs them both being undead).

But I was wrong.

If the universe in which they exist is a mash-up of that of Twilight and iZombie … it could work.

The iZombie universe has zombies that can avoid going “full Romero” by maintaining a steady supply of brains – and they don’t need to eat much to stay “normal”.

The Twilight universe has vampires that can survive on animal blood (or, one presumes, by hitting up blood banks).

So if you were to have “brain banks” the way you have “blood banks” – I could see it working.

Now we just need some iZombie-Twilight hybrid vambie/zompire creatures running around.

george carlin – fear of germs

What we have now is a completely neurotic population obsessed with security and safety and crime and drugs and cleanliness and hygiene and germs… there’s another thing… germs.

Where did this sudden fear of germs come from in this country? Have you noticed this? The media, constantly running stories about all the latest infections – salmonella, E. coli, hantavirus, bird flu – and Americans, they panic easily so now everybody’s running around, scrubbing this and spraying that and overcooking their food and repeatedly washing their hands, trying to avoid all contact with germs. It’s ridiculous and it goes to ridiculous lengths. In prisons, before they give you a lethal injection, they swab your arm with alcohol! It’s true! Yeah! Well, they don’t want you to get an infection! And you could see their point; wouldn’t want some guy to go to hell and be sick! It would take a lot of the sportsmanship out of the whole execution. Fear of germs… why these fucking pussies! You can’t even get a decent hamburger anymore! They cook the shit out of everything now cause everybody’s afraid of food poisoning! Hey, where’s your sense of adventure? Take a fucking chance will you? You know how many people die in this country from food poisoning every year? 9000… that’s all; it’s a minor risk! Take a fucking chance… bunch of goddamn pussies! Besides, what do you think you have an immune system for? It’s for killing germs! But it needs practice… it needs germs to practice on. So listen! If you kill all the germs around you, and live a completely sterile life, then when germs do come along, you’re not gonna be prepared. And never mind ordinary germs, what are you gonna do when some super virus comes along that turns your vital organs into liquid shit? I’ll tell you what you’re gonna do… you’re gonna get sick, you’re gonna die, and you’re gonna deserve it cause you’re fucking weak and you got a fucking weak immune system!

Let me tell you a true story about immunization okay?

When I was a little boy in New York City in the 1940s, we swam in the Hudson River and it was filled with raw sewage okay? We swam in raw sewage! You know… to cool off! And at that time, the big fear was polio; thousands of kids died from polio every year but you know something? In my neighbourhood, no one ever got polio! No one! Ever! You know why? Cause we swam in raw sewage! It strengthened our immune systems! The polio never had a prayer; we were tempered in raw shit! So personally, I never take any special precautions against germs. I don’t shy away from people that sneeze and cough, I don’t wipe off the telephone, I don’t cover the toilet seat, and if I drop food on the floor, I pick it up and eat it! Yes I do. Even if I’m at a sidewalk café! In Calcutta! The poor section! On New Year’s morning during a soccer riot! And you know something? In spite of all that so-called risky behaviour, I never get infections, I don’t get them, I don’t get colds, I don’t get flu, I don’t get headaches, I don’t get upset stomach, you know why? Cause I got a good strong immune system and it gets a lot of practice. My immune system is equipped with the biological equivalent of fully automatic military assault rifles with night vision and laser scopes, and we have recently acquired phosphorous grenades, cluster bombs, and anti-personnel fragmentation mines.

So when my white blood cells are on patrol, reconnoitering my bloodstream, seeking out strangers and other undesirables, if they see any, ANY suspicious looking germs of any kind, they don’t fuck around!
They whip out their weapons; they wax the motherfucker and deposit the unlucky fellow directly into my colon! Into my colon! There’s no nonsense, there’s no Miranda warning, there’s none of that “three strikes and you’re out” shit, first offense, BAM… into the colon you go! And speaking of my colon, I want you to know I don’t automatically wash my hands every time I go to the bathroom okay? Can you deal with that? Sometimes I do, sometimes I don’t. You know when I wash my hands? When I shit on them! That’s the only time. And you know how often that happens? Tops, TOPS, 2-3 times a week tops! Maybe a little more frequently over the holidays, you know what I mean? And I’ll tell you something else my well-scrubbed friends… you don’t always need to shower every day, did you know that? It’s overkill, unless you work out or work outdoors, or for some reason come in intimate contact with huge amounts of filth and garbage every day, you don’t always need to shower. All you really need to do is to wash the four key areas; armpits, asshole, crotch, and teeth. Got that? Armpits, asshole, crotch, and teeth. In fact, you can save yourself a whole lot of time if you simply use the same brush on all four areas!

https://www.youtube.com/watch?v=X29lF43mUlo https://www.lingq.com/sv/lesson/george-carlin-fear-of-germs-235986

don’t worry about the mules…

Don't worry about the mules... just load the wagon.

tesla’s cybertruck [almost] does two things i’ve said for a long time

“Tesla will add solar power to the Cybertruck to generate 15 miles per day. Fold-out solar wings for the Cybertruck would generate 30 to 40 miles per day. The average daily commute in the US averages 30 miles per day.”

https://www.nextbigfuture.com/2019/11/solar-power-tesla-cybertruck-could-have-free-15-40-mile-daily-commutes.html (https://twitter.com/elonmusk/status/1197889310550216704)
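Those numbers are easy to sanity-check against each other. Here’s a minimal sketch using only the figures from the quote above – nothing here is measured, it just restates the claimed miles against the claimed average commute (SolarCommute is an illustrative name):

```java
public class SolarCommute {
    public static void main(String[] args) {
        double commute = 30.0;  // average US daily commute (miles), per the quote
        double builtIn = 15.0;  // claimed miles/day from built-in solar
        double wingsLo = 30.0;  // claimed miles/day with fold-out wings (low end)
        double wingsHi = 40.0;  // claimed miles/day with fold-out wings (high end)

        System.out.printf("Built-in panels cover %.0f%% of the average commute%n",
                100 * builtIn / commute);        // 50%
        System.out.printf("Fold-out wings cover %.0f%% to %.0f%%%n",
                100 * wingsLo / commute,         // 100%
                100 * wingsHi / commute);        // ~133%
    }
}
```

In other words, even the base claim covers half the average commute, and the fold-out-wings claim covers all of it and then some.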

Or remember my comments on SolarCity 3 years ago?

Offering a solar tonneau cover for the bed – optional or standard – is an absolute no-brainer. When you own the solar production plant, why wouldn’t you include it?

But more than this, the multi-motor options are a real-world implementation of something I’ve been saying for 20+ years: it makes far more sense to put a motor at (or very near) each wheel – or at least each axle – in an electric vehicle than it does to have one that’s distributing its work everywhere.

Sure, running the cabling to each wheel/axle is a little complicated – but it’s a lot less complicated than drivetrains.

ben thompson missed *a lot* in his microsoft-github article

Ben Thompson is generally spot-on in his analysis of industry goings-on. But he missed a lot in The Cost of Developers this week.

Here’s what he got right about this acquisition:

  • Developers can be quite expensive (though, $7.5B (in equity) is only ~$265 per user (which is pretty cheap) – see the quick math after this list)
  • Microsoft is betting that a future of open-source, cloud-based applications that exist independent of platforms will be a large-and-increasing share of the future
  • That there is room in that future for a company to win by offering a superior user experience for developers directly, not simply exerting leverage on them
  • Microsoft is the best possible acquirer for GitHub
  • GitHub, having raised $350 million in venture capital, was not going to make it as an independent entity
  • Purely enterprise-focused companies like IBM or Oracle would be tempted to wring every possible bit of profit out of the company
  • What Microsoft wants is much fuzzier: it wants to be developers’ friend
  • [Microsoft] will be ever more invested in a world with no gatekeepers, where developer tools and clouds win by being better on the merits, not by being able to leverage users
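That per-user figure in the first bullet is simple division – here’s the quick math promised above, with the user count backed out of the two numbers (treat ~28M as an assumption, though it’s roughly what GitHub reported at the time):

```java
public class DealMath {
    public static void main(String[] args) {
        double dealValue = 7.5e9;  // $7.5B, paid in equity
        double users     = 28e6;   // ~28M GitHub users (assumed, roughly as reported)
        System.out.printf("~$%.0f per user%n", dealValue / users); // ~$268
    }
}
```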

And here’s what he missed and/or got wrong:

  • [Microsoft] is in second place in the cloud. Moreover, that second place is largely predicated on shepherding existing corporate customers to cloud computing; it is not clear why any new company — or developer — would choose Microsoft
  • It is very hard to imagine GitHub ever generating the sort of revenue that justifies this purchase price

Some of what follows I commented on Google+ yesterday. The rest is in response to more idiocy & paranoia I’ve seen on some technical community mailing lists (bet you didn’t know those still existed) in the last 24 hours, or in response to specific items in Ben’s essay that are shortsighted, misguided, or incredibly wrong.

  • If you cannot see why new users, developers, and companies would go to Microsoft Azure offerings, you don’t understand what they’re doing
    • AWS is huge – but Azure and Google Cloud Platform (GCP) have huge technical (and economic) advantages
    • Amazon likes to throw new cloud features at the wall like spaghetti to see what sticks; Google and Microsoft have clearly thought through this whole cloud business, and it makes incredibly solid business & technical sense to use them over AWS in most use cases (the only [occasional] real exception being “but we already use AWS”). Have you not seen the Azure IoT offerings?
  • GitHub has not yet been profitable, and would probably have IPO’d (poorly) in the next year to keep from running out of cash
    • Arguably, GitHub would never become profitable on their own
  • Microsoft has a long history of contributing to OSS projects (most-to-all of which are on GitHub)
    • If they were going to acquire anyone in this space, GitHub is the only one that makes any sense
  • (This was tangentially-mentioned in Ben’s essay by linking to his analysis of the Microsoft-LinkedIn acquisition in 2016.) Alongside the LinkedIn acquisition a couple years back (which has an obvious play for an eventual IDaaS (fully-and-forever integration with Office365 regardless of where you work, everything follows automagically)), offering better integrations with their existing tools (Visual Studio already had git integrations – they should only get better with this acquisition) is a Good Thing™ for devs and end users alike (because making those excellent developer tools even better means they’ll be better whether devs are using GitHub, Bitbucket, GitLab, etc)
  • The more-or-less instantaneous expansion of offered items in the Windows Store (some kind of cloud-based/distributed build-on-demand for software when you want it (and which fork you want)) to “everything” on GitHub is a brilliant possibility
    • In light of Apple’s announcement yesterday about enabling iOS apps to come to macOS over the next releases of iOS and macOS, this should have been at the forefront of most people’s thought processes (after the keynote was done, of course)
    • Through this acquisition, it’s [probably] likely more developers will use Microsoft APIs (.NET, etc) in their projects
  • Echoing Ballmer’s chant, “Developers! Developers! Developers!”, while Microsoft doesn’t really care about Windows anymore (just look at the recent reorg), it is still THE most widespread end-user platform in the world – and bringing millions more developers “into the fold” is genius
    • Even if some small percentage will opt to go elsewhere, most won’t change because, well, change is hard
    • All the developers Microsoft had that weren’t yet using GitHub will have a huge reason to start
  • Microsoft has typically been a buy-don’t-build shop (there are exceptions, but look at the original DOS, PowerPoint, SQL Server, Skype, their failed attempt at Yahoo!, etc): they could have spent 5-10x as much building something “as good as” GitHub, or they could buy it. They opted to buy – with equity, note, not cash. That’s smart from several business viewpoints, not least of which is the “enforced” interest the GitHub subsidiary (with its new CEO, etc) will have in continuing to ensure it is The place for developers to put their projects – after all, if that drops considerably, the value of the equity GitHub got in the deal drops with it

but, i got them on sale!

Back in August 2008, I had a one-week “quick start” professional services engagement in Nutley, New Jersey. It was supposed to be a super simple week: install HP Server Automation at BT Global.

Another ProServe engineer was onsite to set up HP Network Automation.

Life was gonna be easy-peasy – the only deliverable was to set up and verify a vanilla HPSA installation.

Except, like every Professional Services engagement in history, all was not as it seemed.

First monkey wrench: our primary technical contact / champion was an old-hat Sun Solaris fan (to the near-exclusion of any other OS for any purpose – he even wanted to run SunOS on his laptop).

Second monkey wrench: expanding on the first, our technical contact was super excited about the servers he’d gotten from Sun just the weekend before because they were “on sale”.

It’s time for a short background digression. Because technical intricacies matter.

HP Server Automation was developed on Red Hat Linux. It worked great on RHEL. But, due to some [large] customer requests, it also supported running on Sun Solaris.

In 2005, Sun introduced a novel architecture dubbed “Niagara” – the UltraSPARC T1 – which it offered in its T1000 and T2000 series servers. Niagara did several clever things: it ran multiple hardware threads per core, with as many as 32 threads executing simultaneously (8 cores × 4 threads per core).

According to AnandTech, the UltraSPARC T1 was a “72 W, 1.2 GHz chip almost 3 times (in SpecWeb2005) as fast as four Xeon cores at 2.8 GHz”.

But there is always a tradeoff. The tradeoff Sun chose for the first CPU in the product line was to share a single FPU (floating-point unit) among all of the integer cores and pipelines. For workloads that mostly involve static / simple data (i.e., not much in the way of calculation), these chips were blazingly fast.

But sharing an FPU brings problems when you actually need to do floating-point math – which cryptographic algorithms and protocols all end up relying upon when gathering entropy for their random-value generation. Why does this matter? Well, in the case of HPSA, not only is all interprocess, intraserver, and interserver communication secured with HTTPS certificates, but because large swaths of the product are written in Java, each JVM needs to emulate its own FPU – so the T1’s single FPU is not only shared among all of the integer cores, it is further time-sliced and shared among every JRE instance.

At the time, the “standard” reboot time for a server running in an SA Core was generally benchmarked at ~15-20 minutes. That time encompassed all of the following:

  • stop all SA processes (in the proper order)
  • stop Oracle
  • restart the server
  • start Oracle
  • start all SA components (in the proper order)

As you’ll recall from my article on the Sun JRE 1.4.x from 6.5 years ago, there is a Java component (the Twist) that already takes a long time to start as it seeds its entropy pool.

So when the Twist is sharing that single FPU not only with other JVMs, but with every other process that might end up needing it, the total start time is extended dramatically.

How dramatically? Shutdown alone was taking upwards of 20 minutes. Startup was north of 35 minutes.

That’s right – instead of ~15-20 minutes for a full restart cycle, if you ran HPSA on a T1-powered server, you were looking at ~60+ minutes to restart.
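For a feel of how much entropy seeding alone can cost a Java process, here’s a minimal sketch on a modern JVM. It’s illustrative only – the Twist ran on the old Sun JRE, and SeedTiming is a made-up name, not anything from HPSA:

```java
import java.security.SecureRandom;

public class SeedTiming {
    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        // getInstanceStrong() typically binds to a blocking entropy source
        // (/dev/random on Linux), so the first seed can stall until enough
        // entropy is available - and on a T1, that work also queued up
        // behind the single shared FPU
        SecureRandom rng = SecureRandom.getInstanceStrong();
        byte[] seed = rng.generateSeed(32);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.printf("gathered %d seed bytes in %d ms%n", seed.length, elapsedMs);
    }
}
```

(On later JVMs, the usual mitigation for slow seeding is pointing the JRE at a non-blocking source with -Djava.security.egd=file:/dev/./urandom – though that addresses blocking on the entropy pool, not a CPU whose floating-point math is itself the bottleneck.)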

Full restarts, while not incredibly common, are not exactly rare, either.

At the time, it was not unusual to want to fully restart an HPSA Core 2-3 times per month. And during initial installation and configuration, restarts need to happen 4-5 times in addition to the number of times various components are restarted during installation as configuration files are updated, new processes and services are started, etc.

What should have been about a one-day setup, with 2-3 days of knowledge transfer, turned into nearly 3 days just to install and initially configure the software.

And why were we stuck on this “revolutionary” hardware? Because of what I noted earlier: our main technical contact was a die-hard Solaris fanboi who’d gotten these servers “on sale” (because their Sun rep “liked them”).

How big a “sale” did he get? Well, his sales rep told him they were getting these last-model-year boxes for 20% off list plus an additional 15% off! That sounds pretty good – depending on how you do the math, he was getting somewhere between 32% and 35% off the list price – for a little over $14,000 apiece (they’d bought two servers: one to run the Oracle RDBMS (which Oracle themselves recommended not running on the T1 CPU family), and the other to run HPSA proper).

Except his sales rep lied. Flat-out lied. How do I know? Because I used Sun’s own server configurator site and was able to configure two identical servers for just a smidge over $15,000 each – with no discounts. That means they got 7% off list… tops.
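Here’s that math spelled out – the list price comes from Sun’s configurator, the rest from the numbers above (DiscountMath is just an illustrative name):

```java
public class DiscountMath {
    public static void main(String[] args) {
        double listPrice = 15000.0; // per-server configurator price, no discounts
        double paidPrice = 14000.0; // "a little over $14,000 apiece"

        double claimedCompounded = 1 - (0.80 * 0.85); // 20% off, then another 15% off
        double claimedNaiveSum   = 0.20 + 0.15;       // if you just add the discounts
        double actual            = 1 - paidPrice / listPrice;

        System.out.printf("Claimed discount (compounded): %.0f%%%n", 100 * claimedCompounded); // 32%
        System.out.printf("Claimed discount (naive sum):  %.0f%%%n", 100 * claimedNaiveSum);   // 35%
        System.out.printf("Actual discount off list:      %.1f%%%n", 100 * actual);            // ~6.7%
    }
}
```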

So not only were they running hardware barely discounted off list (and, interestingly, only slightly cheaper – by less than $2000 – than the next-generation T2-powered servers, which had a single FPU per core, not per CPU, and which still had some performance issues but at least weren’t dog-vomit slow), but they were running on Solaris – which had always been a second-class citizen when it came to HPSA performance: all things being roughly equal, x86 hardware running RHEL would always beat the pants off SPARC hardware running Solaris under Server Automation.

For kicks, I configured a pair of servers from Dell (because their online server configurator worked a lot better than any other I knew of, and because I wanted to demonstrate that just because SA was an HP product didn’t mean you had to run HP servers), and was able to massively out-spec two x86 servers for less than $14,000 a pop (more CPU cores, more RAM, more storage, etc) and present my findings as part of our write-up of the week.

Also for kicks, I demoed SA running in a 2-CPU, 4GB VM on my laptop – rebooting faster than either T1000 server they had purchased.

What’s the moral of this story? There are two (at least):

  1. Always, always, always find out from your vendor whether they have a preferred or suggested architecture before blithely buying hardware from your favorite sales rep, and
  2. Be ever ready and willing to kick your preconceived notions to the sidelines when presented with evidence that they are not merely ill-thought-out, but out-and-out, objectively wrong

This is a fundamental tenet of automation:

“Too many people try to take new tools and make them fit their current processes, procedures, and policies – rather than seeing what policies, procedures, and processes are either made redundant by the new tools, or can be improved, shortened, or – wait for it – automated!”

You must always be reviewing and rethinking your preconceived notions, what policies you’re currently following, etc. As I heard recently, you need to reverse your benchmarks: don’t ask, “why are we doing X?”; ask, “what would happen if we didn’t do X?”

That was a question never asked by anyone prior to our arrival to implement what sales had sold them.

what is “plan b” for iot security?

Schneier has a recent article on security concerns for IoT (internet of things) devices – IoT Cybersecurity: What’s Plan B?

We can try to shop our ideals and demand more security, but companies don’t compete on IoT safety — and we security experts aren’t a large enough market force to make a difference.

We need a Plan B, although I’m not sure what that is. Comment if you have any ideas.

There are loads of great comments on the post.

Here’s the start of some of my thoughts:

There are a host of avenues that need to be explored and addressed regarding device security in general, and IoT security in particular.

Any certification program could be good… right up until the vendor goes out of business. Or ends the product line. Or ends formal support. Unless we go to a lease model for everything, you’re going to have unsupported/unsupportable devices out there.

We can’t have patches ad infinitum because it’s not practical: every vendor EOLs products (from OSes to firearms to DB servers to cars, etc).

A few things which would be good:

  • safe/secure by default from the vendor – you have to manually de-safe it to use it (like a rifle which only becomes usable/dangerous/operable when you load a cartridge and flip the safety off)
  • well-known, highly-publicized support lifecycles (caveating the vendor going out of business)
  • related to the above, notifications from the device as it nears end of support
  • notifications from the device as well as the vendor that updates/patches are available
  • liability regulations – and an associated insurance structure – affecting businesses which choose to offer IoT devices across a few levels:
    1. here it is :: you deal with it || no support, no insurance, whatever risk is there is your problem
    2. patches / updates for 1 year || basic insurance / guarantee of operation through supported period, as long as you’re patched up to date
    3. patches / updates for 3 years ||
    4. patches / updates for 5 years || first-level business offering || insurance against hacks / flaws that have been disclosed for more than 90 days so long as you have patched
    5. patches / updates for 10 years || enterprise / long-term support || “big” insurance coverage (up to a year, so long as you’re up-to-date) || proactive notifications from the vendor to customers regarding flaws, patches, etc

There are probably other things which need to be considered.

But there’s my start.