antipaucity

fighting the lack of good ideas

you can’t disaggregate

Had a customer recently ask how to disaggregate a Splunk search that had aggregated fields, because aggregated fields export to CSV horribly.

Here’s the thing.

You can’t disaggregate aggregated fields.

And there’s a Good Reason™, too: aggregation, by definition, is a one-way street.

You can’t un-average something.

Average is an aggregation function.

So why would you think you could disaggregate any other Splunk aggregation operation (like values or list)?

You can’t.

And you shouldn’t be able to (as nice as the theoretical use case for it might be).

So what is a body to do when you have a use case for a clean-to-export report – one that looks as if it had been aggregated, but where every field in each row plunks-out cleanly to a single comma-separated value?

Here’s what I did:

{parent search}
| join {some field that'll exist in the subsearch}
[ search {parent search}
 | stats {some stats functions here} ]
| fields - {whatever you don't want}
| sort - {fieldname}

What does that end up doing?

The subsearch is identical to the outer search, plus whatever filtering/where/|stats you might want/need to do.

Using the resultant, filtered set, join on a field you know will be unique [enough].

Then sort however you’d like, and remove whatever fields you don’t want in the final display.
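
For concreteness, here’s what that skeleton might look like filled-in – the index, sourcetype, and field names below are made-up stand-ins, so substitute your own:

index=web sourcetype=access_combined
| join clientip
 [ search index=web sourcetype=access_combined
  | stats count as hits dc(uri_path) as unique_pages by clientip ]
| fields - _raw punct
| sort - hits

Each row keeps its own per-event fields plus the hits and unique_pages columns from the subsearch – and since each of those is a single scalar value, every cell exports to CSV cleanly.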


Of course, be sure your subsearch will complete in under 60 seconds and/or return fewer than 10,000 lines (unless you’ve modified your Splunk limits.conf).
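
If you do need to bump those ceilings, the knobs live in the [subsearch] stanza of limits.conf. A minimal sketch – the values here are illustrative only, so check the limits.conf spec for your Splunk version before changing anything:

# illustrative overrides; shipped defaults are maxout = 10000, maxtime = 60
[subsearch]
maxout = 50000
maxtime = 120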

about burning bridges

While you should never be the one to burn the bridge of a relationship, sometimes you need to be aware that the other person has placed dynamite around the joints, soaked the whole shebang in gasoline, and is walking around on top with a lit road flare – and you don’t want to be around when the conflagration begins*.


* Though – sitting far enough away that the shards and embers won’t hit you while chanting, “burn, baby, burn!”, can be quite entertaining

never be the one to burn the bridge

But always carry a can of gasoline and some matches – because sometimes you do need to be the one to break the relationship.

ben thompson missed *a lot* in his microsoft-github article

Ben Thompson is generally spot-on in his analysis of industry goings-on. But he missed a lot in The Cost of Developers this week.

Here’s what he got right about this acquisition:

  • Developers can be quite expensive (though, $7.5B (in equity) is only ~$265 per user (which is pretty cheap))
  • Microsoft is betting that a future of open-source, cloud-based applications that exist independent of platforms will be a large-and-increasing share of the future
  • That there is room in that future for a company to win by offering a superior user experience for developers directly, not simply exerting leverage on them
  • Microsoft is the best possible acquirer for GitHub
  • GitHub, having raised $350 million in venture capital, was not going to make it as an independent entity
  • Purely enterprise-focused companies like IBM or Oracle would be tempted to wring every possible bit of profit out of the company
  • What Microsoft wants is much fuzzier: it wants to be developers’ friend
  • [Microsoft] will be ever more invested in a world with no gatekeepers, where developer tools and clouds win by being better on the merits, not by being able to leverage users

And here’s what he missed and/or got wrong:

  • [Microsoft] is in second place in the cloud. Moreover, that second place is largely predicated on shepherding existing corporate customers to cloud computing; it is not clear why any new company — or developer — would choose Microsoft
  • It is very hard to imagine GitHub ever generating the sort of revenue that justifies this purchase price

Some of the below I commented on Google+ yesterday. The rest is in response to more idiocy & paranoia I’ve seen on some technical community mailing lists (bet you didn’t know those still existed) in the last 24 hours, or in response to specific items in Ben’s essay that are shortsighted, misguided, or incredibly wrong.

  • If you cannot see why new users, developers, and companies would go to Microsoft Azure offerings, you don’t understand what they’re doing
    • AWS is huge – but Azure and Google Cloud Platform (GCP) have huge technical (and economic) advantages
    • Amazon likes to throw new cloud features at the wall like spaghetti to see what sticks; Google and Microsoft have clearly thought-through this whole cloud business, and their platforms make incredibly solid business & technical sense to use over AWS in most use cases (the only [occasional] real exception being “but we already use AWS”). Have you not seen the Azure IoT offerings?
  • GitHub has not yet been profitable, and would probably have IPO’d (poorly) in the next year to keep from running out of cash
    • Arguably, GitHub would never become profitable on their own
  • Microsoft has a long history of contributing to OSS projects (most-to-all of which are on GitHub)
    • If they were going to acquire anyone in this space, GitHub is the only one that makes any sense
  • (This was tangentially-mentioned in Ben’s essay by linking to his analysis of the Microsoft-LinkedIn acquisition in 2016.) Alongside the LinkedIn acquisition a couple years back – which has an obvious play for an eventual IDaaS (fully-and-forever integration with Office365 regardless of where you work, everything follows automagically) – offering better integrations with their existing tools is a Good Thing™ for devs and end users alike. Visual Studio already had git integrations, and they should only get better with this acquisition – and making those excellent developer tools even better means they’ll be better whether devs are using GitHub, Bitbucket, GitLab, etc
  • The more-or-less instantaneous expansion of offered items in the Windows Store (some kind of cloud-based/distributed build-on-demand for software when you want it (and which fork you want)) to “everything” on GitHub is a brilliant possibility
    • In light of Apple’s announcement yesterday about enabling iOS apps to come to macOS over the next releases of iOS and macOS, this should have been at the forefront of most people’s thought processes (after the keynote was done, of course)
    • Through this acquisition, it’s [probably] likely more developers will use Microsoft APIs (.NET, etc) in their projects
  • Echoing Ballmer’s chant, “Developers! Developers! Developers!”, while Microsoft doesn’t really care about Windows anymore (just look at the recent reorg), it is still THE most widespread end-user platform in the world – and bringing millions more developers “into the fold” is genius
    • Even if some small percentage will opt to go elsewhere, most won’t change because, well, change is hard
    • All the developers Microsoft had that weren’t yet using GitHub will have a huge reason to start
  • Microsoft has typically been a buy-don’t-build shop (there are exceptions, but look at the original DOS, PowerPoint, SQL Server, Skype, their failed attempt at Yahoo!, etc): they could have spent 5-10x as much building something “as good as” GitHub, or they could buy it. They opted for the “buy” – via equity, note, and not cash – which is smart from several business viewpoints, not least of which is the “enforced” interest the GitHub subsidiary (with its new CEO, etc) will have in continuing to ensure it is The place for developers to put their projects: after all, if that standing drops considerably, the equity GitHub got in the deal is going to drop, too

on internet sales tax

The debate is raging again as the Supreme Court of the United States is getting ready to make a decision on collecting sales tax for online sales.

I’ve read many viewpoints, both supporting and opposing, on requiring businesses to collect sales tax from their customers.

And my [current] view is that businesses selling online should collect the sales tax you would have paid if you had bought in person.

Company in Oregon? No sales tax. Company in Kentucky? Sales tax.

Don’t collect it for wherever the buyer happens to be: collect it based on where the seller is.

Simple.

Straightforward.

And it’s something the merchant is already set up to do.


more thoughts on `|stats` vs `|dedup` in splunk

Yesterday I wrote-up a neat little find in Splunk wherein running stats count by ... is substantially faster than running dedup ....
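
(For context, the two searches were shaped roughly like the pair below – the index and field names are changed, but the structure is the same:

index=proxy earliest=-24h
| stats count by src_ip

index=proxy earliest=-24h
| dedup src_ip

Both answer “what are the unique values of this field?” – they just get there very differently.)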

After some further reflection over dinner, I figured out the major portion of why this is – and I feel a little dumb for not having thought of it before. (A coworker added some more context, but it’s a smaller reason why one is faster than the other.)

The major reason stats count by... is faster than dedup ... is that stats can hand-off the counting process to something else (though, even if it doesn’t, incrementing a hashtable entry by 1 every time you encounter an instance isn’t terribly computationally complex) and keep going.

In contrast, dedup must compare every individual returned event’s field that matches what you’re trying to dedup to its growing list of unique entries for that field.

In the particular case I was seeing yesterday, that means that every single event in the list of 4,000,000 events returned by the search has to be compared one at a time to a list (that I know is going to top out at about 11,000). To use Big-O notation, this is an O(n·m) operation (bordering on O(n²))!

That initial list of length m fills pretty quickly (it is, after all, only going to get to ~11,000 total entries (in this case)), but as it grows to its max, it gets progressively harder and harder to check whether or not the next event has already been dedup’d.

At ~750,000 events returned (roughly 1/5 my total), the list of unique field values was 98% complete – yet there were still ~3.2 million events left to go (to find just 2% more unique field values).

Those last 3.2 million events each need to be checked against the list of >10,500 entries – which means, roughly, 16.8 billion comparisons still need to be made!

(Because linear searching finds what it’s looking for, on average, by the time it has traversed half the list. If the list is being created in a slightly more efficient manner (say a heap or [balanced] binary search tree), it will still take ~43 million comparisons (3.2 million * log2(11,000)).)
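
Spelled out, the back-of-the-envelope arithmetic is:

\[ 3.2\times10^{6} \times \tfrac{10{,}500}{2} \approx 1.68\times10^{10} \ \text{comparisons (linear list)} \]

\[ 3.2\times10^{6} \times \log_2(11{,}000) \approx 3.2\times10^{6} \times 13.4 \approx 4.3\times10^{7} \ \text{comparisons (balanced tree)} \]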

Compare this to the relative complexity of using |stats count by ... – it still has to run through all 4 million events, but all it is doing is adding one to the tally for every value that shows up in that particular field – IOW, it “only” has to do a total of 4 million [simple] things (because it does need to look at every event returned). dedup, at a minimum, is going to do ~54 million comparisons (4 million * log2(11,000)) – and probably a lot more, given it doesn’t merely take 13x the time to run, but closer to 25x.

The secondary contributing factor – important, but not as much a factor as what I covered above – is that dedup must process the whole event, whereas stats chucks everything that isn’t part of what it’s counting (so if an event is 1KB in size, dedup has to carry the whole KB, while stats is only looking at maybe 1/10 the total (if you include a couple extra fields)).

Another neat aspect of using |stats is that it creates a table for you – if you’re running |dedup, you then have to |table ... to get the fields you want displayed how you want.

And adding |table adds to the run time.
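
To make that concrete, here’s a hypothetical two-field version of each (made-up index and fields again):

index=proxy earliest=-24h
| stats count by src_ip dest_port

index=proxy earliest=-24h
| dedup src_ip dest_port
| table src_ip dest_port

The |stats version is already a clean table; the |dedup version needs the extra |table pass just to display the same thing.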

So there you have it – turns out those CompSci 201 classes do come in handy 18 years later 🤓

document what didn’t work

In a recent episode of Paul’s Security Weekly, an off-hand comment was made about documentation: you shouldn’t merely document what to do, nor even why, but also what you tried that didn’t work (ie, augment the status quo).

The upshot being, to save whomever comes to this note next (especially if it turns out to be yourself!) the effort you spent that was in vain.

This is similar to a famous quote attributed to Edison,

I have not failed. I’ve just found 10,000 ways that won’t work.

In light of my recommended, preferred practice and policy of “terse verbosity”, I would strongly suggest not placing the “doesn’t work” in-line, typically. Instead, put it in footnotes, an appendix, etc. But always

explain everything you did, but use bullet points if possible, rather than prose form

Loads of other goodies in that episode, too – but this one jumped-out as applicable to everyone.