antipaucity

fighting the lack of good ideas

splunk: match a field’s value in another field

Had a Splunk use-case present itself today: needing to determine if the value of one field was found in another – specifically, deciding if a lookup table’s category name for a network endpoint is “the same” as the dest_category assigned by a Forescout CounterACT appliance.

We have “customer validated” names for network endpoints (and we all know how reliable that kind of data can be… the customer is always wrong).

These should be “identical” to the dest_category field assigned by CounterACT … but, as we all know, “should” is a funny word.

What I tried (that does not work) was to get like() to work:

| eval similar=if(like(A,'%B%') OR like(B,'%A%'), "yes", "no")

I tried a slew of variations around the theme of trying to get the value of the field to be in the match portion of the like().

What I ended-up doing (that does work) is this:

| eval similar=if((match(A,B) OR match(B,A)), "yes", "no")

That uses the value of the second field as the regular-expression argument of the match() function.

Things you should do ahead of time:

  • match case between the fields (I used upper(); lower() would work as well)
  • remove “unnecessary” characters – in my case, I yoinked all non-word characters with this replace() eval: | eval A=upper(replace(A,"\W",""))
  • know that there are limitations to this comparison method
    • “BOB” will ‘similar’ match to “BO”, but not “B OB” (hence removing non-word characters before the match())
    • “BOB” is not ‘similar’ to “ROB” – even though, in the vernacular, both might be an acceptable shortening of “ROBERT”
  • if you need more complex ‘similar’ matching, check out the JellyFisher add-on on Splunkbase
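
Putting the pieces together, here’s a minimal end-to-end sketch (A and B stand in for the lookup table’s category field and dest_category):

| eval A=upper(replace(A,"\W","")), B=upper(replace(B,"\W",""))
| eval similar=if((match(A,B) OR match(B,A)), "yes", "no")

Normalizing case and stripping non-word characters first means stray spaces or punctuation (the “B OB” problem above) won’t defeat the match().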

Thanks, also, to @trex and @The_Tick on the Splunk Usergroups Slack #search-help channel for working me towards a solution (even though what they suggested was not the direction I ended up going).

vampires *can* coexist with zombies

I made a mistake 4 years ago.

I said vampires and zombies couldn’t [long] coexist. Because they’d be competing for the same – dwindling – food source: the living (vs them both being undead).

But I was wrong.

If the universe in which they exist is a mash-up of that of Twilight and iZombie … it could work.

The iZombie universe has zombies that can avoid going “full Romero” by maintaining a steady supply of brains – and they don’t need to eat much to stay “normal”.

The Twilight universe has vampires that can survive on animal blood (or, one presumes, by hitting-up blood banks).

So if you were to have “brain banks” the way you have “blood banks” – I could see it working.

Now we just need some iZombie-Twilight hybrid vambie/zompire creatures running around.

how-to timechart [possibly] better than timechart in splunk

I recently had cause to do an extensive trellised timechart for a dashboard at $CUSTOMER in Splunk.

They have a couple hundred locations reporting networked devices.

I needed to report on how many devices they’ve reported every day over the last 90 days (I would have liked to go back further…but retention is only 90 days on this data).

My initial instinct was to do this:

index=ndx sourcetype=srctp site=* ip=* earliest=-90d
| timechart limit=0 span=1d dc(ip) by site

Except… that takes well over an hour to run – so the job gets terminated at ~60 minutes.

What possible other approaches could be made?

🤔

Well.

Here are a few that I thought about:

  1. Use multisearch, and group nine 10-day searches together (see the sketch after this list).
    • I’ve done things like this before with good success. But it’s … ugly. Very, very ugly.
    • You can almost always accomplish what you want via stats, too – but it can be tricky.
  2. Pre-populate a lookup table with older data (a la option 1 above, but done “by hand”), and then just append “more recent” data onto the table in the future.
    • This would give the advantage of getting a longer history going forward
    • Ensuring “cleanliness” of the table would require some maintenance scheduled searches/reports … but it’s doable
  3. Something else … that “happens” to work like a timechart – but runs in an acceptable time frame.
  4. Try binning _time
    • Tried – didn’t work 🤨
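
For the curious, option 1 might look something like the sketch below (same index/sourcetype placeholders as the searches in this post; each subsearch covers one 10-day window, and multisearch runs them at the same time):

| multisearch
    [ search index=ndx sourcetype=srctp site=* ip=* earliest=-90d@d latest=-80d@d ]
    [ search index=ndx sourcetype=srctp site=* ip=* earliest=-80d@d latest=-70d@d ]
    [ search index=ndx sourcetype=srctp site=* ip=* earliest=-70d@d latest=-60d@d ]
    [ search index=ndx sourcetype=srctp site=* ip=* earliest=-60d@d latest=-50d@d ]
    [ search index=ndx sourcetype=srctp site=* ip=* earliest=-50d@d latest=-40d@d ]
    [ search index=ndx sourcetype=srctp site=* ip=* earliest=-40d@d latest=-30d@d ]
    [ search index=ndx sourcetype=srctp site=* ip=* earliest=-30d@d latest=-20d@d ]
    [ search index=ndx sourcetype=srctp site=* ip=* earliest=-20d@d latest=-10d@d ]
    [ search index=ndx sourcetype=srctp site=* ip=* earliest=-10d@d latest=-1d@d ]
| timechart limit=0 span=1d dc(ip) by site

You can see why I call it ugly.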

So what did I do?

I asked for ideas.

If you’re regularly (or irregularly) using Splunk, you should join the Splunk Usergroups Slack.

Go join it now, if you’re not on it already.

Don’t worry – this blog post will be here when you get back.

You’ve joined? Good good. Look me up – I’m @Warren Myers. And I love to help when I can 🤠.

I asked in #search-help.

And within a couple of minutes, I had some ideas from somebody to use the “hidden field” date_day and do a | stats dc(ip) by date_day site. Unfortunately, this data source is JSON that comes in via the HEC – which means the date_* fields never get populated.

Poo.

Lo and behold!

I can “fake” date_day by using strftime!

Specifically, here’s the eval command:

| eval date=strftime(_time,"%Y-%m-%d")

This converts from the hidden _time field (in Unix epoch format) to yyyy-mm-dd.
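
You can sanity-check the conversion with a throwaway makeresults search (the epoch value is just an arbitrary example):

| makeresults
| eval _time=1574640000
| eval date=strftime(_time,"%Y-%m-%d")
| table _time date

That yields date=2019-11-25 (in UTC – strftime renders in your search-time timezone, so the calendar day can shift in other zones).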

This is the 🔑!

What does this line do? It lets me stats-out by day and site (just like timechart does) … but it runs way faster. (Why? I Don’t Know. He’s on third. And I Don’t Give a Darn! (Oh! That’s our shortstop!))

How much faster?

At least twice as fast! This form takes ~2200 seconds to complete – and given that the timechart form was getting nuked at 3600 seconds while only about 70% done (projecting to ~5100 seconds), this is better!

The final form for the search:

index=ndx sourcetype=srctp site=* ip=* earliest=-90d@d latest=-1d@d
| table site ip _time
| eval date=strftime(_time,"%Y-%m-%d")
| stats dc(ip) as inventory by date site
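
If you want the output shaped the way timechart delivers it (one row per day, one column per site), you can pivot the stats results with xyseries – a quick sketch using the field names from the search above:

| xyseries date site inventory

Splunk’s chart visualizations generally expect that pivoted shape; the three-column stats output is handier for tables and further stats work.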

I’ve got this in a daily-scheduled Report that I then pull into Dashboard(s) as needed (no point in running it more often, since it’s summary data that only “changes” (at most) once a day).

Hope this helps somebody – please leave a comment if it helps you!

calvin coolidge on watching your words

three keys to success, from travis chappell

invest your time wisely

don’t worry about the mules…

Don’t worry about the mules… just load the wagon

tesla’s cybertruck [almost] does two things i’ve said for a long time

“Tesla will add solar power to the Cybertruck to generate 15 miles per day. Fold-out solar wings for the Cybertruck would generate 30 to 40 miles per day. The average daily commute in the US averages 30 miles per day.”

https://www.nextbigfuture.com/2019/11/solar-power-tesla-cybertruck-could-have-free-15-40-mile-daily-commutes.html (https://twitter.com/elonmusk/status/1197889310550216704)

Or remember my comments on SolarCity 3 years ago?

Offering a solar option (or standard) tonneau cover for the bed is an absolute no-brainer. When you own the solar production plant, why wouldn’t you include it?

But more than this, the multi-motor options are a real-world implementation of something I’ve been saying for 20+ years: it makes far more sense to put a motor at (or very near) each wheel – or at least each axle – in an electric vehicle than it does to have a single motor distributing its work everywhere.

Sure, running the cabling to each wheel/axle is a little complicated – but it’s a lot less complicated than a drivetrain.