antipaucity

fighting the lack of good ideas

sometimes i’m off

It took Apple 5.5 (or 6, if you count last week as really hitting it) years to introduce what I called the MacBook Flex back in 2015.

With the 13″ MacBook Pro available in an M1-powered edition (which is so much better than the top-end MBP from 2019…it’s not even funny), and now a 5G-enabled iPad Pro running on the M1 … it’s here.

think-read-speak

deeply-broadly-carefully

think-read-speak
deeply-broadly-carefully

Please feel free to use/share/copy/adapt this image

remembering sqrt

A couple weeks ago, some folks in the splunk-usergroups Slack helped me use accum and a modulus calculation to turn a list into a grid menu.

My original search had been along the lines of:

| inputlookup mylookup
| stats count by type
| fields - count
| transpose
| fields - column

Which was great … until my list grew to more than about 12 entries (and scrolling became a pain).

A couple folks here helped me flip it to this format:

| inputlookup mylookup
| stats count by type
| eval num=1
| accum num
| eval num=num-1
| eval mod=floor(num/12)
| eval type.{mod}=type
| fields - mod num type count
| stats list(*) as *

Which works awesomely.

Unless the modulus value (currently 12) is too small for the list: once the total grows past modval^2 (144 entries here), each individual box is no longer in alphabetical order (with ordering continuing alphabetically from box to box).

So I made this modification so that regardless of the size of the list, the grid will automanage itself:

| inputlookup mylookup
| stats count by type
| eventstats count as _tot
| eval modval=ceil(sqrt(_tot))
| eval num=1
| accum num
| eval num=num-1
| eval mod=floor(num/modval)
| eval type.{mod}=type
| fields - modval mod num type count
| stats list(*) as *
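To sanity-check the bucketing logic without needing a lookup, here's a run-anywhere sketch using makeresults with made-up item names (30 of them, so modval comes out to ceil(sqrt(30)) = 6):

| makeresults count=30
| streamstats count as num
| eval type="type-".num
| eval num=num-1
| eventstats count as _tot
| eval modval=ceil(sqrt(_tot))
| eval mod=floor(num/modval)
| eval type.{mod}=type
| fields - _time modval mod num type
| stats list(*) as *

You should get 5 columns (type.0 through type.4) of 6 entries each.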

Dunno if that’ll help anyone else, but I wanted to share back the self-managing aspect I added in case anyone is interested 🙂

4 places to test your internet connectivity

a poor user’s guide to accelerating data models in splunk

Data Models are one of the major underpinnings of Splunk’s power and flexibility.

They’re the only way to benefit from the powerful pivot command, for example.

They underlie Splunk Enterprise Security (probably the biggest “non-core” use of Splunk amongst all their customers).

Key to achieving peak performance from Splunk Data Models, though, is that they be “accelerated”.

Unfortunately (or, fortunately, if you’re administering the environment, and your users are mostly casually-experienced with Splunk), the ability to accelerate a Data Model is controlled by the extensive RBACs available in Splunk.

So what is a poor user to do if they want their Data Model to be faster (or even “complete”) when using it to power pivot tables, visualizations, etc?

This is something I’ve run into with customers who don’t want to give me higher-level permissions in their environment.

And it’s something you’re likely to run into – if you’re not a “privileged user”.

Let’s say you have a Data Model that’s looking at firewall logs (cisco ios syslog). Say you want to look at these logs going back over days or weeks, and display results in a pivot table.

If you’re in an environment like the one I was working in recently, looking at even 100 hours (slightly over 4 days) worth of these events can take 6, 8, or even 10 minutes to plow through before your pivot can start working (and, therefore, before the dashboard you’re trying to review is fully loaded).

Oh!

One more thing.

That search that’s powering your Data Model? Sometimes, for unknown reasons that I haven’t had time to fully ferret out, it will fail to return “complete” results (compared to running the same search in Search).

So what is a poor user to do?

Here’s what I’ve done a few times.

I schedule the search to run at some regular interval (maybe every 4 or 12 hours) via a scheduled Report.

And I have the search do an outputlookup to a CSV file.
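For the sake of illustration (the index, sourcetype, field names, and lookup filename here are placeholders I’ve made up, not anything environment-specific), the scheduled Report’s search might look something like:

index=firewall sourcetype=cisco:ios earliest=-12h
| stats count by src_ip dest_ip action
| outputlookup firewall_summary.csv

Each run overwrites firewall_summary.csv with a fresh summary for the Data Model to read.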

Then in my Data Model, instead of running the “raw search”, I’ll do the following:

| inputlookup <name-of-generated-csv>

That’s it.

That’s my secret.

When your permissions won’t let you do “what you want” … pretend you’re Life in Ian Malcolm‘s mind – find a way!

libraries should be print-on-demand centers – especially for old/unusual works

Want to reinvigorate old texts and library patronage? Turn libraries into print-on-demand book “publishers” for works in the public domain, works which aren’t under copyright in the current country, or works covered by some kind of library version of the CCLI that churches use for music!

This idea came to me after reading this blog post from the Internet Archive (famous for the Wayback Machine).

Libraries have always bought publisher’s products but have traditionally offered alternative access modes to these materials, and can again. As an example let’s take newspapers. Published with scoops and urgency, yesterday is “old news,” the paper it was printed on is then only useful the next day as “fish wrap”– the paper piles up and we felt guilty about the trash. That is the framing of the publisher: old is useless, new is valuable.

…the library is in danger in our digital world. In print, one could keep what one had read. In digital that is harder technically, and publishers are specifically making it harder.

So why not enable a [modest] money-making function for your local library? With resources from places like the Internet Archive, the Gutenberg Project, Kindle free books, blog posts, and on and on – there’s a veritable cornucopia of formerly-available (or only digitally-available) material that has value, but whose availability is sadly lacking: especially for those who don’t have reliable internet access, eReaders, etc. (Or folks like me who don’t especially like reading most books (especially fiction) on a device.)

I’d wager Creative Commons could gin-up some great licenses for this!

Who’s with me‽

chelsea troy – designing a course

Via the rands-leadership Slack (in the #i-wrote-something channel), I found an article written on ChelseaTroy.com that was [the last?] in her series on course design.

While I found part 9 interesting, I was bummed there were no internal links to the other parts of the series – at least to previous parts, even if future parts can’t yet be linked from a given post.

To rectify that for my 6 readers, and as a resource for myself, here is a table of contents for her series:
  1. What will students learn?
  2. How will the sessions go?
  3. What will we do in a session?
  4. Teaching methods for remoteness
  5. Why use group work?
  6. Dividing students into groups
  7. Planning collaborative activities
  8. Use of surveys
  9. Iterating on the course
She also has some other related, though not part of the “series”, posts I found interesting:
  1. Learning to teach a course
  2. Planning and surviving a 3-hour lecture
  3. Resources for programming instructors
  4. Syllabus design

If you notice future entries to this series (before I do), please comment below so I can add them 🤓