Category Archives: commentary

“like” problems: social ‘voting’ is a bad idea

The news story making the rounds about Facebook the past few days indicates they’re working on a kind of “dislike” button.

The problem with the Facebook “like” button is the same problem Google has with Google+ and their “+1” button: it doesn’t tell you anything meaningful.

Voting on Reddit doesn’t really convey much meaning, either.

Stack Overflow tries to address this with its up/down voting and the ability to see the gestalt of votes as a ratio (if your rep is high enough – an admittedly low bar, but still a bar, and an aspect of the gamification of Stack Overflow). But that doesn’t really cut it, either.

The problem with online “voting” (or “liking”, or “plussing”, etc) is that it is a dimensionless data point.

Does getting 300 “likes” on a post make it “good”? Does it reflect on its quality in any way? Does getting nearly 400 upvotes (and only a handful of downvotes) on a question about MySQL – along with 100+ “favorites” – mean the question is good? Does it show something is popular? Are people clicking the vote mechanism out of peer pressure, because they actually agree, or because they think it needs more visibility? Or something else entirely?

Dimensionless data that gets used as if it has meaning is a problem – one of many problems of social media and web sites in general.

Of course, you will object, quality is a potentially-subjective term – what does “quality” mean, exactly, when talking about a post, website, question, etc? Is it how well-written it is? Is it how long? How funny? How sad?

Take this question I asked on Stack Overflow, “CSS – how to trim text output?” It’s clearly-written, was answered excellently in 2 minutes, and is a “real” problem I had. Yet in the 4.5 years since asking, it’s only gotten 2 votes total (both “up”, but still only two).

Reddit has upvotes and downvotes – and your comment/post score is merely the sum of the ups and downs; below a certain [relative] threshold, you won’t see content unless you ask for it.

One of the biggest problems with all of these systems is that the “score” doesn’t actually tell you anything. An atheist subreddit, for example, will tend to downvote-into-oblivion comments that are theistic in nature (especially from Christians). Quora’s voting system is highly opaque – downvotes don’t really seem to mean much, and upvotes are pretty much just for show.

This derives from the fact that these sites use dimensionless data and try to give it a value or meaning outside of what it really is – a number.

What should be shown is the total number of “votes” of each kind a given post has gotten – positive, negative, reshare, etc – but never combined. A ratio could be displayed, but summing the votes into a single score is a poor plan.

Facebook, Google+, and others should offer various voting options – “up”, “down”, “disagree”, “agree”, “share”, and possibly others – some of which may be mutually-exclusive (you cannot upvote and downvote the same thing), but you might downvote something you agree with (or upvote something you disagree with) just because of how it is written/presented, etc.

And the total of each type of click should be shown – show me 10,000 people disagreed with what I said, 15,000 agreed; 20,000 upvoted, and 30,000 downvoted; 12,000 reshared it (with, or without, comment).
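To make that concrete, here’s a minimal sketch (in Python, with entirely hypothetical names – not any site’s actual API) of keeping one independent tally per reaction type, with up/down kept mutually exclusive per user and nothing ever collapsed into a single score:

```python
from collections import Counter

# Reaction types are tracked separately and never summed into one "score".
REACTIONS = {"up", "down", "agree", "disagree", "share"}

class Post:
    def __init__(self):
        self.tallies = Counter()   # one independent count per reaction type
        self._updown = {}          # each user's current up/down, so the two stay exclusive

    def react(self, user, reaction):
        if reaction not in REACTIONS:
            raise ValueError(f"unknown reaction: {reaction}")
        if reaction in ("up", "down"):
            previous = self._updown.get(user)
            if previous:
                self.tallies[previous] -= 1   # switching sides replaces the old vote
            self._updown[user] = reaction
        self.tallies[reaction] += 1

    def display(self):
        # every count shown on its own – no net score, no sum
        return {r: self.tallies[r] for r in sorted(REACTIONS)}
```

Displayed, a post would read something like `up: 20,000 · down: 30,000 · agree: 15,000 · disagree: 10,000 · share: 12,000` – every dimension visible, none of them combined.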

Using voting as a means of hiding things (and trying to prevent others from seeing them) can be somewhat akin to online bullying – revenge voting has its problems, as does blindly upvoting anything a particular person says/does. Which is why treating (and then displaying) dimensionless data as anything more than a count is dangerous.

half year update: how are my predictions so far?

Back in Feb, I published a list of tech-related predictions for 2015.

How’m I doing?

Let’s see the ones that have happened (or are very close to happening):

  • Itanium OEL’d
  • HP spinning-off business units – sorta, they’re splitting in half
  • IBM is losing value … but not as much as I predicted (yet)
  • cloud is still “a thing” – but it’s gradually becoming less of “a thing”
  • cloud hosting providers are in a price war – so I’ll count this as “kinda”
  • iPad 5 – it’s the iPad Pro, but has the expected resolution (5.6 megapixels)
  • I’m counting the iPad Pro, in conjunction with the keyboard accessory, the MacBook Flex – it’s not x86 (ARM A9X) .. but still iOS, not OS X – so I’m half right
  • Tesla has the Model S in a non-millionaire price point ($57k at the bottom end) .. but it’s not down to the Chevy Volt or Nissan Leaf yet :: then again, the Tesla gets substantially further on its charge than does the Volt or Leaf
  • more cities are entering the “gigacity” club – Salisbury NC just opened the 10-gigabit club

automation is a multiplier

Multipliers. They’re ubiquitous – from ratchet wrenches to fertilizer, blocks-and-tackle to calculators, humans rely on multipliers all the time.

Multipliers are amazing things because they allow an individual to “do more with less” – a single person can build a Coral Castle with nothing more complex than simple machines. Or move 70 people at 70 miles per hour down an interstate merely by flexing his foot and twitching his arm.

Feats and tasks otherwise impossible become possible due to multipliers.

Automation is a multiplier. Some automation is obviously multiplicative – robots on assembly lines allow car manufacturers to output far more vehicles than they could in the pre-robot era. Even the assembly line itself is an automating force, and a multiplier of the number of cars a set number of people can produce in a given time period.

In the ever-more-constrained world of IT that I orbit and transit through – with salary budgets cut or frozen, positions not backfilled, and the ever-growing demands of end-users (whether internal or external), technicians, engineers, project managers, and the like are always being expected to do more with the same, or do more with less.

And that is where I, and the toolsets I work with, come into play – in the vital-but-hidden world of automation. Maybe it’s something as mundane as cutting requisition-to-delivery time of a server or service from weeks to hours. Maybe it’s something as hidden as automatically expanding application tiers based on usage demands – and dropping extra capacity when it’s no longer needed (one of the main selling points of cloud computing). The ROI of automation is always seen as a multiplier – because the individual actor is now able to Get Things Done™ and at least appear smarter (whether they are actually any smarter or not is a totally different question).

Go forth and multiply, my friends.

vision for lexington

Over the past 5 years, I have witnessed some of the growth Lexington KY has started to undergo: the population of the city proper has grown from about 260,000 in 2000 to 295,000 in 2010 to an estimated 315,000 in 2015.

While there seems to be something of a plan/vision for the downtown area, the majority of Lexington (and its urban area) seems to be more-or-less ignored from an infrastructural perspective (the last update was in 2009, and only for a small part of Lexington).

Public Transit

The public transit system, as hard as I am sure Lextran employees work, is underutilized, poorly routed, and has no means of connecting into it from outside Lexington (full route map (PDF)).

In comparison to where I grew up, the Capital District of New York, the public transit system is both too inwardly-focused and too poorly-promoted to be useful for most Lexingtonians. CDTA, for example, has connectors to cities and towns other than just Albany. You can start where I grew up in Cohoes (about 10 miles north of Albany) and get more-or-less anywhere in the greater Capital District by bus. It might take a while, but you can get there (or get close). There are also several Park’n’Ride locations for commuters to take advantage of.

Lextran doesn’t offer anything to connect to Nicholasville, Versailles, or Georgetown. With workers commuting in from those locales (and more – some come from Richmond or Frankfort (or go in the opposite direction)), one would think urban planners would want to offer ways to alleviate traffic congestion. But there is nothing visible along those lines.

Lost Neighborhoods

There are large chunks of Lexington where the houses are crumbling, crime rates are higher than the rest of the city, and the citizens living there are being [almost] actively avoided and/or neglected by the city.

Some limited business development has gone into these neighborhoods (like West Sixth Brewing), but as a whole they are becoming places “to be avoided”, rather than places where anyone is taking the time and effort to improve, promote, and generally bring them in line with the rest of the city.

Yes, everywhere has regions that folks try to avoid, but the lost and dying neighborhoods in Lexington are saddening.

Walkability

Lexington is – in places – a walkable city, but for most of the residential areas, it was/is up to the developers of the subdivisions as to whether or not there are sidewalks. And if they weren’t put in then, getting them done now is like pulling teeth.

Being able to walk to many/most places (or types of places) you might want to go is one of the major hallmarks of a city. One that is only exhibited in pockets in Lexington.

It should even be a hallmark of shopping areas – but look at Hamburg Pavilion. A shopping, housing, and services mini-town (apartments, condos, houses, banking, education, restaurants, clothes, etc), Hamburg is one of the regional Meccas for folks who want to do major shopping trips or eat at nice restaurants. The map (PDF), however (which only shows part of the Hamburg complex), demonstrates that while pockets of the center are walkable, getting from one shopping/eating/entertainment pod to another requires walking across large parking lots – impractical if shopping with children, or when carrying more than a couple bags.

Crosswalks and lighted crossings on major roads, in some cases, leave mere seconds to spare before the light changes – if you’re moving at a crisp clip. Add a stroller, collapsible shopping cart, or heavy book bag, and several crossings become “safe” only if drivers see you are already crossing and wait for you. Stories of pedestrians being hit, like this one, are far too common in local news media.

Employment

There is no lack of employment opportunities in the Lexington area – there are 15 major employers in Lexington, hundreds of small-to-medium businesses running the gamut of offerings from auto dealers to lawn care, IT to healthcare, equine products, home construction, etc; and hundreds of national chains (retail, restaurants, services, etc) are here, too.

Finding said employment can be difficult, though. There are some services, like In2Lex, which send newsletters with employment opportunities – but if you don’t know about them, finding work in the area isn’t as easy as one would think a Chamber of Commerce would want it to be. Yes, employers need to advertise their openings, but even finding lists of companies in the area is difficult.

Connectivity to Other Areas

Direct flights into and out of Lexington Bluegrass Airport reach 15 major metro areas across half the country.

Interstates 75 and 64 cross just outside city limits.

The Underlying Problem

The major problem Lexington seems to have is that it doesn’t know it’s become a decent-sized metropolitan area. There are about 500,000 people in the MSA, or about 12% of the population of the whole state. It’s a little under half the size of the Louisville MSA (which includes a couple counties in Indiana). There are 8 colleges/universities in Lexington alone (PDF), and 15 under an hour from downtown.

To paraphrase Reno NV’s slogan, Lexington is the biggest little town in Kentucky. The last major infrastructural improvement done was Man O’ War Boulevard, completed in 1988 – more than a quarter century ago. There were improvements made to New Circle Road in the 1990s, but that work ended over 15 years ago. Lexington proper was 30% smaller in 1990 than it is now (225,000 vs 315,000).

Lexington’s 65+ year-old Urban Service Area, while great for maintaining the old character of the city and region, hasn’t been reviewed since 1997. A few related changes have been added since, but the last of those was in 2001.

One and a half decades since major infrastructural improvements. Activities like the much-delayed Centre Point (which I agree doesn’t need to be done in the manner originally planned), the now-underway Summit, and other development projects may, eventually, be good for business and the city as a whole, but there has been little-to-no consideration for what will happen with traffic. Traffic and general accessibility are among the core responsibilities of local government.

The diverging diamond interchange installed a couple years back on Harrodsburg Rd was a good improvement to that intersection. But it was only good for that intersection. It alleviated some traffic concerns, crashes, and complications, but only on one road.

Lexington needs leadership that sees not only where the city was 10, 25, 50 years ago, but where it is now and where it wants to be in another 10, 20, 50 years.

My Vision

My vision for Lexington, infrastructurally, includes interchange improvements / rebuilds for more New Circle Road exits. Exit 7, Leestown Road, grants access to Coke, FedEx, Masterson Station, the VA hospital, a BCTC campus, and more. Big Ass Fans is between exit 8 from New Circle and exit 118 of I-75. Exit 9 from New Circle more-or-less exists to provide Lexmark with a way for their employees to arrive. The major employers in the area are great for economic stability. But with traffic congestion, getting into and out of them needs to be as smooth as possible.

West Sixth Brewing and Transylvania University are two of the highlights in an otherwise-aging, -dying, and -lost area of the city. There needs to be a public commitment on the part of both the city and the citizenry to not allow the city to become segregated. Not segregated based on skin tone, but on economic status.

Bryan Station High School has a reputation, deservedly or not, of being one of the worst high schools in the region, because of the dying/lost status of the parts of town it draws from. You can buy a 2 bedroom, 1 bath, 1300 square foot house for under $20,000 near Bryan Station. It needs a little bit of work, but what does that say about the neighborhood?

The leadership of Lexington seems to be ignoring parts of the city that are going downhill, preferring instead to focus on regions that are going up. Ignoring dying parts of the city from an infrastructural perspective isn’t going to make them any better – they will only drag more of the city down with them. As a citizen and a homeowner, I want to see my city do well.

I do not like paying taxes any more than anyone else, but I do like seeing the city taking initiative and working to both heal itself and take steps towards attracting future generations, businesses, and more that we don’t even know are coming.

Lexington has great promise – it is growing, expanding, and burgeoning. But if its leadership – political, business, and citizenry – doesn’t take the time, effort, and money to ensure it’s prepared for this growth, it will become a morass to traverse, live in, and do business with.

Some more interesting regional data (PDF)

what level of abstraction is appropriate?

Every day we all work at multiple levels of abstraction.

Perhaps this XKCD comic sums it up best:

[xkcd comic – title text: “If I’m such a god, why isn’t Maru *my* cat?”]

But unless you’re weird and think about these kinds of things (like I do), you probably just run through your life happily interacting at whatever level seems most appropriate at the time.

Most drivers, for example, don’t think about the abstraction they use to interact with their car. Pretty much every car follows the same procedure for starting, shifting into gear, steering, and accelerating/decelerating: you insert a key (or have a fob), turn it (or push a button), move the drive mode selection stick (gear shift, knob, etc), turn a steering wheel, and use the gas or brake pedals.

But that’s not really how you start a car. It’s not really how you select drive mode. It’s not really how you steer, etc.

But it’s a convenient, abstract interface to operate a car. It is one which allows you to adapt rapidly to different vehicles from different manufacturers which operate under the hood* in potentially very different ways.

The problem with any form of abstraction is that it’s just a summary – an interface – to whatever it is trying to abstract away. And sometimes those interfaces leak. You turn the key in your car and it doesn’t start. Crud. What did I forget to do, or is the car broken? Did I depress the brake and clutch pedals? Is it in Park? Did I make sure to not leave the lights on overnight? Did the starter motor seize? Is there gas in the tank? Did the fuel pump quit? These are all thoughts that might run through your mind (hopefully in decreasing order of likelihood and severity) when the simple act of turning the key doesn’t work like you expect.

For a typical computer user, the only time they’ll even begin to care about how their system really works is when they try to do something they expect it to do … and it doesn’t. Just like drivers don’t think about their cars’ need for the fuel injector system to make minute adjustments thousands of times per second, most people don’t think about what it actually takes to go from typing an address in their browser bar to getting the website returned (or how their computer goes from off to “ready to use” after pushing the power button).

Automation provides an abstraction to manual processes (be it furniture making or tier 1 operations run book scenarios). And abstractions are good things .. except when they leak (or outright break).

Depending on your level of engagement, the abstraction you need to work with will differ – but knowing that you’re at some level of abstraction (and, ideally, which level) is vital to being the most effective at whatever your role is.

I was asked recently how a presentation on the benefits of automation would vary based on audience. The possible audiences given in the question were: engineer, manager, & CIO. And I realized that when I’ve been asked questions like this before, I’ve never answered them wrong, but I’ve answered them very inefficiently: I have never used the level of abstraction to solve the general case of what this question is really getting at. The question is not about whether or not you’re comfortable speaking to any given “level” of customer representative (though that’s important). It is not about verifying you’re not lying about your work history (though that’s also important).

No. That question is about finding out whether you really know how to abstract to the proper level (with leakier abstractions assumed as you go up) for the specific “type” of person you are talking to.

It is vital to be able to do the “three pitches” – the elevator (30 second), the 3 minute, and the 30 minute. Every one will cover the “same” content – but in very different ways. It’s very much related to the “10/20/30 rule of PowerPoint” that Guy Kawasaki promulgates: “a PowerPoint presentation should have ten slides, last no more than twenty minutes, and contain no font smaller than thirty points.” Or, to quote Winston Churchill, “A good speech should be like a woman’s skirt; long enough to cover the subject and short enough to create interest.”

The answer that epiphanized for me when I was asked that question most recently was this: “I presume everyone in the room is ‘as important’ as the CIO – but everyone gets the same ‘sales pitch’ from me: it’s all about ROI. The ‘return’ on ‘investment’ is going to look different from the engineer’s, manager’s, or CIO’s perspectives, but it’s all just ROI.”

The exact same data presented at three different levels of abstraction will “look” different, even though it’s conveying the same thing – because the audience’s engagement is going to be at their level of abstraction (though hopefully they understand at least to some extent the levels above (and below) themselves).

A simple example: it currently takes a single engineer 8 hours to perform all of the tasks related to patching a Red Hat server. There are 1000 servers in the datacenter. Therefore it takes 8000 engineer-hours to patch them all.

That’s a lot.

It’s a crazy lot.

But I’ve seen it countless times in my career. It’s why patching can so easily get relegated to a once-a-year (or even less often) cycle. And why so many companies are woefully out-of-date on their basic systems, exposed to known issues. If your patching team consists of 4 people, it’ll take them a year to patch all 1000 servers – and then they just have to start over again. It’d be like painting the Golden Gate Bridge – an unending process.

Now let’s say you happen to have a management tool available (could be as simple as pssh with preshared SSH keys, or as big and encompassing as Server Automation). And let’s say you have a local mirror of RHN – so you can decide just what, exactly, of any given channel you want to apply in your updates.

Now that you have a central point from which you can launch tasks to all of the Red Hat servers that need to be updated, and a managed source from which each will source their updates, you can have a single engineer launch updates to dozens, scores, even hundreds of servers simultaneously – bringing them all up-to-date in one swell foop. What had taken a single engineer 8 hours is still 8 – but it’s 8 in parallel: in other words, the “same” 8 hours is now touching scores of machines instead of 1 at a time. The single engineer’s efficiency has been boosted by a factor of, say, 40 (let’s stay conservative – I’ve seen this number as high as 1000 or more).
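For illustration only – this is not the tooling described above (pssh or Server Automation would do the job properly), and the hostnames, SSH setup, and yum command below are placeholder assumptions – the “launch to scores of servers at once” idea looks roughly like this:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Placeholder inventory – in practice this comes from your management tool,
# and each host pulls packages from the local RHN mirror, not the internet.
SERVERS = [f"rhel{n:04d}.example.com" for n in range(1, 1001)]

def patch(host):
    """Run one patch cycle on one host over (preshared-key) SSH."""
    result = subprocess.run(
        ["ssh", host, "sudo", "yum", "-y", "update"],
        capture_output=True, text=True,
    )
    return host, result.returncode

# 40 workers mirrors the conservative 40x multiplier from the text: the same
# "8 hours" of engineer attention now drives 40 hosts at a time.
with ThreadPoolExecutor(max_workers=40) as pool:
    for host, rc in pool.map(patch, SERVERS):
        status = "ok" if rc == 0 else f"failed ({rc})"
        print(f"{host}: {status}")
```

The engineer’s job shifts from doing the patching to launching and reviewing it – which is exactly where the multiplier comes from.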

Instead of it taking 8000 engineer-hours to update all 1000 servers, it’s now only 200. Your 4 engineer patching team can now complete their update cycle in well under 2 weeks. What had taken a full year, is now being measured in days or weeks.

The “return on investment” at the abstraction level of the engineer is that they have each been “given back” ~1900 hours a year to work on other things (which helps make them promotable). The team’s manager’s ROI is that >90% of his team’s time is now available for new/different tasks (like patching a new OS). The CIO sees an ROI of 7800 FTE hours no longer being expended – which means the business’ need for expansion, with an associated doubling of the server estate, is now feasible without having to double his patching staff.
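The arithmetic behind those figures, as a quick sanity check (the 40x factor is the conservative number from above; a ~2,000-hour work year and 40-hour week are assumed):

```python
servers     = 1000
hours_each  = 8        # engineer-hours per server, done by hand
team_size   = 4
parallelism = 40       # conservative multiplier from central tooling
work_year   = 2000     # rough engineer-hours per person per year

before = servers * hours_each        # 8000 engineer-hours per patch cycle
after  = before / parallelism        # 200 engineer-hours per patch cycle

print(before / (team_size * work_year))  # 1.0 – the full year the 4-person team used to need
print(before - after)                    # 7800 FTE hours no longer expended
print((before - after) / team_size)      # ~1950 hours handed back to each engineer
print(after / team_size / 40)            # ~1.25 weeks of patching per engineer per cycle
```

Which is where the “given back ~1900 hours”, “>90% of the team’s time”, and “7800 FTE hours” numbers come from.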

Every abstraction is like that – there is a different ROI for a taxi driver on his car “just working” than there is for a hot rodder who’s truly getting under the hood. But it’s still an ROI – one is getting his return by being able to ferry passengers for pay, and the other by souping-up his ride to be just that little (or lot) bit better. The ROI of a 1% fuel economy improvement by the fuel injector system being made incrementally smarter in conjunction with a lighter engine block might only be measured in cents per hour driving – but for FedEx, that will be millions of dollars a year in either unburned fuel, or additional deliveries (both of which are good for their bottom line).

Or consider the abstraction of talking about financial statements (be they for companies or governments) – they [almost] never list revenues and expenditures down to the penny. Not because they’re being lazy, but because the scale of values being reported do not lend themselves well to such mundane thinking. When a company like Apple has $178 billion in cash on hand, no one is going to care if it’s really $178,000,102,034.17 or $177,982,117,730.49. At that scale, $178 billion is a close-enough approximation to reality. And that’s what an abstraction is – it is an approximation to the reality being expressed down one level. It’s good enough to say that you start your car by turning the key – if you’re not an automotive engineer or mechanic. It’s good enough to approximate the US Federal Budget at $3.9 trillion or maybe $3900 billion (whether it should be that high is a totally different topic). But it’s not a good approximation to say $3,895,736,835,150.91 – it may be precise, but it’s not helpful.

I guess that means the answer to the question I titled this post with is, “the level of abstraction appropriate is directly related to your ‘function’ in relation to the system at hand.” The abstraction needs to be helpful – the minute it is no longer helpful (by being either too approximate, or too precise), it needs to be refined and focused for the audience receiving it.

*see what I did there?

the loss of the shared social experience

On a recent trip I met up with an old friend and his wife for dinner. As conversation progressed, I mentioned my wife and I have been watching M*A*S*H on Netflix. Waxing nostalgic for a moment, he told me that his parents let him stay up to watch the series finale in 1983.

And then he said something that I found fascinating: “you know, there’s nothing like that today – there’s no shared social experience you can expect to talk about the next day with your coworkers, friends, etc.”

And it’s true – sure, there are local shared experiences (NCAA games, etc), but there is nothing in today’s society that brings us all to the same place (even separately) like TV did in the pre-streaming and -DVR era.

There used to be top-rated programs that you could reasonably expect a high percentage of your coworkers watched (M*A*S*H, The Cosby Show, ER, Friends, Cheers, All in the Family, Family Matters, etc). There still are highly-rated programs – but they’re very, very different from what they used to be. Some of this, of course, comes from the rise of cable networks’ programming efforts (The Sopranos, Mythbusters, Mad Men, Breaking Bad, Game of Thrones, Stargate SG-1, The Walking Dead, Switched at Birth, The Secret Life of the American Teenager, Outlander, and more). Some of this comes from the efforts of streaming providers (House of Cards, Orange Is the New Black, Farmed and Dangerous, etc). And there are still great shows on broadcast TV (Once Upon a Time, CSI, Person of Interest, etc). But they’re different than what they used to be.

Not different merely because of better acting (sometimes it’s worse), better writing (same critique applies), better filming (Revolution – I’m looking at you as the anti-example of good filming, and why you got canceled after just two seasons), better marketing, or better special effects.

But mostly they’re different because we, as a culture, have decided we do not want to be tied to an arbitrary timetable dictated to us by the Powers That Be™ at The Networks™. With the rise in un-tie-ability given to consumers – first with VCRs, then VCR+, then TiVo, and now DVRs and streaming options everywhere – we have ways of compressing and massaging our watching to fit our personal schedules, even though we’ve been getting bilked on film time along the way (an “hour long” program in the early 80s was 48-49 minutes of screen time; today it’s ~42 minutes – that’s a huge amount of added advertising time). Can’t be home in time to catch insert-name-of-series-here? No problem! It’ll be on Hulu or Amazon Prime tomorrow, or your DVR will catch it for you. Or it’ll be on Netflix in a few months.

And if you get it on Amazon Prime or Netflix, there’ll be no ads. Hulu may have a few, but they’re still shorter than what was shown on ABC the night before.

It used to be that the Super Bowl was a major sporting event at the beginning of each year, when the culmination of 17 weeks of regular-season play, and a few playoff games, showed us just who was the best football team out there.

No more.

Now the Super Bowl is a chance to see new commercials from scores of companies – each of whom has spent millions just to get the ad on TV, let alone film it – and maybe catch a little bit of a game on the side. (Unless you happen to care about the Seattle Seahawks – but I digress.)

Before widespread adoption of TV, the shared social experience would’ve had to have surrounded radio programs (perhaps The Lone Ranger, or Orson Welles’ production of The War of the Worlds).

And prior to widespread radio, what shared social experiences did society (not just little pockets) have? Gladiatorial combat in ancient Rome? The Olympic Games?

Which really means that shared social experiences a la the M*A*S*H finale are an historical aberration – something that came to be less than a century ago, and which lasted less than a century. Something as fleeting as the reign of clipper ships in transport, from a grand historical perspective.

And maybe that’s a Good Thing™ – society being drawn together over common experiences isn’t, necessarily, bad: but is it necessarily good? That’s the question that has been bugging me these last couple weeks – and which probably will for some time to come.

What say you – is it a loss, a gain, or just a fact that these shared social experiences are no more?

please reply at top

There is a constant war among top-repliers, bottom-repliers, and inline-repliers.

If you’re replying to an email, reply at the top. Unless there is some overarching need to reply inline (hint – it is very very rare).

Bottom-replying makes me have to reread all the crap that has been left from previous messages before I get to what you wrote – what a phenomenal waste of time*!

Just reply at the top. Like every sane person does.


*Yes, you should also trim whatever you don’t need when you reply – but that’s another story.