Tag Archives: networking

watch your mtu size in openstack

For a variety of reasons related to package versions and support contracts, I was unable to use the Red Hat built KVM image of RHEL 7.2 for a recent project. (The saga of that is worthy of its own post – and maybe I’ll write it at some point. But not today.)

First thing I tried was to build an OpenStack instance off of the RHEL 7.2 media ISO directly – but that didn’t work.

So I built a small VM on another KVM host – with virt-viewer, virt-manager, etc – got it set up and ready to go, then went through the process of converting the qcow image to raw, and plopping it into the OpenStack image inventory.
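(For anyone following along at home, the convert-and-upload step looks roughly like the following – a sketch, with made-up file and image names, using qemu-img and the OpenStack client:)

    # convert the qcow2 disk to raw, then upload it to the image service
    qemu-img convert -f qcow2 -O raw rhel72.qcow2 rhel72.raw
    openstack image create --disk-format raw --container-format bare \
        --file rhel72.raw rhel-7.2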

Then I deployed the two VMs I need for my project (complete with additional disk space, yada yada yada). So far, so good.

Floating IP assigned to the app server, proper network for both, static configs updated. Life is good.

Except I couldn’t ssh out from the newly-minted servers to anywhere. Or when ssh did go out, it was super laggy.

I could ssh in, but not out. I could scp out (to some locales, but not others), but was not getting nearly the transfer rates I should have been seeing. Pings worked just fine. So did nslookup.

After a couple hours of fruitless searching, I got a hold of one of my coworkers who set up our OpenStack environment: maybe he’d know something.

We spent about another half hour on the phone before he said, “Hey – what’s your MTU set to?” “I dunno – whatever’s default, I guess.” “Try setting it to 1450.”

Why 1450? What’s wrong with the default of 1500? Theoretically, the whole reason defaults are, well, default, is that they should “just work”. In other words, they might not be optimal for your situation, but they should be more-or-less optimalish for most situations.

Unless, that is, you’re in a not-so-vanilla “layered networking” environment (apologies if “layered networking” is the wrong term – it’s the one my coworker used, and it made sense to me; networking isn’t really my forte). Fortunately, my colleague had seen an almost-identical problem several months ago playing with Docker containers. The maximum transmission unit is the cap on the size of a network packet, which is important to set correctly in a TCP/IP environment – otherwise devices on the network won’t know how much data they can send each other at once.

1500 bytes is the default for most systems, as I mentioned before, but when you have a container / virtual machine / etc hosted on a parent system whose MTU is set to 1500, the guest cannot use as large an MTU, because the host needs room in each packet to attach whatever extra routing bits it uses to identify which guest gets what data when it comes back. For small network requests, like the ones ping sends, you’re nowhere near the MTU, so they work without a hitch.

For larger requests, you can (and will) start running into headroom issues – so either the guest MTU needs to shrink, or the host’s needs to grow.
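(If you want to see this for yourself before changing anything, you can force ping past the MTU by sending big packets with fragmentation forbidden – a quick sketch, assuming a Linux guest and a reachable host of your choosing:)

    # 1472 bytes of data + 28 bytes of ICMP/IP headers = a full 1500-byte packet
    ping -M do -s 1472 some.remote.host   # stalls or errors if the path can't carry 1500
    ping -M do -s 1422 some.remote.host   # 1422 + 28 = 1450 - these should get through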

Growing the host’s MTU isn’t a great option in a live environment – because it could disrupt running guests. So shrinking the guest MTU needs to be done instead.
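For the record, dropping the MTU on a RHEL 7 guest is quick – something like the following, assuming your interface is eth0 (adjust to taste):

    # takes effect immediately, but won't survive a reboot
    ip link set dev eth0 mtu 1450

    # make it stick: add MTU=1450 to the interface's config file, then bounce it
    echo "MTU=1450" >> /etc/sysconfig/network-scripts/ifcfg-eth0
    ifdown eth0 && ifup eth0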

Hopefully this helps somebody else.

Now you know, and knowing is half the battle.

plogging?

Wired Magazine recently had an article on the rise of “plogging”.

By their definition, “plogging” is “PLatform blOGGING” – or blogging as part of a network/site/service (DZone, LinkedIn, Medium, Facebook, etc) instead of running your own blog somewhere (WordPress.com, Blogger, self-hosted WordPress, etc).

This seems to be a modern representation of what newspapers, magazines, etc used to be (and still are, to some extent) – a place where you can find your favorite authors all in one place.

There certainly are benefits to this model – but there is also a loss of a sense of personal connection. As I wrote before, the facebookification of society has some pros and cons. One of those cons is that companies (and now, apparently, writers) are increasingly branding on the platform/network instead of via their own site and service.

The instant network aspect of “plogging” has appeal – otherwise why would Sett exist? Or StumbleUpon? Or any of the myriad other networking sites and services.

Heck, remember back in the Good Ole Days when you had link sharing and webrings?

This also plays into the walled garden effect that AOL had 20 years ago: as I wrote yesterday, Facebook is merely the new AOL. Writing in an established (or establishing) network makes a great deal of sense – an “instant” audience, the “rising tide” effect, etc.

But it also means you are bound, for better or worse, to the rules and regulations, guidelines and gaffes of the site/service you decide to write on and with. Community building is hard. Administering built communities is hard. And it doesn’t get any easier by deciding to go all-in with a “platform”. (It may not be any harder, either – but it’s not measurably easier by any stretch.)

Forum tools have been around since the dawn of time. And every one has had its rules. From the Areopagus to Stack Overflow, synagogues to the Supreme Court, every community has its rules. Rules which you may either choose to abide by, petition to change, or ignore (to your “detriment”, at least in the context of continuing to participate in said community).

I guess it’s like they say, “what’s new is old again”.

apps on the network

{This started as a Disqus reply to Eric’s post. Then I realized blog comments shouldn’t be longer than the original post 🙂 }

The app-on-network concept is fascinating: and one I think I’ve thought about previously, too.

Hypothetically, all “social networks” should have the same connections: yet there are dozens upon dozens (I use at least 4 – probably more that I don’t even realize). And some folks push the same content to all of them, while others (including, generally, myself) try to target our shares and such to specific locations (perhaps driving some items to multiple places with tools like IFTTT).

Google’s mistake with Google+ was thinking they needed to “beat” Facebook: that’s not going to happen. As Paul Graham notes:

“If you want to take on a problem as big as the ones I’ve discussed, don’t make a direct frontal attack on it. Don’t say, for example, that you’re going to replace email. If you do that you raise too many expectations…Maybe it’s a bad idea to have really big ambitions initially, because the bigger your ambition, the longer it’s going to take, and the further you project into the future, the more likely you’ll get it wrong…the way to use these big ideas is not to try to identify a precise point in the future and then ask yourself how to get from here to there, like the popular image of a visionary.”

That’s where folks who get called things like The Idea Guy™ go awry: instead of asking questions, they try to come up with ideas – like these 999. And if they can’t/don’t, they think they’ve failed.

Social networks should be places where our actual social interactions can be modeled effectively. Yet they turn into popularity contests. And bitch fests. And rant centers. Since they tend towards the asymmetric end of communication, they become fire-and-forget locales, or places where we feel the incessant need to be right. All the time. (Add services like Klout and Kred, and it gets even worse.)

I would love to see a universal, portable, open network like the one Eric describes. All the applications we think run on social networks (like Farmville) don’t. They run on top of another app which runs on “the network”.

Layers upon layers lead to the age-old problem of too many standards, and crazy amounts of abstraction. Peeling back the layers of the apps atop the network could instead give us the chance to have a singular network where types of connections could be tagged (work, fun, school, family, etc, etc – the aspect of G+ that everyone likes most: “circles”). Then the app takes you to the right subset of your network.
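To make that a little more concrete, here’s a toy sketch (the names and tags are made up, and this is nobody’s real API) of “one network, tagged connections, apps as filters”:

    # one contact list; each connection carries tags, and an "app" is just a filter
    connections = {
        "alice": {"work", "school"},
        "bob":   {"family"},
        "carol": {"work", "fun"},
    }

    def app_view(tag):
        """What a purpose-specific app would show: the subset of the network with this tag."""
        return sorted(name for name, tags in connections.items() if tag in tags)

    print(app_view("work"))   # ['alice', 'carol']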

Of course – this all leads to a massive problem: security.

If there is only One True Social Network, we all end up entrusting everything we put there to be “safe”. And while some of us still follow the old internet mantra, “if you wouldn’t put it on a billboard, don’t put it on a website,” the vast majority of people – seemingly especially those raised coincident to technology’s ubiquitization – think that if they put it somewhere “safe” (like Facebook), it should be “private”.

After all, the One True Social Network would also be a social engineer’s or identity thief’s Holy Grail – subversive access to all of someone’s personal information would be their nirvana.

And that, I think, is the crux of the matter: regardless of what network (or, to use Eric’s terminology, what app-atop-the-network) we use, privacy, safety, and security are all forefront problems.

Solve THAT, and you solve everything.

Or maybe you just decide privacy/security doesn’t matter, and make it all public.

community building is hard

Establishing and building a community around a common interest is hard.

After exhausting your network of friends, coworkers, neighbors, etc – the only way of getting new folks into the community is to aggressively campaign and advertise to them.

Let’s say you’re a technical user group (like a couple of the ones I’m a part of). And every month you have about 5-7 folks who show up on the Appointed Day™ for the regular meetup. You can either be satisfied with the size of the group, or you can try to grow it.

Growing it, however, is never easy: there are scheduling conflicts, personality clashes, lack of contacts, etc.

What are the best ways – or even just “ways”, ditch the “best” – of growing a community after you have gone through everyone you know?

a smart[ish] dhcpd

After running into some wacky networking issues at a recent customer engagement, I had a brainstorm about a smart[ish] DHCPd server that could work in conjunction with DNS and static IP assignment to more intelligently fill subnet space.

Here’s the scenario we had:

Lab network space was fairly heavily populated with statically-assigned addresses – in a /23 network, i.e. ~510 usable addresses on the subnet, about 420 addresses were in use.

Not all statically-assigned IPs were registered in DNS.

The in-use addresses did not leave much contiguous, unused space (little groups of 2 or 4 addresses open – not ~80 in a row, or even a couple of small batches of 20-30).

DNS was running on a Windows 2012 host.

DHCPd (ISC’s) was set up on a RHEL 5 x64 Linux machine.

The problem with using the ISC DHCPd server, as supplied by HPSA, is that while you can configure multiple subnets to hand out addresses on, you cannot configure multiple ranges on a single subnet. So we were unable to effectively utilize all the little gaps in assigned addresses.
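For reference, a subnet declaration in dhcpd.conf looks something like this (the addresses are made up) – and the setup we had only let us carve out a single contiguous range like this per subnet:

    subnet 10.0.0.0 netmask 255.255.254.0 {
        option routers 10.0.0.1;
        # one contiguous block of dynamically-assignable addresses
        range 10.0.1.100 10.0.1.150;
    }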

Maybe this is something DNS/DHCP can already do from a Windows DC, but I have an idea for how DHCPd could work a little smarter (there’s a rough sketch after the list):

  • give a very large range on a given subnet (perhaps all but the gateway and broadcast addresses)
  • before handing an address out, in addition to checking the leases file to see whether it is free, check against DNS to see whether it is already in use
  • if an address is in use because it is static, update the leases file with the statically-assigned information as if it were assigned dynamically – but give it an unusually-long lease time (e.g. 1 month instead of 4 hours)
  • on a periodic basis (perhaps once an hour, day, week – it should be configurable), scan the whole subnet for in-use addresses (via something like nmap and checking against DNS)
    • remove all lease file entries for unused/available IPs
    • update lease file entries for used/unavailable IPs, if not already recorded
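Here’s a very rough sketch of the “scan and reconcile” piece in Python – purely hypothetical (made-up subnet and lease times, an in-memory dict standing in for the real leases file, ping standing in for nmap), just to show the shape of the idea:

    import ipaddress
    import socket
    import subprocess
    import time

    SUBNET = ipaddress.ip_network("10.0.0.0/23")   # example subnet - not anyone's real lab
    STATIC_LEASE_SECONDS = 30 * 24 * 3600          # ~1 month for "static" entries
    DYNAMIC_LEASE_SECONDS = 4 * 3600               # normal 4-hour dynamic lease

    leases = {}  # ip (str) -> lease expiry timestamp

    def in_dns(ip):
        """Treat a reverse-DNS hit as evidence the address is statically assigned."""
        try:
            socket.gethostbyaddr(ip)
            return True
        except (socket.herror, socket.gaierror):
            return False

    def responds_to_ping(ip):
        """Cheap liveness probe; a real implementation might shell out to nmap instead."""
        return subprocess.call(["ping", "-c", "1", "-W", "1", ip],
                               stdout=subprocess.DEVNULL,
                               stderr=subprocess.DEVNULL) == 0

    def reconcile():
        """The periodic sweep: record in-use addresses, drop expired entries for free ones."""
        now = time.time()
        for host in SUBNET.hosts():
            ip = str(host)
            if in_dns(ip):
                # statically assigned: record it with an unusually long lease
                leases[ip] = now + STATIC_LEASE_SECONDS
            elif responds_to_ping(ip):
                # in use but not in DNS: make sure it's recorded so it isn't offered
                leases.setdefault(ip, now + DYNAMIC_LEASE_SECONDS)
            elif ip in leases and leases[ip] < now:
                # nothing answered and the lease expired: free it up
                del leases[ip]

    def next_free_address():
        """What the server would hand out: the first address with no live lease and no DNS entry."""
        now = time.time()
        for host in SUBNET.hosts():
            ip = str(host)
            if leases.get(ip, 0) < now and not in_dns(ip):
                leases[ip] = now + DYNAMIC_LEASE_SECONDS
                return ip
        return None

    if __name__ == "__main__":
        reconcile()
        print("next free address:", next_free_address())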

This would have the advantage of intelligently filling address gaps on a given subnet, and would require less coordination between teams that want/need to be able to use DHCP and those that need/want static addresses.

Or maybe what I’m describing has already been solved, and I just don’t know how to find it.

datacenter bandwidth charges can be crazy

Why are colocation bandwidth rates so crazy expensive? In an era of ubiquitous broadband to the home, why are connections in datacenters still so expensive?

I see charges on the per-GB-transferred scale, or flat-rate charges per Mbps of bandwidth. I have yet to figure out why these rates vary so wildly, even between datacenters in the same geographical region. It’s not like it costs Sprint or Level3 [noticeably] more for additional systems to utilize the fiber they’ve already laid. Yet costs go up every year, even though available speeds haven’t shown major improvements in the last few years.

So I ask, what causes these charges?