Category Archives: personal

welcome, zebediah!

We got to meet the latest addition to our family a few days ago, on the 5th. For the second time in under a year, we had the last-minute opportunity to adopt a baby boy. Last year we welcomed a 3.5 month-old, and this year we have a newborn.

He’s had some complications, and been in the NICU since a few hours after birth. However, he’s started to make some good progress, and while not out of the woods, is on his way to being able to come home in, hopefully, a week.

Zebediah joins big brother Abijah, and brings our family from three to four.

vision for lexington

Over the past 5 years, I have witnessed some of the growth Lexington KY has started to undergo: the population of the city proper has grown from about 260,000 in 2000 to 295,000 in 2010 to an estimated 315,000 in 2015.

While there seems to be something of a plan/vision for the downtown area, the majority of Lexington (and its urban area) seems to be more-or-less ignored from an infrastructural perspective (the last update was in 2009, and only for a small part of Lexington).

Public Transit

The public transit system, as hard as I am sure Lextran employees work, is underutilized, poorly routed, and offers no way to connect into it from outside Lexington (full route map (PDF)).

In comparison to where I grew up, the Capital District of New York, the public transit system is both too inwardly-focused and too poorly-promoted to be useful for most Lexingtonians. CDTA, for example, has connectors to cities and towns beyond just Albany. You can start where I grew up in Cohoes (about 10 miles north of Albany), and get more-or-less anywhere in the greater Capital District by bus. It might take a while, but you can get there (or get close). There are also several Park’n’Ride locations for commuters to take advantage of.

Lextran doesn’t offer anything to connect to Nicholasville, Versailles, or Georgetown. With workers commuting-in from those locales (and more – some come from Richmond or Frankfort, or go in the opposite direction), one would think urban planners would want to alleviate traffic congestion. But there is nothing visible along those lines.

Lost Neighborhoods

There are large chunks of Lexington where the houses are crumbling, crime rates are higher than the rest of the city, and the citizens living there are being [almost] actively avoided and/or neglected by the city.

Some limited business development has gone into these neighborhoods (like West Sixth Brewing), but as a whole they are becoming places “to be avoided”, rather than places where anyone is taking time and effort to improve, promote, and generally line-up with the rest of the city.

Yes, everywhere has regions that folks try to avoid, but the lost and dying neighborhoods in Lexington are saddening.

Walking

Lexington is – in places – a walkable city, but for most of the residential areas, it was/is up to the developers of the subdivisions as to whether or not there are sidewalks. And if they weren’t put in then, getting them done now is like pulling teeth.

Being able to walk to many/most places (or types of places) you might want to go is one of the major hallmarks of a city. One that is only exhibited in pockets in Lexington.

It should even be a hallmark of shopping areas – but look at Hamburg Pavilion. A shopping, housing, and services mini town (apartments, condos, houses, banking, education, restaurants, clothes, etc), Hamburg is one of the regional Meccas for folks who want to do major shopping trips or eat at nice restaurants. The map (PDF) – which only shows part of the Hamburg complex – demonstrates that while pockets of the center are walkable, getting from one shopping/eating/entertainment pod to another requires walking across large parking lots – impractical if shopping with children, or when carrying more than a couple bags.

Crosswalks and lighted crossings on major roads, in some cases, leave mere seconds to spare before the light changes – if you’re moving at a crisp clip. Add a stroller, collapsible shopping cart, or heavy book bag, and several crossings become “safe” only if drivers see you are already crossing and wait for you. Stories of pedestrians being hit, like this one, are far too common in local news media.

Employment

There is no lack of employment opportunities in the Lexington area – there are 15 major employers in Lexington, hundreds of small-to-medium businesses running the gamut of offerings from auto dealers to lawn care, IT to healthcare, equine products, home construction, etc; and hundreds of national chains (retail, restaurants, services, etc) are here, too.

Finding said employment can be difficult, though. There are some services like In2Lex which send newsletters with employment opportunities – but if you don’t know about them, finding work in the area isn’t as easy as one would think the Chamber of Commerce would want it to be. Yes, employers need to advertise their openings, but even finding lists of companies in the area is difficult.

Connectivity to Other Areas

Direct flights into and out of Lexington Bluegrass Airport reach 15 major metro areas across half the country.

Interstates 75 and 64 cross just outside city limits.

The Underlying Problem

The major problem Lexington seems to have is that it doesn’t know it’s become a decent-sized metropolitan area. There are about 500,000 people in the MSA, or about 12% of the population of the whole state. It’s a little under half the size of the Louisville MSA (which includes a couple counties in Indiana). There are 8 colleges/universities in Lexington alone (PDF), and 15 under an hour from downtown.

To paraphrase Reno NV’s slogan, Lexington is the biggest little town in Kentucky. The last major infrastructural improvement done was Man O’ War Boulevard, completed in 1988 – more than a quarter century past. There were improvements done to New Circle Road in the 1990s, but that ended over 15 years ago. Lexington proper was 30% smaller in 1990 than it is now (225,000 vs 315,000).

Lexington’s 65+ year-old Urban Service Area, while great to maintain the old character of the city and region, hasn’t been reviewed since 1997. A few related changes have been added since, but the last of those was in 2001.

One and a half decades since major infrastructural improvements. Activities like the much-delayed CentrePointe (which I agree doesn’t need to be done in the manner originally planned), the begun Summit, and other development projects may, eventually, be good for business and the city as a whole, but there has been little-to-no consideration for what will happen with traffic. Traffic and general accessibility are among the core responsibilities of local government.

The diverging diamond interchange installed a couple years back on Harrodsburg Rd was a good improvement to that intersection. But it was only good for that intersection. It alleviated some traffic concerns, crashes, and complications, but only on one road.

Lexington needs leadership that sees where the city not only was 10, 25, 50 years ago, but where it is now and where it wants to be in another 10, 20, 50 years.

My Vision

My vision for Lexington, infrastructurally, includes interchange improvements / rebuilds for more New Circle Road exits. Exit 7, Leestown Road, grants access to Coke, FedEx, Masterson Station, the VA hospital, a BCTC campus, and more. Big Ass Fans is between exit 8 from New Circle and exit 118 of I-75. Exit 9 from New Circle more-or-less exists to provide Lexmark with a way for their employees to arrive. The major employers in the area are great for economic stability. But with traffic congestion, getting into and out of them needs to be as smooth as possible.

West Sixth Brewing and Transylvania University are two of the highlights in an otherwise aging, dying, and lost area of the city. There needs to be a public commitment on the part of both the city and the citizenry to not allow the city to become segregated – not segregated based on skin tone, but on economic status.

Bryan Station High School has a reputation, deservedly or not, of being one of the worst high schools in the region, because of the dying/lost status of the parts of town it draws from. You can buy a 2 bedroom, 1 bath, 1300 square foot house for under $20,000 near Bryan Station. It needs a little bit of work, but what does that say about the neighborhood?

The leadership of Lexington seems to be ignoring parts of the city that are going downhill, preferring instead to focus on regions that are going up. Ignoring dying parts of the city from an infrastructural perspective isn’t going to make them any better – they will only drag more of the city down with them. As a citizen and a homeowner, I want to see my city do well.

I do not like paying taxes any more than anyone else, but I do like seeing the city taking initiative and working to both heal itself and take steps towards attracting future generations, businesses, and more that we don’t even know are coming.

Lexington has great promise – it is growing, expanding, and burgeoning. But if its leadership – political, business, and citizenry – doesn’t take the time, effort, and money to ensure it’s prepared for this growth, it will become a morass to traverse, live in, and do business with.


Some more interesting regional data (PDF)

hey yahoo! sports – why not always post the magic number for every team?

Since the magic number (and I’ll take the example of baseball, because while I don’t get to watch them much, I do follow the Mets) is so easy to calculate, why not post it on the standings as soon as there have been games played?

This would be a good use of technology relative to baseball (or any sport).

In case you’re wondering, the math for the magic number is as follows:

G + 1 − WA − LB

where

  • G is the total number of games in the season
  • WA is the number of wins that Team A has in the season
  • LB is the number of losses that Team B has in the season

As of today, the magic number for the Mets is 162 + 1 − 12 − 6, or 145.
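Since it’s that easy, the same calculation works as a quick shell snippet (using today’s Mets numbers from above):

```shell
# G = games in the season, WA = Team A's wins, LB = Team B's losses
G=162; WA=12; LB=6
MAGIC=$((G + 1 - WA - LB))
echo "Mets magic number: $MAGIC"   # prints 145
```

Any site showing standings already has all three inputs on hand, so there’s no excuse not to show it from day one.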

why do i use digital ocean?

Besides the fact that I have a referral code, I think Digital Ocean has done a great job of making an accessible, affordable cloud environment for folks (like me) to spin-up and -down servers for trying new things out.

You can’t beat an average of 55 seconds to get a new server.

There are other great hosting options out there. I know folks who work at and/or use Rackspace. And AWS. Or Chunk Host.

They all have their time and place, but for me, DO has been the best option for much of what I want to do.

Their API is simple and easily-accessed, billing is straight-forward, and you can make your own templates to deploy servers from. For example, I could make a template for MooseFS Chunk servers so I could just add new ones whenever I need them to the cluster.

And I can expand/contract servers as needed, too.
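As a sketch of how simple the API is, here’s one way to build the JSON body for a droplet-create call against API v2 – the name/region/size/image values and the `DO_TOKEN` variable are placeholders I picked for illustration, not anything from my actual account:

```shell
# build the JSON body for a POST to /v2/droplets
new_droplet_json() {
  printf '{"name":"%s","region":"%s","size":"%s","image":"%s"}' "$1" "$2" "$3" "$4"
}

new_droplet_json mfs-chunk-3 nyc3 512mb centos-6-5-x64

# the actual call would look like this (assumes DO_TOKEN holds a valid API token):
# curl -X POST "https://api.digitalocean.com/v2/droplets" \
#   -H "Content-Type: application/json" \
#   -H "Authorization: Bearer $DO_TOKEN" \
#   -d "$(new_droplet_json mfs-chunk-3 nyc3 512mb centos-6-5-x64)"
```

Wrap that in a loop and you have your own “add another Chunk server” button.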

create your own clustered cloud storage system with moosefs and pydio

This started-off as a how-to on installing ownCloud. But their own installation procedure doesn’t work for the 8.0x release on CentOS 6.

Most of you know I’ve been interested in distributed / cloud storage for quite some time.

And that I find MooseFS to be fascinating. As of 2.0, MooseFS comes in two flavors – the Community Edition, and the Professional Edition. This how-to uses the CE flavor, but it’d work with the Pro version, too.

I started with the MooseFS install guide (pdf) and the Pydio quick start steps. And, as usual, I used Digital Ocean to host the cluster while I built it out. Of course, this will work with any hosting provider (even internal to your data center using something like Backblaze storage pods – I chose Digital Ocean because they have hourly pricing; Chunk Host is a “better” deal if you don’t care about hourly pricing). In many ways, this how-to is in response to my rather hackish (though quite functional) solution for offering file storage in an otherwise-overloaded lab several years back. Make sure you have “private networking” (or equivalent) enabled for your VMs – don’t want to be sharing-out your MooseFS storage to just anyone 🙂

Also, as I’ve done in other how-tos on this blog, I’m using CentOS Linux for my distro of choice (because I’m an RHEL guy, and it shortens my learning curve).

With the introduction out of the way, here’s what I did – and what you can do, too:

Preliminaries

  • spin-up at least 3 (4 would be better) systems (for purposes of the how-to, low-resource (512M RAM, 20G storage) machines were used; use the biggest [storage] machines you can for Chunk Servers, and the biggest [RAM] machine(s) you can for the Master(s))
    • 1 for the MooseFS Master Server (if using Pro, you want at least 2)
    • (1 or more for metaloggers – only for the Community edition, and not required)
    • 2+ for MooseFS Chunk Servers (minimum required to ensure data is available in the event of a Chunk failure)
    • 1 for Pydio (while this might be able to co-reside with the MooseFS Master – this tutorial uses a fully-separate / tiered approach)
  • make sure the servers are either all in the same data center, or that you’re not paying for inter-DC traffic
  • make sure you have “private networking” (or equivalent) enabled so you do not share your MooseFS mounts to the world
  • make sure you have some swap space on every server (may not matter, but I prefer “safe” to “sorry”) – I covered how to do this in the etherpad tutorial
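As a quick refresher (the etherpad tutorial has the full walk-through), adding a swap file boils down to something like this – the /tmp/swapfile path and 512M size are purely for illustration; use a path and size that fit your box:

```shell
# create a 512M file of zeros to use as swap
dd if=/dev/zero of=/tmp/swapfile bs=1M count=512
# swap files should not be world-readable
chmod 600 /tmp/swapfile
# write the swap signature
mkswap /tmp/swapfile
# activate it (needs root; add an fstab entry if you want it back after reboot)
# swapon /tmp/swapfile
```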

MooseFS Master

  • install MooseFS master
    • curl "http://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS && curl "http://ppa.moosefs.com/MooseFS-stable-rhsysv.repo" > /etc/yum.repos.d/MooseFS.repo && yum -y install moosefs-master moosefs-cli
  • make changes to /etc/mfs/mfsexports.cfg
    • # Allow everything but “meta”.
    • #* / rw,alldirs,maproot=0
    • 10.132.0.0/16 / rw,alldirs,maproot=0
  • add hostname entry to /etc/hosts
    • 10.132.41.59 mfsmaster
  • start master
    • service moosefs-master start
  • see how much space is available to you (none to start)
    • mfscli -SIN

MooseFS Chunk(s)

  • install MooseFS chunk
    • curl "http://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS && curl "http://ppa.moosefs.com/MooseFS-stable-rhsysv.repo" > /etc/yum.repos.d/MooseFS.repo && yum -y install moosefs-chunkserver
  • add the mfsmaster line from previous steps to /etc/hosts
    • cat >> /etc/hosts
    • 10.132.41.59 mfsmaster
    • <ctrl>-d
  • make your share directory
    • mkdir /mnt/mfschunks
  • add your freshly-made directory to the end of /etc/mfs/mfshdd.cfg, with a size you want to share
    • /mnt/mfschunks 15GiB
  • start the chunk
    • service moosefs-chunkserver start
  • on the MooseFS master, make sure your new space has become available
    • mfscli -SIN
  • repeat for as many chunks as you want to have

Pydio / MooseFS Client

  • install MooseFS client
    • curl "http://ppa.moosefs.com/RPM-GPG-KEY-MooseFS" > /etc/pki/rpm-gpg/RPM-GPG-KEY-MooseFS && curl "http://ppa.moosefs.com/MooseFS-stable-rhsysv.repo" > /etc/yum.repos.d/MooseFS.repo && yum -y install moosefs-client
  • add the mfsmaster line from previous steps to /etc/hosts
    • cat >> /etc/hosts
    • 10.132.41.59 mfsmaster
    • <ctrl>-d
  • mount the MooseFS share somewhere where Pydio will be able to get to it later (we’ll use a bind mount for that in a while)
    • mkdir /mnt/moosefs && mfsmount /mnt/moosefs -H mfsmaster
  • install Apache and PHP
    • yum -y install httpd
    • yum -y install php-common
      • you need more than this, and hopefully Apache grabs it for you – I installed Nginx then uninstalled it, which brought-in all the PHP stuff I needed (and probably stuff I didn’t)
  • modify php.ini to support large files (Pydio is exclusively a webapp for now)
    • memory_limit = 384M
    • post_max_size = 256M
    • upload_max_filesize = 200M
  • grab Pydio
    • you can use either the yum method, or the manual – I picked manual
    • curl -O http://hivelocity.dl.sourceforge.net/project/ajaxplorer/pydio/stable-channel/6.0.6/pydio-core-6.0.6.tar.gz
      • URL correct as of publish date of this blog post (-O saves the tarball instead of dumping it to stdout)
  • extract Pydio tgz to /var/www/html
  • move everything in /var/www/html/data to /mnt/moosefs
  • bind mount /mnt/moosefs to /var/www/html/data
    • mount --bind /mnt/moosefs /var/www/html/data
  • set ownership of all Pydio files to apache:apache
    • cd /var/www/html && chown -R apache:apache *
    • note – this will give an error; this is “ok” – but don’t leave it like this (good enough for a how-to, not production)
  • start Pydio wizard
  • fill-in forms as they say they should be (admin, etc)
    • I picked “No DB” for this tutorial – you should use a database if you want to roll this out “for real”
  • login and start using it
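One thing the steps above don’t cover: neither the mfsmount nor the bind mount will survive a reboot. Entries along these lines in /etc/fstab make them persistent – this assumes /mnt/moosefs and /var/www/html/data as the mount points (check the mfsmount docs for the exact option names in your MooseFS version):

```
# /etc/fstab
mfsmount      /mnt/moosefs        fuse  defaults,mfsmaster=mfsmaster,_netdev  0 0
/mnt/moosefs  /var/www/html/data  none  bind                                  0 0
```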


Now what?

Why would you want to do this? Maybe you need an in-house shared/shareable storage environment for your company / organization / school / etc. Maybe you’re just a geek who likes to play with new things. Or maybe you want to get into the reselling business, and being able to offer a redundant, clustered, cloud, on-demand type storage service is something you, or your customers, would find profitable.

Caveats of the above how-to:

  • nothing about this example is “production-level” in any manner (I used Digital Ocean droplets at the very small end of the spectrum (512M memory, 20G storage, 1 CPU))
    • there is a [somewhat outdated] sizing guide for ownCloud (pdf) that shows just how much it wants for resources in anything other than a toy deployment
    • Pydio is pretty light on its basic requirements – which also helped this how-to out
    • while MooseFS is leaner when it comes to system requirements, it still shouldn’t be nerfed by being stuck on small machines
  • you shouldn’t be managing hostnames via /etc/hosts – you should be using DNS
    • DNS settings are far more than I wanted to deal with in this tutorial
  • security has, intentionally, been ignored in this how-to
    • just like verifying your inputs is ignored in the vast majority of programming classes, I ignored security considerations (other than putting the MooseFS servers on non-public-facing IPs)
    • don’t be dumb about security – it’s a real issue, and one you need to plan-in from the very start
      • DO encrypt your file systems
      • DO ensure your passwords are complex (and used rarely)
      • DO use key-based authentication wherever possible
      • DON’T be naive
  • you should join the MooseFS mailing list and the Pydio forum
    • the communities are excellent, and have been extremely helpful to me, even as a lurker
  • I cannot answer more than basic questions about any of the tools used herein
  • why I picked what I picked and did it the way I did
    • I picked MooseFS because it seems the easiest to run
    • I picked Pydio because the ownCloud docs were borked for the 8.0x release on CentOS 6 – and it seems better than alternatives I could find (Seafile, etc) for this tutorial
    • I wanted to use ownCloud because it has clients for everywhere (iOS, Android, web, etc)
    • I have no affiliation with either MooseFS or Pydio beyond thinking they’re cool
    • I like learning new things and showing them off to others

Final thoughts

Please go make this better and show-off what you did that was smarter, more efficient, cheaper, faster, etc. Turn it into something you could deploy as an AMI on AWS. Or Docker containers. Or something I couldn’t imagine. Everything on this site is licensed under the CC BY 3.0 – have fun with what you find, make it awesomer, and then tell everyone else about it.

I think I’ll give LizardFS a try next time – their architecture is, diagrammatically, identical to the “pro” edition of MooseFS. And it’d be fun to have experience with more than one solution.

wait, what?

My wife has done a far more excellent write-up* than I could hope to – but the short version is that I’m now a dad 🙂

We got an out-of-the-blue call a couple weeks ago that there was a ~3 month old baby boy being put up for adoption, and did we know anyone who would be able / want to adopt him.

“Anyone”? Why yes, yes we did! Us!

Fast forward to this week – after meetings with lawyers, updating our home study with our adoption agency, and more – we got The Call: we could come to NY, since birth mom was scheduled to sign all of her paperwork.

We’ve gone on hold for a few months with regards to adopting from Ethiopia while we bond with our new fella – but that’s totally cool with us 🙂


*please contact me privately for access to our other blog if you don’t remember the login