Along with the difficulties of initially building a good group/community come the hassles of managing said [virtual] community – especially on the book of the face.
I am a coadmin on the Ontario & Western Railways Historical Society Inc Facebook group. My friend Peter is a coadmin of the Linux Mint group.
Something both of us have noticed is the ridiculous spam problem Facebook groups have developed over the past 1-2 years. It’s not a new problem, of course – Stack Overflow has had problems since very early on, too: they published A Theory of Moderation to outline the issues they were seeing, and how they planned to handle them.
The real root of all the spam, though, lies not in technology but in people.
As A Theory of Moderation puts it:

> Even with active community self-regulation, moderators occasionally need to intervene. Moderators are human exception handlers, there to deal with those (hopefully rare) exceptional conditions that should not normally happen, but when they do, they can bring your entire community to a screaming halt – if you don’t have human exception handling in place.
Spam doesn’t arise on its own – it’s all developed by people. Until the people problem of spam can be addressed, it will continue. Sadly, technology, in and of itself, cannot deal with the people problem.
So instead we have human admins and moderators whose [typically volunteer] job is to ensure that the community (or communities) keeps to a general standard, as defined by the community itself. By assuming technology could be built to fix the problem, we’re asking the wrong question: it’s human behavior that needs to be addressed and improved. Technology is wonderful and can aid in the process, but it is no panacea.
Encouragement for moderation teams can come in the form of gamification (the SO model), community accolades, or simply an individual admin’s personal satisfaction.
The drawback is that this task can become so overwhelming – at times, and in places where the community itself won’t do anything about the problem(s) – that those tasked with caring for the community give up: they adopt the view that it’s everyone’s problem, and presume that since it is everyone’s problem, it’s not “theirs”.
What are the solutions to these issues? I can think of a few – but many remain yet unanswered:
1. the community must encourage the admins
   - if the community isn’t doing something to make its admins feel appreciated, the admins will, eventually, leave
2. better tech
   - it’s not possible to solve every problem with technology, but there are certainly many areas that can be improved in this regard
3. community engagement and education
   - seasoned community members and admins alike need to take the time to “mentor” new community members to make sure they stick to the guidelines of that community
   - community members need to be proactive in assisting the moderators when inappropriate items are posted, or conversation degrades below the standards of the group
4. a willingness to say “no”
   - admins and the general community need to be willing to tell some people they are not welcome
   - this should [almost] never be done in a hateful, grudge-bearing manner, but it must be done to ensure the integrity of the community in the long term
5. a willingness to morph
   - the flip side of (4) is that the community needs to be willing, on a regular basis, to:
     - review its own guidelines
     - change / modify rules
     - find new admins
     - welcome new members who aren’t yet versed in the ways of the group (related to (3) above)
I am sure there are many, many more items that can be added to this list. But this is the starting point for every successfully maintained community I’ve ever seen.
What others would you add, or what would you change?
I think you need something that makes spam and crappy posts the user’s problem. Addressing spam and off-topic posts at an individual level could create an incentive to keep quality high, and it avoids the “it’s not my problem” mindset by encouraging everybody to make sure they’re contributing good stuff.
Gamification mechanics help deal with this if you want to go that route (lowering a user’s “score” when they post crap), but you can also limit how often people can post once they’ve become a problem, or use something like Slashdot’s system of collapsing comments on an article so that only the highest-quality stuff is visible and the lowest-quality stuff is hidden by default.
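To make that concrete, here’s a rough sketch of the score-and-collapse idea in Python – all the names (Post, DOWNVOTE_PENALTY, VISIBILITY_THRESHOLD) are made up for illustration, and real platforms use far more involved scoring:

```python
from dataclasses import dataclass

# Hypothetical score-based collapsing, loosely in the spirit of Slashdot's
# moderation system. Real systems weight votes, decay scores over time, etc.

DOWNVOTE_PENALTY = 1
VISIBILITY_THRESHOLD = 0  # posts scored below this start out collapsed

@dataclass
class Post:
    author: str
    body: str
    score: int = 1  # every post starts with a small benefit of the doubt

def downvote(post: Post) -> None:
    """Community moderation: flagging a post as spam/off-topic lowers its score."""
    post.score -= DOWNVOTE_PENALTY

def visible_by_default(post: Post) -> bool:
    """Posts at or above the threshold are expanded; the rest are collapsed
    (still reachable, but hidden unless a reader opts in)."""
    return post.score >= VISIBILITY_THRESHOLD

posts = [Post("alice", "Helpful answer"), Post("bob", "BUY CHEAP WATCHES")]
downvote(posts[1])
downvote(posts[1])
for p in posts:
    state = "expanded" if visible_by_default(p) else "collapsed"
    print(f"{p.author}: {state} (score {p.score})")
```

The same score field could drive the rate-limiting idea too – e.g., users whose recent posts sit below the threshold get a posting cooldown.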
Another thing that would help is a clear, concise statement of “Here’s who and what we are, and who and what we are not” (sort of like how StackExchange makes it clear they’re not there for discussions, but strictly questions and answers). That’s something people can point to when posts start hitting tangents or getting spammy. It also helps instruct users on what’s considered “good” behavior in the community, and makes sure your community membership is what you want it to be (Want to do this thing we’re not about? Maybe we’re not the best fit for you right now). When you talk about revisiting your rules/guidelines, odds are that statement is what’s up for debate more so than the minutiae of various policies meant to enforce that overarching principle.
Absolutely yes: community standards need to be prominent, and clear 🙂
But, as you point out re: Stack Overflow, just because those standards are in place doesn’t inherently guarantee they will be followed 🙁
No, but with Stack the consequences of violating community standards are passed down to the people violating them – I lose rep for posting bad questions and/or answers (which affects what I can do on the site), and really bad questions get closed altogether.
There’s also hellbanning for the more incorrigible, I suppose: http://en.m.wikipedia.org/wiki/Hellbanning (you may want to re-label it based on your community).
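The core mechanic is tiny, for what it’s worth – a toy sketch (the names here are hypothetical):

```python
# Toy illustration of hellbanning: a hellbanned user's posts stay visible
# to the author -- so they don't realize they're banned -- while everyone
# else never sees them.

hellbanned = {"spammer42"}  # user IDs flagged by moderators

def can_see(viewer: str, author: str) -> bool:
    """A hellbanned author's posts are shown only to that author."""
    return author not in hellbanned or viewer == author

print(can_see("spammer42", "spammer42"))  # True  -- the spammer still sees their post
print(can_see("alice", "spammer42"))      # False -- the rest of the community doesn't
```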