Earlier this month, the Modern Language Association held its annual convention, and our team hoped to engage with attendees, helping them continue their conversations with one another via the Commons. Instead, we found ourselves fending off what initially looked like a bot attack: a massive influx of new account creation attempts with a few shared characteristics that made it clear there was orchestration involved. We put measures in place to block the majority of these attempts, and spent several days playing whack-a-mole with the few that got through.
In the process, it gradually came to seem that we might not be dealing with bots, but with humans: bad actors who were trying to find ways into the Commons community. To what end, we weren’t sure. But given the visibility of the MLA Convention, we really, really did not want to find out.
Things have gotten a bit quieter since the convention ended, but the suspicious account creation attempts continue. Fighting off this attack has consumed the time that should have gone into improving and advancing the platform, and it has left our very small team exhausted. So we’re discussing some longer-term options, options that raise a few key questions we’d like to open up for discussion with the Commons community.
The most important question is this:
How do we balance our commitment to ensuring that the Commons is open to anyone — regardless of credentials, memberships, employment status, language, geographical location, and so forth — with our commitment to ensuring that the members of our community are safe and free from harassment? We’ve all seen all too graphically of late the costs of a hands-off approach to open social networks, but even within a more local academic frame of reference, we’ve seen what can happen when virtual events get Zoom-bombed or otherwise disrupted. We absolutely do not want members of our community to be threatened in any way that unsettles their ability, not to mention their willingness, to engage in the shared collaborative work that they’re undertaking here. We’re grateful that the Commons has managed to avoid such incidents up until now, but we’ve achieved a size and a visibility that have made us a target. As a result, we need to take action to protect the network and its members.
Should we establish some kind of verification requirement before new accounts are permitted to use some of the network’s features? We imagine that we might restrict new, unverified user accounts in ways that prevent such accounts from sending direct messages to other community members, for instance, or from creating unwelcome groups and sites within the network. This might work something like the trust levels model that Discourse uses, relying on a demonstration of good-faith engagement to gradually open up features to new accounts, though we may need something a bit lighter weight as we get started.
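The trust-levels idea boils down to a simple mapping from levels to unlocked features. Here is a minimal sketch of what that gating might look like, assuming hypothetical level and feature names (TrustLevel, FEATURE_MINIMUMS, and can_use are illustrative only, not part of the Commons or Discourse codebases):

```python
# Minimal sketch of trust-level gating, loosely modeled on Discourse's
# trust levels. All names here are hypothetical illustrations.
from enum import IntEnum


class TrustLevel(IntEnum):
    NEW = 0        # just created, unverified
    VERIFIED = 1   # verified email or linked identity
    MEMBER = 2     # sustained good-faith participation
    TRUSTED = 3    # long-standing, vouched-for member


# Each feature stays locked until an account reaches the listed level.
FEATURE_MINIMUMS = {
    "read_public_content": TrustLevel.NEW,
    "comment_in_groups": TrustLevel.VERIFIED,
    "send_direct_messages": TrustLevel.MEMBER,
    "create_groups_or_sites": TrustLevel.MEMBER,
}


def can_use(account_level: TrustLevel, feature: str) -> bool:
    """Return True if an account at this trust level may use the feature."""
    return account_level >= FEATURE_MINIMUMS[feature]


# Example: a brand-new account cannot send direct messages or create groups.
assert not can_use(TrustLevel.NEW, "send_direct_messages")
assert can_use(TrustLevel.MEMBER, "create_groups_or_sites")
```

A lighter-weight starting point could use just two levels (unverified and verified) and grow into the full gradation later.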
If we establish such a requirement, what paths toward verification should we enable? We could imagine verification happening as part of account creation if the new user signs up with an email address that demonstrates a connection with a trusted institution or organization, or if the new user links their account to another trustworthy scholarly data system such as ORCID. But we also want to ensure that independent scholars and practitioners who may not have institutional credentials or established publication records can join us as well. Should we take the arXiv approach of having established members of the community vouch for new members, or does that run the risk of clubbiness? How do we preserve access for good actors while minimizing the damage that bad actors can do?
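To make those verification paths concrete, here is a rough sketch of a signup-time check under the assumption that any one trust signal (an institutional email domain, a linked ORCID record, or a vouch from an existing member) would be enough; the domain list, the flag, and the function name are all hypothetical placeholders:

```python
# Rough sketch of combining several independent verification signals.
# The trusted-domain list and all names are placeholders, not real config.
TRUSTED_EMAIL_DOMAINS = {"msu.edu", "mla.org"}  # example domains only


def is_verified(email: str, has_linked_orcid: bool, vouched_by: set) -> bool:
    """Treat an account as verified if any one trust signal is present."""
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_EMAIL_DOMAINS:   # institutional email address
        return True
    if has_linked_orcid:                  # linked ORCID record
        return True
    if vouched_by:                        # vouched for by existing members
        return True
    return False


print(is_verified("scholar@msu.edu", False, set()))          # True via email
print(is_verified("indie@example.com", False, {"a_mentor"}))  # True via vouching
print(is_verified("indie@example.com", False, set()))         # False: unverified
```

Treating the signals as alternatives rather than requirements is what keeps the door open for independent scholars who lack an institutional address.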
We welcome your thoughts on these questions, and we look forward to discussing the path ahead with the community as a whole.
So sorry that you/we now have to deal with this, but I think it’s inevitable, alas. I very much like your idea above about needing to verify an account to use some features, as long as some features remain completely open-access. For instance, I think that all the syllabi and uploaded scholarship/PDFs should remain OA with no need for the user/downloader to provide any info (that’s such a wonderful change from Academia.edu etc.). But yes, people who want to run a conference or a group chat should be verified in some way.
I recall that gatekeeping for a discussion list involved applying with a short statement of research interests. The request was verified by a human. The archive of the list was open to all. Posting was moderated. Humanities Commons is more complex than a discussion list.
I run a blog on HC and I am very impressed by the spam filters. I also participate in a group, and given my experience getting involved in group discussion, I think one gradation of permissions for groups would be (1) read, (2) comment on existing topics, and (3) introduce new topics. This follows an arc of involvement.
I don’t know how such an arc of involvement might be applied to messaging, except perhaps as a gradation from only being able to respond when hailed to being able to initiate hailing. Perhaps messaging permissions would be like those for depositing to CORE — an on or off affair.
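For what it’s worth, that arc of involvement could be sketched as an ordered list of steps, with messaging kept as a separate on/off flag as suggested; the step names and functions below are purely illustrative, not actual Commons settings:

```python
# Tiny sketch of the three-step gradation for group participation, with
# messaging treated as an on/off permission (like depositing to CORE).
GROUP_STEPS = ["read", "comment", "start_topic"]  # ordered arc of involvement


def group_actions(step: str) -> list:
    """Everything at or below the member's current step is allowed."""
    return GROUP_STEPS[: GROUP_STEPS.index(step) + 1]


def may_message(messaging_enabled: bool) -> bool:
    """Messaging handled as an on/off affair rather than a gradation."""
    return messaging_enabled


print(group_actions("comment"))  # ['read', 'comment']
print(may_message(False))        # False
```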
I wonder also if being sensitive to onboarding conditions could involve the new person being assigned a buddy or a mentor while they settle in on a course to contribute to the community and groups. Not sure how this might work with those who merely wish to lurk and absorb. Also unclear what burden this poses on already stretched human volunteers.
Back in the heyday of MUDs and MOOs the wizards could ban bad actors. Maybe there are some lessons from that era to glean. It all seems so complex now. I hope these thoughts have been helpful.