Earlier this month, the Modern Language Association held its annual convention, and our team hoped that we would be able to engage with attendees, helping them continue their conversations with one another via the Commons. Instead, we found ourselves fending off what initially looked like a bot attack: a massive influx of new account creation attempts with a few shared characteristics that made it clear that orchestration was involved. We put measures in place to block the majority of these attempts, and then spent several days playing whack-a-mole with the few that got through.
In the process, it gradually came to seem that we might not be dealing with bots, but with humans: bad actors who were trying to find ways into the Commons community. To what end, we weren’t sure. But given the visibility of the MLA Convention, we really, really did not want to find out.
Things have gotten a bit quieter since the convention ended, but the suspicious account creation attempts continue. Fighting off this attack has consumed all of the time that might otherwise have gone into improving and advancing the platform, and it has left our very small team exhausted. So we’re discussing some longer-term options, options that raise a few key questions we’d like to open up for discussion with the Commons community.
The most important question is this:
How do we balance our commitment to ensuring that the Commons is open to anyone — regardless of credentials, memberships, employment status, language, geographical location, and so forth — with our commitment to ensuring that the members of our community are safe and free from harassment? We have all seen, far too graphically of late, the costs of a hands-off approach to open social networks, but even within a more local academic frame of reference, we’ve seen what can happen when virtual events get Zoom-bombed or otherwise disrupted. We absolutely do not want members of our community to be threatened in any way that unsettles their ability, not to mention their willingness, to engage in the shared collaborative work that they’re undertaking here. We’re grateful that the Commons has managed to avoid such incidents up until now, but we’ve achieved a size and a visibility that have led us to become a target. As a result, we need to take action to protect the network and its members.
Should we establish some kind of verification requirement before new accounts are permitted to use some of the network’s features? We imagine that we might restrict new, unverified user accounts in ways that prevent such accounts from sending direct messages to other community members, for instance, or from creating unwelcome groups and sites within the network. This might work something like the trust levels model that Discourse uses, relying on a demonstration of good-faith engagement to gradually open up features to new accounts, though we may need something a bit lighter weight as we get started.
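To make the trust-levels idea concrete, here is a minimal sketch of how feature gating by trust level could work. The level names, feature names, and thresholds are all hypothetical illustrations, not a description of how the Commons or Discourse actually implements this:

```python
from enum import IntEnum

# Hypothetical trust levels, loosely inspired by Discourse's model.
class TrustLevel(IntEnum):
    NEW = 0     # just registered, unverified
    BASIC = 1   # verified, or some initial good-faith activity
    MEMBER = 2  # sustained good-faith participation

# Hypothetical mapping of features to the minimum trust level required.
FEATURE_THRESHOLDS = {
    "comment": TrustLevel.NEW,
    "send_direct_message": TrustLevel.BASIC,
    "create_group": TrustLevel.MEMBER,
    "create_site": TrustLevel.MEMBER,
}

def can_use(feature: str, level: TrustLevel) -> bool:
    """Return True if an account at `level` may use `feature`."""
    return level >= FEATURE_THRESHOLDS[feature]

# A brand-new account can participate in conversation, but cannot
# message other members or create groups until it earns more trust.
print(can_use("comment", TrustLevel.NEW))              # True
print(can_use("send_direct_message", TrustLevel.NEW))  # False
print(can_use("create_group", TrustLevel.BASIC))       # False
```

The appeal of a scheme like this is that it never locks anyone out entirely: new accounts can still engage, and the riskier features (direct messages, group and site creation) open up only as an account demonstrates good faith.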
If we establish such a requirement, what paths toward verification should we enable? We could imagine verification happening as part of account creation if the new user signs up with an email address that demonstrates a connection with a trusted institution or organization, or if the new user links their account to another trustworthy scholarly data system such as ORCID. But we also want to ensure that independent scholars and practitioners who may not have institutional credentials or established publication records can join us as well. Should we take the arXiv approach of having established members of the community vouch for new members, or does that run the risk of clubbiness? How do we preserve access for good actors while minimizing the damage that bad actors can do?
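One way to keep these paths from excluding anyone is to treat them as alternatives rather than requirements: an account is verified if any one path succeeds. The sketch below illustrates that "any-of" logic; the domain list, field names, and vouching threshold are hypothetical placeholders, not actual Commons policy:

```python
from dataclasses import dataclass, field

# Hypothetical allowlist; a real deployment would need a far larger,
# maintained list of trusted institutional and organizational domains.
TRUSTED_DOMAINS = {"msu.edu", "mla.org"}

@dataclass
class Applicant:
    email: str
    orcid_linked: bool = False                       # linked an ORCID record
    vouched_by: list = field(default_factory=list)   # established members who vouched

def is_verified(applicant: Applicant, min_vouches: int = 1) -> bool:
    """Verified if ANY one path succeeds: trusted email domain,
    a linked scholarly identifier, or vouching by established members."""
    domain = applicant.email.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return True
    if applicant.orcid_linked:
        return True
    return len(applicant.vouched_by) >= min_vouches

# An independent scholar with no institutional address can still
# be verified through vouching.
indie = Applicant(email="scholar@example.com", vouched_by=["member_a"])
print(is_verified(indie))  # True
```

Structuring the paths this way preserves access for independent scholars, while the vouching threshold (and who counts as "established") is exactly the kind of knob where the clubbiness risk would need careful community discussion.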
We welcome your thoughts on these questions, and we look forward to discussing the path ahead with the community as a whole.