Two of the main principles of the fediverse are decentralization and federation. The fediverse is made up of many different servers, or instances, running different software, managed by different people, and governed by different rules. For this to be usable, the instances exchange information so that the content of one is accessible from the others; it is therefore not necessary to create an account on each server to interact with the entire fediverse.
However, federation is not unconditional: when the administrators of an instance disapprove of how another server is run, they can restrict it. Sometimes, when the disagreement is not very serious (for example, sexually explicit content not marked as sensitive), the visibility of the offending server is simply limited, so that its content is not directly visible on the instance that blocks it, but users of both instances can still interact with each other if they wish. In more serious cases (for example, servers that allow harassment or illegal content), the block can be total, with no interaction allowed between users of the two instances.
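Mastodon, for example, exposes both moderation levels through a public endpoint: on servers that choose to publish their blocklist, `GET /api/v1/instance/domain_blocks` returns entries whose `severity` field is either `"silence"` (limited) or `"suspend"` (fully blocked). A minimal sketch of summarizing such a list; the sample data below is invented purely for illustration:

```python
from collections import Counter

def summarize_domain_blocks(blocks):
    """Count blocked domains by severity.

    `blocks` mimics the JSON returned by Mastodon's public
    GET /api/v1/instance/domain_blocks endpoint: a list of
    objects with "domain" and "severity" ("silence" or "suspend").
    """
    return Counter(block["severity"] for block in blocks)

# Invented sample data for illustration; a real script would fetch
# https://<instance>/api/v1/instance/domain_blocks instead.
sample = [
    {"domain": "example-nsfw.social", "severity": "silence"},
    {"domain": "example-harassment.social", "severity": "suspend"},
    {"domain": "example-illegal.social", "severity": "suspend"},
]

print(summarize_domain_blocks(sample))
# Counter({'suspend': 2, 'silence': 1})
```

Not every instance publishes this list, so the endpoint may return an error on some servers; the domain names above are made up.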
Where exactly to draw the line on when a server should be blocked is, of course, an open question with no easy answer.
Recently, some instances announced their intention to block some of the largest or oldest ones, arguing that they hosted Nazi or fascist accounts and concluding that those instances were permissive with those ideologies.
I probably don’t know the full context of how this situation came about, but this accusation that some of the largest instances allow Nazi accounts is something I have seen on other occasions besides this specific case.
At the same time, they also announced that some smaller instances, albeit among the oldest ones, would be blocked because each hosted an account of an alleged fascist. That is already problematic, because “fascist” is a term that has been stretched so much it has almost lost its meaning, to the point that some people apply it to almost anyone who thinks differently from them (yes, the irony), and that could be part of the problem too when it is said that some instance “allows fascist accounts”. But that’s not what I want to talk about now, especially since I don’t know the background of those particular accounts, so let’s assume those accounts were objectively fascist and unacceptable to almost everyone.
The instances that announced their intention to block argued that these others should be blocked because the presence of those accounts evidenced the permissiveness of those instances with fascism. However, when the administrators of the instances to be blocked learned of the existence of those accounts and their background, they proceeded to delete them.
And that’s the point. If we find Nazi accounts, or some other type of awful account, on an instance, it is rash to jump to the conclusion that it’s because that instance is permissive with that type of account. It is more likely that the administrators of that instance are not aware of those accounts, and a report to the administrators may resolve the problem.
On the other hand, I guess it will make sense to everyone that the largest instances are where these problematic accounts are found the most. It is simply a matter of numbers: the servers with the most users are going to be the ones with the most unacceptable accounts, as well as the most news accounts, the most bots or the most kitty meme accounts, and also where it is harder to keep all the existing accounts under control.
Some argue that the solution should therefore be many small instances instead of a few large ones. This way, the administrators of those smaller instances can better control what happens on them. In fact, some small instances have announced their intention to federate only with other small instances, arguing that there should be only many little ones and no big ones.
I think this just moves the problem somewhere else. If the size of the instances is reduced, administrators will indeed be better able to control who joins their instances. But if you want to reduce the size of the instances while keeping the same number of users, you will have to increase the number of instances significantly, and it will be more difficult for an administrator to verify that, among the instances they federate with, there aren’t some that actually tolerate accounts with Nazi content, or that were even created specifically for that purpose.
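The scale of that verification problem is easy to see: Mastodon’s public `GET /api/v1/instance/peers` endpoint returns every domain a server has ever federated with, often thousands of entries. A small sketch of what an admin trying to review them would face; the domain names and the `reviewed` set are invented for illustration:

```python
def unreviewed_peers(peers, reviewed):
    """Return federated domains an admin has not yet reviewed.

    `peers` mimics the JSON from Mastodon's GET /api/v1/instance/peers
    (a plain list of domain names); `reviewed` is a hypothetical set of
    domains the admin has already looked at. With thousands of peers,
    the size of this difference is what makes manual control impractical.
    """
    return sorted(set(peers) - set(reviewed))

# Invented sample data for illustration; a real script would fetch
# https://<instance>/api/v1/instance/peers instead.
peers = ["a.example", "b.example", "c.example", "d.example"]
reviewed = {"a.example", "c.example"}

print(unreviewed_peers(peers, reviewed))
# ['b.example', 'd.example']
```

On a real server the peers list would be orders of magnitude longer than the reviewed set, which is the point of the paragraph above.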
I see it basically as a matter of choosing whether we want Nazis and other awful accounts to slip into large instances, where their admins can’t vet every user who joins, or into small ones that may have been created for that very purpose, because there are so many that it is impossible to control how many instances we federate with. Currently, the official Mastodon page reports over 6000 Mastodon instances (and that’s not even the entire fediverse), and those who believe the problem lies with large instances want even more instances to be created. Do we really believe we can ensure that no awful ones are slipping in among all those many? Moreover, Nazi accounts (if there are any; that’s what people talk about, but I’ve never seen one, so I suspect this is a tiny problem being magnified) are mostly found on the big servers because that’s where they are searched for: it is easier to find them among a million users on a single instance than across thousands of separate instances.
If awful accounts are still going to exist, either by slipping through on the larger servers or on their own smaller ones, what’s the solution then? I’m afraid there isn’t one. Or none that I know of. This is not an article to propose solutions.
Awful accounts are going to keep popping up, whether on large instances of the fediverse, on small ones, or on centralized networks such as Twitter and Facebook. It is not something that can be prevented; you can only act after the fact, once an account has started to generate content and someone has identified it as undesirable. Therefore, we cannot assume that if an instance hosts awful accounts, it is with the consent of its administrators rather than simply because they have not been made aware of those accounts. And I think the only thing we can do about it is to support the moderators, report abusive accounts, and continue to build the fediverse with the idea of making harmful behaviour more difficult.