A really interesting look at the recent spam wave.
Takeaways
All pulled from the analysis, emphases are mine:
- Many Fediverse instances have open sign-ups without proper limits, enabling this to even happen in the first place.
- Open registrations should NEVER be enabled on instances without proper protections and monitoring.
- It’s important to note that this attack doesn’t require any novel exploit, just the existence of unmonitored, unprotected instances with open registration. From what we’ve seen, these are usually smaller instances.
- If you must have open registrations on your instance, use proper anti-spam and anti-bot mechanisms (a rough sketch follows below). We also recommend blocking sign-ups from Tor IP addresses and temporary email domains.
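For a concrete picture of what those sign-up protections can look like, here’s a minimal sketch, not taken from the analysis; the `is_suspicious_signup` helper and the hard-coded lists are hypothetical, and a real instance would pull the Tor exit-node list and a maintained disposable-email-domain list from upstream sources on a schedule:

```python
# Minimal sketch of sign-up filtering; the entries below are placeholders.
# A real instance would refresh the Tor exit-node list and a maintained
# disposable-email-domain list periodically rather than hard-coding them.

TOR_EXIT_IPS = {"198.51.100.7", "203.0.113.42"}                     # placeholder IPs
DISPOSABLE_DOMAINS = {"tempmail.example", "10minutemail.example"}   # placeholder domains

def is_suspicious_signup(ip: str, email: str) -> bool:
    """Flag registrations coming from Tor exit nodes or throwaway email domains."""
    domain = email.rsplit("@", 1)[-1].lower()
    return ip in TOR_EXIT_IPS or domain in DISPOSABLE_DOMAINS

# Route flagged sign-ups to manual review instead of auto-approving them.
if is_suspicious_signup("198.51.100.7", "bot@tempmail.example"):
    print("hold registration for manual review")
```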
Hypothetically, what stops a spam group from spinning up their own instance (or several) to register accounts on? It’d get defederated quickly once the attack got going, sure, but defederation takes time, and in the meantime the spam gets in.
Why use your own resources when you can use someone else’s?
Not much, really; it just takes more effort, and they’d need to change the domain every time they get blocked. I have seen a few services hosted solely for spam and bad-faith practices, though they were Mastodon, Pleroma, and Kbin servers, not Lemmy.
It’s probably more expensive and inconvenient.
Also it might only take one report for an active mod team to ban a server. How long can that take? An hour? Less? If they’re on servers that real people use, bots have to be banned one by one, so the spam can last a lot longer and reach more people.
amex2189 disappears(?) after 72 hours, possibly because of:
- Parents being notified of this event, and confiscating his devices.
Kek.
The takeaways are great
I haven’t read it all yet, but I noticed a bit about pressing charges.
With decentralized social media, there currently isn’t the risk of some big social media company coming after you when you cause damages.
It doesn’t have to stay that way though. What might coordinated legal action look like for the fediverse? They caused a LOT of harm to a lot of people, even if we’re just looking at server costs and time spent by volunteers to clean up the mess.
A first step is RBL integration: a shared blocklist of spam instances that subscribed instances would use to blackhole spam users/traffic/instances. RBLs are used ubiquitously in email spam filtering, so there is precedent for this working in a federated system. We need to stand up an RBL, and then modify Lemmy’s federation system to act automatically based on the community blocklist.
It does mean that poorly administered instances will get blackholed, breaking their federation, but that’s the cost of a healthy network.
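To make that concrete, here’s a hedged sketch of what subscribing to such a blocklist could look like on the instance side; the feed URL, its format, and both helpers are hypothetical, not an existing Lemmy API:

```python
# Hypothetical sketch of RBL-style federation filtering, not an existing
# Lemmy feature. Assumes the shared blocklist is published as a plain-text
# feed with one blocked instance domain per line.
import urllib.parse
import urllib.request

RBL_URL = "https://rbl.example.org/blocked_instances.txt"  # hypothetical feed

def fetch_blocklist(url: str = RBL_URL) -> set[str]:
    """Download the shared blocklist and return the set of blocked domains."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        text = resp.read().decode("utf-8")
    return {line.strip().lower() for line in text.splitlines()
            if line.strip() and not line.startswith("#")}

def should_blackhole(actor_url: str, blocked: set[str]) -> bool:
    """Drop inbound federation activity whose actor lives on a blocked instance."""
    domain = urllib.parse.urlparse(actor_url).netloc.lower()
    return domain in blocked

# Usage: refresh the list periodically, then check every incoming activity.
# blocked = fetch_blocklist()
# if should_blackhole("https://spam.example/u/bot123", blocked):
#     ...  # reject before the activity ever touches the database
```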
This is how email servers have worked for decades; there is no silver bullet, and this comes closest. If you admin your email server poorly, say by allowing it to be an open relay (the equivalent of just allowing open registrations), you get blacklisted everywhere, aka defederated. Same if you get compromised and someone starts spamming out from your server.
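For reference, the email-world precedent works at the wire level as a plain DNS lookup: reverse the connecting IP and query it under the blocklist’s zone. The mechanism below is real (Spamhaus ZEN is the classic zone), but treat the snippet as illustrative rather than production code:

```python
# Classic email DNSBL check: reverse the client IPv4 address and look it up
# under the blocklist's DNS zone; any answer means the IP is listed.
import socket

def is_listed(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    """Return True if the IPv4 address is listed on the given DNSBL."""
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)   # an A record back means "listed"
        return True
    except socket.gaierror:           # NXDOMAIN means "not listed"
        return False

# print(is_listed("127.0.0.2"))  # standard DNSBL test address, should be listed
```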
How did it take this long to get spam on the fediverse? It’s basically an open canvas for wet crap.
There have been other waves; it’s just that once they get shut down, everybody loses interest and moves on. The PR for one of the changes Mastodon just made was implemented in May 2023, after the Doge spam wave. And here’s a June 2019 post talking about exactly the same kind of attack: “The problem we are experiencing is the spammer signing up on random open instances and sending spam remotely.”