This problem is hardly an issue on this platform.
And this is the problem.
I see objectively misleading, clickbait headlines and articles from bad (eg not recommended by Wikipedia) sources float to the top of Lemmy all the time.
I call them out, but it seems mods are uninterested in enforcing stricter information hygiene.
Step 1 is teaching journalism and social media hygiene as a dedicated class in school, or on social media… And, well, the US is kinda past that being possible :/.
There might be hope for the rest of the world.
In US English classes at any level above middle school, the importance of finding valid sources and providing citations is emphasized, although that’s mainly for essays and the like.
I could imagine it would be possible to adapt that mindset towards social media as well. Provide your sources, so you can prove you understand what you are saying. The foundations are there, they just need to be applied.
Except there are plenty of “sources” that spew even more BS. We can’t even trust what comes out of our government anymore (by design).
That’s true, but from what I remember, half the class was either goofing off, sleeping, or straight up not there. Education as a whole isn’t valued in the US anymore. Students/parents blame teachers when their kid doesn’t magically absorb the information without doing any of the work or studying. Trade schools are becoming more popular because of the cost of college, but deep down, they think it’s an easy way to make good money; those trades require hard work as well. The cost of college has most definitely contributed to the overall lack of education, but that’s not what’s causing the average US high schooler to read at the level of a 5th grader in the UK.
bad (eg not recommended by Wikipedia)
If you want to know why misinformation is so prominent, the fact that you think this is a good standard is a big part of it.
Step 1 is teaching journalism and social media hygiene as a dedicated class in school
And will those classes be teaching “Wikipedia is the indisputable rock of factuality, the holy Scripture from which truth flows”?
Hey, just wanted to say I’m always grateful when someone calls out posts not linking to proper sources. You’re doing good work, thanks!
Note that Wikipedia is not a proper source.
Most of the misinformation I regularly find at the top consists of statements made by the US president or his administration – and those appear in news reports with appropriate context and appropriate commentary by Lemmy users. Occasionally, very rarely, I have also seen misinformation about the US president, but I don’t see that as much of a problem.
Rather, I see it as a very serious problem that the US president himself and his administration are massively spreading misinformation. That is what my question refers to.
With no offense/singling out intended, this is what I’m talking about.
You (and many others) are interested in misinformation from MAGA, but not in misreported news about MAGA. But it’s these little nuggets that his media ecosystem pounces on, and that’s what has gotten Trump to where he is.
And it’s exactly the same on the “other side.” The MAGA audience is combing the greater news ecosystem for misinformation like a hawk while turning a blind eye to their own.
The answer is for everyone to have better information hygiene, and that includes shooting down misleading story headlines one might otherwise like. It means being critical of your own information stream as you read.
So you think it’s okay for the US president to spread misinformation? You really don’t see a problem with that, even though you yourself talk about “information hygiene”?
There’s buckets of wrong information on Lemmy mate, no question
yeah, lemmy could stop pushing extreme leftist misinformation from mysterious online “news” sources and rewriting history; that would be a great start
Yeah, western right wing neoliberal misinformation only.
step 1. misinformation is a problem on every platform. full stop.
I think what you mean is maliciously manufactured information. still, I believe Lemmy is subject to it.
I believe that both types can be effectively dispatched by effectively moderating the community, but not in the sense that you might be thinking.
I believe that we are looking at community moderation from the wrong direction. today, the goal of the mod is to prune and remove undesired content and users. this creates high overhead and operational costs. it also increases chances for corruption and community instability. look no further than Reddit and lemmy, where we have a handful of mods in charge of multiple communities. who put them there? how do you remove them should they no longer have the community’s best interests in mind? what power do I have as a user to bring attention to corruption?
I believe that if we flip the role of moderators to instead be guardians of what the community accepts, rather than of what it can see, it greatly reduces the strain on mods and increases community involvement.
we already use a mechanism of up/down votes. should content hit a threshold below community standards, it’s removed from view. should that user continue to receive below-par results inside the community, they are silenced. these par grades are rolling, so they would be able to interact within the community again after some time, but continued abuse of the community could result in permanent silencing. should a user be unjustly silenced due to voting abuse, mod intervention is necessary. this would then flag the downvoters for abuse demerits, and once a demerit threshold is hit, they are silenced.
notice I keep saying silenced instead of blocked? that’s because we shouldn’t block their access to content or the community, or even let them know nobody is seeing their content. in the case of malicious users/bots, the more time wasted screaming into a void, the less time spent corrupting another community. in fact, I propose we allow these silenced users to interact with each other, where they can continue to toxify and abuse each other in a spiraling chain of abuse that eventually results in their permanent silencing. all the while, the community governs itself and the users hum along unaware of what’s going on in the background.
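to make this concrete, here’s a rough sketch of how that voting-threshold/silencing loop could work. everything in it (the class name, the thresholds, the window lengths) is made up purely for illustration; it’s not anything Lemmy or any existing platform actually implements.

```python
# hypothetical sketch of the rolling-score silencing idea; names and numbers are placeholders
import time
from collections import defaultdict, deque

CONTENT_HIDE_THRESHOLD = -5        # net votes at or below this hide the post
PAR_SCORE = 0                      # rolling average below this counts as "below par"
ROLLING_WINDOW_SECS = 30 * 86400   # only recent posts count toward the rolling grade
SILENCE_STRIKES = 3                # below-par strikes before a temporary silence
PERMANENT_STRIKES = 9              # strikes before a permanent silence
TEMP_SILENCE_SECS = 7 * 86400      # how long a temporary silence lasts
DOWNVOTE_DEMERIT_LIMIT = 5         # demerits before an abusive downvoter is silenced


class CommunitySelfModeration:
    def __init__(self):
        self.history = defaultdict(deque)   # user -> deque of (timestamp, net_votes)
        self.strikes = defaultdict(int)     # user -> below-par strikes
        self.demerits = defaultdict(int)    # downvoter -> abuse demerits
        self.silenced = {}                  # user -> expiry timestamp, or None = permanent

    def record_post(self, user, net_votes, now=None):
        """Record a post's vote tally, update the rolling grade, return True if the post is hidden."""
        now = now if now is not None else time.time()
        posts = self.history[user]
        posts.append((now, net_votes))
        while posts and posts[0][0] < now - ROLLING_WINDOW_SECS:
            posts.popleft()                  # old behaviour rolls off the grade

        if sum(v for _, v in posts) / len(posts) < PAR_SCORE:
            self.strikes[user] += 1
            if self.strikes[user] >= PERMANENT_STRIKES:
                self.silenced[user] = None                   # permanent silence
            elif self.strikes[user] >= SILENCE_STRIKES:
                self.silenced[user] = now + TEMP_SILENCE_SECS
        return net_votes <= CONTENT_HIDE_THRESHOLD

    def visible_to_others(self, user, now=None):
        """Silenced users still see their own posts; everyone else does not."""
        now = now if now is not None else time.time()
        if user not in self.silenced:
            return True
        expiry = self.silenced[user]
        if expiry is not None and now > expiry:
            del self.silenced[user]          # temporary silence has lapsed
            return True
        return False

    def mod_reverses_unjust_silence(self, user, downvoters, now=None):
        """Mod intervention: lift the silence and hand the downvoters abuse demerits."""
        now = now if now is not None else time.time()
        self.silenced.pop(user, None)
        self.strikes[user] = 0
        for voter in downvoters:
            self.demerits[voter] += 1
            if self.demerits[voter] >= DOWNVOTE_DEMERIT_LIMIT:
                self.silenced[voter] = now + TEMP_SILENCE_SECS
```

the design choice worth noting: visible_to_others never tells a silenced user anything, so they keep screaming into the void, which is the whole point.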
IMO it’s up to the community to decide what is and isn’t acceptable and mods are simply users within that community and are mechanisms to ensure voting abuse is kept in check.
Great idea but tough to keep people from gaming it
Lol misinformation is still an issue on Lemmy, don’t kid yourself
Hardly an issue on Lemmy?
Or does it just feel that way when everyone around you has the same views?
I just wanna know: What do you do when talking to a friend IRL, face to face, and they tell you something that isn’t true?
While there may actually be people trying to push an agenda, I suspect 90% or more of the people who “spread misinformation online” are just regular old idiots.
People don’t suddenly stop being people just because they have a computer and anonymity. And a lot of people are just misinformed.
Best way to stop misinformation online? Same as it is offline: Through better fucking education.
I say “huh. I hadn’t heard that one. Let me look it up. … Ohh no, that turned out to be fake. It’s getting so hard to tell these days. Just the other day I was reading…” And then start rambling about another topic. It prevents them from sitting with the uncomfortable feeling of being an idiot.
Media literacy is an old and important topic. Are you asking for an introduction to it?
I’ve tried a lot, and the problem is that people are entrenched in their beliefs. They are in irrational states of mind on social media, and you can’t rationally talk to people in that state of mind.
The most success I’ve had is with the Socratic method. Remain calm and simply ask open-ended questions designed to make them question their tightly held beliefs: why are cities less safe, why do you feel this, etc. However, I find they will often just get angry even at that.
Ultimately, it’s not social media that will win minds; it’s conversations out in the open. I’ve had more luck meeting people casually in bars and talking to them than over a keyboard.
Unfortunately, I believe that social media does influence people’s decisions very much. If that weren’t the case, criminals like Trump could never be elected president, and 20-25% of the people in my home country wouldn’t vote for open Nazis.
Nevertheless, thank you for your valuable contribution: In addition to technical possibilities, I am also interested in how to deal with people who do not accept rational arguments - the Socratic method is probably the best way to make a point with them.
I’ve heard this method as a way to combat racism and transphobia as well (which I guess are based on misinformation). Most of the time people are just repeating what they’ve heard so it’s good to get them to think about why they believe it, even if it doesn’t fully change their mind.
[misinformation] is hardly an issue on this platform […]
In my opinion, that statement of yours is, ironically, responsible for why there may be an issue with misinformation. You state it with certainty, yet you provide no source to back up your claim. It is my belief that this sort of conjecture is at the source of misinformation issues.
This problem is hardly an issue on this platform.
LOLOL – This platform is just as bad as Reddit for misinformation. It’s usually silly shit, but it’s almost always 90% truth laced with 10% lie. The fact that you believe it’s somehow immune to this is just testament to how hard it is for people to see this kind of thing clearly when it’s “on their side”. Problem is, any time it’s called out, people get massively downvoted for it, so people have stopped calling it out.
Do you have any examples?
As a mod for a couple of the biggest communities… gestures to everything
Easily the one I see the most is Trump talking about “they rigged the election and now I’m here.” – I’m pointing out this one specifically, because any dunderhead dipshit knows from context what he’s talking about, but lemmy absolutely dives into the shallow end with it…
He’s clearly making the claim that Dems rigged the 2020 election, and because of that, he’s president in 2024 when … I dunno - whatever 2 events are happening. (Fifa or some shit?) But EVERY fucking time on Lemmy it’s like “See he’s admitting he rigged the election!” and everyone just meep meeps into agreeance.
That’s just one off the top of my head, and that’s with blocking most politics-based subs. If lemmy can’t even read or gather context from a sentence correctly – There’s no hope for the world.
Could the lemmings be referring to the old trope where some loudmouth (usually a conservative) bangs on about an issue with some minority group ad nauseam, and then some time later it turns out they were actually a perpetrator of the thing they banged on about, i.e. every accusation is an admission of guilt?
What concrete steps can be taken to combat misinformation on social media? […]
Regarding my own content: I do my best to cite any claim that I make, no matter how trivial. If I make a statement for which I lack confidence in its veracity, I do my best to convey that uncertainty. I do my best to convey explicitly whether a statement is a joke, or sarcasm.
Fundamentally, my approach to this issue is based on this quote:
Rationality is not a character trait, it’s a process. If you fool yourself into believing that you’re rational by default, you open yourself up to the most irrational thinking. [1]
Regarding the content of others: If I come across something that I believe to be false, I try to politely respond to it with a sufficiently and honestly cited statement explaining why I think it is false. If I come across something of unknown veracity/clarity, I try to politely challenge the individual responsible to clarify their intent/meaning.
For clarity, I have no evidence to support that what I’m doing is an effective means to this end, but I want to believe that it’s helping in at least some small way.
References
- Type: Comment. Author: “@The8BitPianist”. Publisher: [Type: Post (Video). Title: “On These Questions, Smarter People Do Worse”. Author: “Veritasium” (“@veritasium”). Publisher: YouTube. Published: 2024-11-04T16:48:03Z. URI: https://www.youtube.com/watch?v=zB_OApdxcno.]. Published: 2024-11-04T09:06:26Z. Accessed: 2025-03-29T07:48Z. URI: https://www.youtube.com/watch?v=zB_OApdxcno&lc=Ugy6vV7Z3EeFHkdfbHl4AaABAg.
What concrete steps can be taken to combat misinformation on social media?
Free speech. The only way to combat bad ideas is with better ones.
If we want to go the route of the Responsibility of the Individual: Resolve to not get your political etc. news from social media. Draw a line for yourself: cool to get gaming news from random influencers online? Probably. News about global events? At this point it might be better for most people’s mental health to ignore them and focus more locally. However, read How to Read a Book, make your best effort at finding a reputable news organization, and check those for news if you must have them. In the same vein, if you don’t read at least some article about an event being discussed on social media, DON’T COMMENT. Don’t engage with that post. If it really grabs at you, go find an article about it from a trusted source, and depending on how much it animates you, try to get a bigger picture of the event. Assume that the vast majority of ALL CONTENT online is currently incentivized to engage you - to capture your attention, which is actually the most valuable asset you have. Where you put your attention will define how you feel about your life. It’s highly advisable to put it where you feel love.
Responsibility of the Collective: Moving up the hierarchy, we can start demanding that social media moderators (or whatever passes for those in any given site) prevent misinformation as much as possible. Try to only join communities that have mods that do this. Failing that, demand social media platforms prevent misinformation. Failing that, we can demand the government do more to prevent misinformation. All of those solutions have significant issues, one of them being that they are all heavily incentivized to capture the attention of as many people as possible. It doesn’t matter what the exact motivation is - it could be a genuinely good one. A news organization uses social media tactics to get views so that their actually very factual and diligently compiled articles get the spread. Or, they could be looking to drive their political agenda - which they necessarily do anyway, because the desire to be factual and as neutral as possible is a stance as well. One that may run afoul of the interests of some government that doesn’t value freedom of the press - which is very dangerous, and you need to think hard for yourself about how you feel about the idea of the government limiting what kind of information you can access. For the purposes of making this shorter, you can regard massive social media platforms as virtual governments too. In fact, it would be a good idea in general.
The thing with misinformation is that many people who talk about it subtly think that they are above it themselves. They’re thinking that they know they’re not subject to propaganda and manipulation, and that it’s the other poor fools who need to be protected from it. It’s the QAnon crowd and the antivaxxers. But you know better; you know how to dig deeper into massively complicated global topics and find out what the true and right opinion about them is. You can’t. Not even if we weren’t in the middle of multiple fucking information wars. You’d do well to focus on what you can know for sure, in your own experience. If you don’t like the idea of individual responsibility though, because “most people aren’t going to do it” - your best bet at getting a collective response is a group of individuals coming together under the same ideal. It’ll happen sooner or later anyway, and there’s going to be plenty of suffering before then either way.
we can start demanding that social media moderators (or whatever passes for those in any given site) prevent misinformation as much as possible.
Yeah, but how are you expecting moderators to determine what is and isn’t misinformation?
That’s one of the many issues with expecting a collective resolution. Question is: why do people feel they need to be able to discuss issues way beyond their understanding and personal experience online with others who also don’t know much about it? If actually done well, moderation is a full time job but nobody is interested in paying a bunch of online jannies to clean their space.
That’s why I favor individual responsibility, and opting out of the possibility of being exposed to (or perpetuating) misinformation. Maybe in the future we can have forums for verified experts of a field, where regular people can have discussions with them and ask questions etc. But these would be moderated places where you do need to bring proof and sound arguments, not emotionally charged headlines.
The stories and information posted on social media are artistic works of fiction and falsehood. Only a fool would take anything posted as fact.
[…] read How to Read a Book […]
Thank you for the recommendation 😊
Linking to sources, that is a big one. Even something as honest as “I read it off this Wikipedia page [link]” goes a long way in showing that the poster is not pulling an idea out of their ass.
I will always prefer having debates where both sides cite their information, even if there isn’t a satisfying agreement at the end. Plus, faulty sources can be debunked when more eyes are able to scrutinize them.
Misinformation is part of the nature of social media and can’t be fixed. Stupid people are stupid. There are A LOT of them on social media. The dishonest take advantage of the stupid to spread misinformation. The only way to counteract it is to have gatekeeping, which will crush the user count and block out the biggest users, and the network effect will funnel most of the rest into the biggest platform (i.e. the one with the most lenient gatekeeping).
The only hope is that people realize how stupid, unrepresentative, and unsuitable social media discourse is. It’s a place to find funny pictures of cats and boobs. Looking to it for anything serious, or pretending what you see there is representative of anything, is pointless at best and likely harmful.
It’s pretty regularly a big problem here.
But to answer your question, just check sources, verify with a second outlet, and call it out when you see it. That’s all you can do on an individual level.