As technology advances and computers become increasingly capable, the line between human and bot activity on social media platforms like Lemmy is becoming blurred.
What are your thoughts on this matter? How do you think social media platforms, particularly Lemmy, should handle advanced bots in the future?
We’re not handling the LLM generative bullshit bots now, anywhere. There’s a thing called the dead Internet theory. Essentially most of the traffic on the Internet now is bots.
https://en.m.wikipedia.org/wiki/Dead_Internet_theory
It’s not just the internet. For example, students are handing in essays straight from ChatGPT. Uni scanners flag them and the students may fail, but the evidence is weak on both sides: the detection tools are unreliable (and unlikely to improve on false positives, or false negatives for that matter), and it’s hard for a student to prove they did not use an LLM. Job seekers send in LLM-generated cover letters. Consultants probably give LLM-based reports to clients. We’re doomed.
Hardly. Just do away with coursework and stick to in-person exams and orals.
Spoken by someone who has never dealt with a learning disability.
You can still have extra allotted time, or be provided a wiped computer or tablet. Colleges dealt with these disabilities before LLMs existed.