• 1 Post
• 166 Comments
Joined 2 years ago
Cake day: July 3rd, 2023

  • I don’t think “every single problem … must be reduced down to an individual failing” is super common, but sure, some people refuse to recognize systemic problems. There are loads of people who say racism isn’t a problem, for example, and that’s bad. But it’s kind of off topic from childhood development and from people who refuse to admit fault when it plausibly is their fault. (And saying you’re late because there was traffic, because the city refuses to build effective mass transit, may be technically true in a sense, but it’s also kind of useless, maybe even counterproductive, in the moment when everyone else is waiting for you. Leave earlier. Use the agency you have.)


  • jjjalljs@ttrpg.network to Funny@sh.itjust.works · The two-frame test (5 days ago)

    A lot of people here seem stuck on the details of the metaphor instead of focusing on the actual point: some adults refuse to ever consider that they are wrong or at fault, and that’s a real problem in the world. You probably know someone who never admits fault for anything. If they’re late, it’s because of traffic. If they lose in Mario Kart, it’s because the controller is bad. If they get lost, it’s because the GPS is hard to understand. Never their fault.


  • Many people have found that using LLMs for coding is a net negative. You end up with sloppy, vulnerable code that you don’t understand. I’m not sure if there have been any rigorous studies on it yet, but it seems very plausible. LLMs are prone to hallucinating, so they will tell you to import libraries that don’t exist, or to use parts of the standard library that don’t exist.

    It also opens up a whole new security threat vector: package squatting. If LLMs routinely try to install a library from PyPI that doesn’t exist, an attacker can register that name and have the package do whatever they want. Vibe coders will then run it, and that’s game over for them. (A sketch of a basic defense follows below.)
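
    As a minimal sketch of that defense: before installing anything an LLM suggests, check whether the name actually exists on PyPI via its public JSON API. The package names below are hypothetical examples, and mere existence doesn’t prove a package is safe (a squatter may already own the name), but a 404 is a clear sign the import was hallucinated.

    ```python
    # Sketch: before installing packages an LLM suggested, check that each name
    # actually exists on PyPI. A 404 from the JSON API means the name is free,
    # i.e. the import was hallucinated (and the name could be squatted later).
    import json
    import urllib.error
    import urllib.request


    def exists_on_pypi(name: str) -> bool:
        """Return True if `name` is a published PyPI project."""
        url = f"https://pypi.org/pypi/{name}/json"
        try:
            with urllib.request.urlopen(url) as resp:
                json.load(resp)  # parse to confirm we got a real project page
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise  # rate limits or outages need a human look, not a guess


    # Hypothetical names for illustration; "requests" exists, the other should not.
    for pkg in ("requests", "totally-hallucinated-pkg-12345"):
        verdict = "exists" if exists_on_pypi(pkg) else "NOT ON PYPI - do not install"
        print(f"{pkg}: {verdict}")
    ```

    Anything flagged there should be treated as hallucinated until a human has looked at it. It doesn’t replace reading the code; it just catches the cheapest version of the attack.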

    So yeah, you could “rigorously check” it, but (a) all of us are lazy and aren’t going to do that routinely (like, have you used snapshot tests?), (b) it anchors you on whatever it produced, making it harder to think about other approaches, and (c) it’s often slower overall than just doing a good job from the start.

    I imagine there are similar problems with analyzing large amounts of text. The model doesn’t really understand anything, so to verify its output is correct you would have to read the whole thing yourself anyway.

    There are probably specialized use cases where it’s good (I’m told AI is useful for things like protein folding and cancer detection), but those still have experts (I hope) looking at the results.

    To your point, I think people are also trying to use these LLMs for things with definite answers. If I go to Google and type in “largest state in the US”, it runs the query through AI. That is not a good use case.


  • I still sometimes think about the time I was exiting my apartment building, and as I passed through the outer doors some choral music started blasting from nearby. It definitely felt like the building door was a fog gate and I was about to find a large field boss on the street.

    Luckily for my low-vigor, “int but I haven’t found any spells yet” build, no boss attacked.