Freedom is the right to tell people what they do not want to hear.

  • George Orwell
  • 1 Post
  • 16 Comments
Joined 11 days ago
Cake day: July 17th, 2025

  • Most definitions are imperfect - that’s why I said the term AI, at its simplest, refers to a system capable of performing any cognitive task typically done by humans. Doing things faster, or even doing things humans can’t do at all, doesn’t conflict with that definition.

    Humans are unarguably generally intelligent, so it’s only natural that we use “human-level intelligence” as the benchmark when talking about general intelligence. But personally, I think that benchmark is a red herring. Even if an AI system isn’t any smarter than we are, its memory and processing capabilities would still be vastly superior. That alone would allow it to immediately surpass the “human-level” threshold and enter the realm of Artificial Superintelligence (ASI).

    As for something like making a sandwich - that’s a task for robotics, not AI. We’re talking about cognitive capabilities here.


  • “Understanding requires awareness” isn’t some settled fact - it’s just something you’ve asserted. There’s plenty of debate around what understanding even is, especially in AI, and awareness or consciousness is not a prerequisite in most definitions. Systems can model, translate, infer, and apply concepts without being “aware” of anything - just like humans often do things without conscious thought.

    You don’t need to be self-aware to understand that a sentence is grammatically incorrect or that one molecule binds better than another. It’s fine to critique the hype around AI - a lot of it is overblown - but slipping in homemade definitions like that just muddies the waters.



  • “So… not intelligent.”

    But they are intelligent - just not in the way people tend to think.

    There’s nothing inherently wrong with avoiding certain terminology, but I’d caution against deliberately using incorrect terms, because that only opens the door to more confusion. It might help when explaining something one-on-one in private, but in an online discussion with a broad audience, you should be precise with your choice of words. Otherwise, you end up with what looks like disagreement, when in reality it’s just people talking past each other - using the same terms but with completely different interpretations.


  • If you’re talking about LLMs, then you’re judging the tool by the wrong metric. They’re not designed to solve problems or pass CAPTCHAs - they’re designed to generate coherent, natural-sounding text. That’s the task they’re trained for, and that’s where their narrow intelligence lies.

    The fact that people expect factual accuracy or problem-solving ability is a mismatch between expectations and design - not a failure of the system itself. You’re blaming the hammer for not turning screws.


  • You’re describing intelligence more like a soul than a system - something that must question, create, and will things into existence. But that’s a human ideal, not a scientific definition. In practice, intelligence is the ability to solve problems, generalize across contexts, and adapt to novel inputs. LLMs and chess engines both do that - they just do it without a sense of self.

    A calculator doesn’t qualify because it runs “fixed code” with no learning or generalization. There’s no flexibility to it. It can’t adapt.


  • I’m pretty happy having four distinct seasons. I don’t like winter and snow at all, but I think suffering through six months of cold and darkness is exactly why the warmth and sunshine feel so damn good when summer finally comes. Also, with climate change, the climate where I live has - so far - only been getting better. I’m not saying it’s good overall, but it’s not all bad either.