Am I the only one getting agitated by the word AI (Artificial Intelligence)?

Real AI does not exist yet;
atm we only have LLMs (Large Language Models),
which do not think on their own,
but can pass Turing tests
(i.e. fool humans into believing that they can think).

Imo AI is just a marketing buzzword,
created by rich capitalistic a-holes
who have already invested in LLM stocks
and are now looking for a profit.

  • PrinceWith999Enemies@lemmy.world · 10 months ago

    I’d like to offer a different perspective. I’m a greybeard who remembers the AI Winter, when the term had so overpromised and underdelivered (think expert systems and some of the work of Minsky) that using it was a guarantee your project would not be funded. That’s when terms like “machine learning” and “intelligent systems” started to come into fashion.

    The best quote I can recall on AI ran along the lines of “AI is no more artificial intelligence than airplanes are doing artificial flight.” We do not have a general AI yet, and if Commander Data is your minimum bar for what constitutes AI, you’re absolutely right, and you can define it however you please.

    What we do have are complex adaptive systems capable of learning and problem solving in complex problem spaces. Some are motivated by biological models, some are purely mathematical, and some are a mishmash of both. Some of them are complex enough that we’re still trying to figure out how they work.

    And, yes, we have reached another peak in the AI hype - you’re certainly not wrong there. But what do you call a robot that teaches itself how to walk, like they were doing 20 years ago at MIT? That’s intelligence, in my book.

    My point is that intelligence - biological or artificial - exists on a continuum. It’s not a Boolean property a system either has or doesn’t have. We wouldn’t call a dog unintelligent because it can’t play chess, or a human unintelligent because they never learned calculus.

    Are viruses intelligent? That’s kind of a grey area that I could argue from either side. But I believe that Daniel Dennett argued that we could consider a paramecium intelligent. Iirc, he even used it to illustrate “free will,” although I completely reject that interpretation. But it does have behaviors that it learned over evolutionary time, and so in that sense we could say it exhibits intelligence. On the other hand, if you’re going to use Richard Feynman as your definition of intelligence, then most of us are going to be in trouble.

    • NABDad@lemmy.world · 10 months ago

      My AI professor back in the early ’90s made the point that what we now think of as fairly routine was considered the realm of AI just a few years earlier.

      I think that’s always the way it goes. The things that seem impossible to do with computers get labeled AI; then, once the problems are solved, we don’t figure we’ve created AI, just that we’ve solved that particular problem, so it doesn’t seem like as big a deal anymore.

      LLMs got hyped up, but I still think there’s a good chance they will just be a thing we use, and the AI goalposts will move again.

      • ℕ𝕖𝕞𝕠@midwest.social · 10 months ago

        I remember when I was in college, the big problems in AI were speech-to-text and image recognition. They were both solved within a few years.

    • Pipoca@lemmy.world · 10 months ago

      Exactly.

      AI, as a term, was coined in the mid-50s by a computer scientist, John McCarthy. Yes, that John McCarthy, the one who invented LISP and helped develop Algol 60.

      It’s been a marketing buzzword for generations, born out of the initial optimism that AI tasks would end up being pretty easy to figure out. AI has primarily referred to narrow AI for decades and decades.

    • Rikj000@discuss.tchncs.de (OP) · 10 months ago

      “But what do you call a robot that teaches itself how to walk”

      In its current state,
      I’d call it ML (Machine Learning).

      A human defines the desired outcome,
      and the technology “teaches itself” to reach that outcome in a brute-force fashion (through millions of failed attempts, slightly improving itself upon each epoch/iteration), until the outcome the human defined has been met.
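      For illustration, a minimal sketch of that brute-force loop (the reward function here is a made-up stand-in for “distance walked before falling over”, not anyone’s actual code):

      ```python
      import random

      def reward(gait):
          # Hypothetical stand-in for the human-defined desired outcome:
          # how well this set of parameters "walks".
          return -sum((p - 0.7) ** 2 for p in gait)

      gait = [random.uniform(-1, 1) for _ in range(4)]  # random starting gait
      best = reward(gait)

      for attempt in range(100_000):  # millions of failed attempts, in miniature
          candidate = [p + random.gauss(0, 0.05) for p in gait]  # small random tweak
          score = reward(candidate)
          if score > best:  # keep the tweak only if it moved closer to the goal
              gait, best = candidate, score
      ```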

        • rambaroo@lemmy.world · 10 months ago

          A baby isn’t just learning to walk. It also makes its own decisions constantly and has emotions. An LLM is not an intelligence, no matter how hard you try to argue that it is. Just because the term has been used for a long time doesn’t mean it’s ever been used correctly.

          It’s actually stunning to me that people are so hyped on LLM bullshit that they’re trying to argue it comes anywhere close to a sentient being.

          • Blueberrydreamer@lemmynsfw.com · 10 months ago

            You completely missed my point, obviously. I’m trying to get you to consider what “intelligence” actually means. Is intelligence the ability to learn? Make decisions? Have feelings? Outside of humans, what else possesses your definition of intelligence? Parrots? Mice? Spiders?

            I’m not comparing LLMs to human complexity, nor do I particularly give a shit about them in my daily life. I’m just trying to get you to actually examine your definition of intelligence, since you seem to be working from something more specific than what most of our society uses.

      • 0ops@lemm.ee · 10 months ago

        To be fair, I think we underestimate just how brute-force the development of our intelligence was. We as a species have been evolving since single-celled organisms, mutation by mutation over billions of years, and then as individuals our nervous systems have been collecting data from dozens of senses (including hormone receptors) 24/7 since the embryo stage. So before we were even born, we had some surface-level intuition for the laws of physics and the control of our bodies. The robot is essentially starting from square one. It didn’t get to practice kicking Mom in the liver for 9 months - we take that for granted, but it’s a transferable skill.

        Granted, this is not exactly analogous to how a neural network is trained, but I don’t think it’s wise to assume that there’s something “magic” in us like a “soul”, when the difference between biological and digital neural networks could be explained by our “richer” ways of interacting with the environment (a body with senses and mobility, rather than a token/image parser) and the need for a few more years/decades of incremental improvements to the models and hardware.

      • PrinceWith999Enemies@lemmy.world · 10 months ago

        So what do you call it when a newborn deer learns to walk? Is that “deer learning?”

        I’d like to hear more about your idea of a “desired outcome” and how it applies to a single celled organism or a goldfish.

    • Fedizen@lemmy.world · 10 months ago (edited)

      On the other hand, calculators can do things more quickly than humans, but this doesn’t mean they’re intelligent or even on the intelligence spectrum. They take an input and provide an output.

      The idea of applying intelligence to a calculator is kind of silly. This is why I still prefer words like “algorithms” to “AI”, as it’s not making a “decision”. It’s making a calculation; it’s just making it very fast, based on a model, and prompt-driven.

      Actual intelligence doesn’t just shut off the moment its prompted response ends - it keeps going.
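      To make that framing concrete, here’s a toy sketch (every name in it is invented; it’s not any real model’s API): a prompt goes in, a loop of calculations runs until a stop token, and then the whole thing simply halts.

      ```python
      def next_token(tokens):
          # Invented stand-in for one forward pass of a model:
          # a fixed calculation from input to output, nothing more.
          return "<eos>" if len(tokens) >= 8 else f"tok{len(tokens)}"

      def respond(prompt):
          tokens = prompt.split()
          while True:
              tok = next_token(tokens)  # one calculation per step...
              if tok == "<eos>":        # ...until the stop token, then it halts
                  break
              tokens.append(tok)
          return " ".join(tokens)

      print(respond("hello there"))  # once this returns, nothing keeps running
      ```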

      • PrinceWith999Enemies@lemmy.world · 10 months ago

        I think we’re misaligned on two things. First, I’m not saying doing something quicker than a human can is what constitutes “intelligence.” There’s an uncountable number of things that can do some function faster than a human brain, including components of human physiology.

        My point is that intelligence as I define it involves adaptation for problem solving on the part of a complex system in a complex environment. The speed isn’t really relevant, although it’s obviously an important factor in artificial intelligence, which has practical and economic incentives.

        So I again return to my question of whether we consider a dog or a dolphin to be “intelligent,” or whether only humans are intelligent. If it’s the latter, then we need to be much more specific than I’ve been in my definition.

        • Fedizen@lemmy.world · 10 months ago

          What I’m saying is that current computer “AI” isn’t on the spectrum of intelligence, while a dog or a grasshopper is.

          • PrinceWith999Enemies@lemmy.world · 10 months ago

            Got it. As someone who has developed computational models of complex biological systems, I’d like to know specifically what you believe the differences to be.

            • Fedizen@lemmy.world · 10 months ago

              It’s the ‘why’. A robot will only teach itself to walk because a human predefined that outcome. A human learning to walk is maybe not even intelligence - motor functions operate in a separate area of the brain from executive function, and I’d argue that defining tasks to accomplish and weighing risks is the intelligent part. Humans do all of that for the robot.

              Everything we call “AI” now should be called “EI”, or “extended intelligence”, because humans are defining both the goals and the resources in play to achieve them. Intelligence requires a degree of autonomy.

              • PrinceWith999Enemies@lemmy.world · 10 months ago

                Okay, I think I understand where we disagree. There isn’t a “why” either in biology or in the types of AI I’m talking about. In a more removed sense, a CS team at MIT said “I want this robot to walk. Let’s try letting it learn by sensor feedback” whereas in the biological case we have systems that say “Everyone who can’t walk will die, so use sensor feedback.”

                But going further - do you think a gazelle isn’t weighing risks while grazing? Do you think the complex behaviors of an ant colony aren’t weighing risks when deciding to migrate or to send off additional colonies? They’re indistinguishable mathematically - it’s just that one is learning evolutionarily and the other, at least in principle, is able to learn within its own lifetime.

                Is the goal of reproductive survival not externally imposed? I can’t think of any example of something more externally imposed, in all honesty. I as a computer scientist might want to write a chatbot that can carry on a conversation, but I, as a human, also need to learn how to carry on a conversation. Can we honestly say that the latter is self-directed when all of society is dictating how and why it needs to occur?

                Things like risk assessment are already well characterized mathematically. The adaptive processes we write to learn and adapt to these environmental factors are directly analogous to what’s happening in neurons and genes. I’m really just not seeing the distinction.
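                For what it’s worth, the textbook form of that risk weighing is a probability-weighted payoff; here’s a tiny sketch with invented numbers, just to echo the grazing example:

                ```python
                def expected_utility(outcomes):
                    # outcomes: list of (probability, payoff) pairs
                    return sum(p * payoff for p, payoff in outcomes)

                # Invented numbers for a graze-vs-flee choice:
                keep_grazing = expected_utility([(0.95, 10), (0.05, -1000)])  # rare predator, fatal
                flee_now = expected_utility([(1.00, -5)])                     # guaranteed lost feeding time

                print(keep_grazing, flee_now)  # -40.5 vs -5.0: fleeing scores higher
                ```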

      • 0ops@lemm.ee · 10 months ago

        I personally wouldn’t consider a neural network an algorithm, as chance is a huge factor: whether you’re training or evaluating, you’ll never get quite the same results twice.
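        As a tiny illustration (a one-weight “network”, invented for this comment): two training runs that differ only in their random seed converge near the target but never land on identical weights.

        ```python
        import random

        def train_tiny_net(seed):
            rng = random.Random(seed)
            w = rng.uniform(-1, 1)     # random initialisation
            for _ in range(100):
                x = rng.uniform(0, 1)  # randomly sampled training example
                err = w * x - 2 * x    # target function is y = 2x
                w -= 0.1 * err * x     # gradient step on the squared error
            return w

        print(train_tiny_net(1), train_tiny_net(2))  # close to 2, but not identical
        ```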