Is there any way to make it use less power as it gets more advanced, or will there be huge power plants dedicated to AI all over the world soon?

  • vrighter@discuss.tchncs.de · 2 months ago

    Imagine that to type one letter, you had to manually read through every Unicode code point several thousand times. When you’re done, you select one letter to type.

    Then you start rereading all the Unicode code points, again thousands of times, for the next letter.

    That’s how LLMs work. When they say 175 billion parameters, it means at least that many calculations per token generated.
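
    A rough back-of-envelope sketch of that cost. The 2-FLOPs-per-parameter-per-token figure is a common rule of thumb for dense transformer inference (one multiply plus one add per weight), not an exact measurement:

    ```python
    # Back-of-envelope compute per generated token for a dense transformer.
    # Assumption: ~2 floating-point operations per parameter per token,
    # a common rule of thumb, not an exact figure.

    params = 175e9                    # 175 billion parameters
    flops_per_token = 2 * params

    print(f"~{flops_per_token:.2e} FLOPs per token")        # ~3.50e+11

    # And that repeats for every token, so a 500-token reply costs:
    tokens = 500
    print(f"~{flops_per_token * tokens:.2e} FLOPs total")   # ~1.75e+14
    ```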

    • hisao@ani.social · 2 months ago

      That’s how LLMs work. When they say 175 billion parameters, it means at least that many calculations per token generated.

      I don’t get it: how is it possible that so many people all over the world use this concurrently, doing all kinds of lengthy chats, problem solving, code generation, image generation, and so on?

      • vrighter@discuss.tchncs.de · 2 months ago

        That’s why they need huge datacenters and thousands of GPUs. And, pretty soon, dedicated power plants. It is insane just how wasteful this all is.
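
        To put a number on the power side, here is a hedged sketch. Every figure is an illustrative assumption, not a measured one: an H100 can draw up to roughly 700 W, and the 8-GPU / 1000-tokens-per-second aggregate throughput is just a plausible serving setup for the arithmetic:

        ```python
        # Hedged energy-per-token sketch. All numbers are assumptions:
        # an H100 draws up to ~700 W, and we posit a serving setup of
        # 8 GPUs producing 1000 tokens/s in aggregate (batched users).

        gpu_power_w = 700
        num_gpus = 8
        tokens_per_sec = 1000            # assumed aggregate throughput

        joules_per_token = gpu_power_w * num_gpus / tokens_per_sec
        print(f"~{joules_per_token:.1f} J per token")          # ~5.6 J

        # Scaled up to serving a billion tokens per day:
        daily_kwh = joules_per_token * 1e9 / 3.6e6
        print(f"~{daily_kwh:.0f} kWh per billion tokens")      # ~1556 kWh
        ```

        That is GPU draw only; real datacenters also pay for cooling, networking, and idle capacity on top.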

        • hisao@ani.social · 2 months ago

          So do they load all those matrices (totalling 175B params in this case) onto the available GPUs for every token of every user?

          • vrighter@discuss.tchncs.de · 2 months ago (edited)

            Yep. You could of course swap weights in and out, but that would slow things down to a crawl. So they get lots of VRAM (edit: for example, an H100 has 80 GB of VRAM).
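
            A quick sketch of why one GPU isn’t enough. The bytes-per-parameter figure depends on the numeric format the weights are stored in (fp16/bf16 = 2 bytes, int8 = 1, 4-bit quantization ≈ 0.5); fp16 is assumed here:

            ```python
            import math

            # Rough VRAM needed to keep a 175B-parameter model resident
            # on GPUs. Ignores activations and KV cache, which need
            # additional memory on top of the weights.

            params = 175e9
            bytes_per_param = 2              # assume fp16/bf16 weights
            total_gb = params * bytes_per_param / 1e9

            h100_vram_gb = 80
            gpus_needed = math.ceil(total_gb / h100_vram_gb)

            print(f"{total_gb:.0f} GB of weights "
                  f"-> at least {gpus_needed} H100s")
            # 350 GB -> at least 5 H100s, for the weights alone
            ```

            So the weights stay loaded across several GPUs, and every user’s tokens are batched through that same resident copy rather than reloading it per request.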