Is there any way to make it use less as it gets more advanced, or will there be huge power plants just dedicated to AI all over the world soon?

  • hisao@ani.social
    2 months ago

    So do they load all those matrices (totalling to 175b params in this case) to available GPUs for every token of every user?

    • vrighter@discuss.tchncs.de
      2 months ago

yep. You could of course swap weights in and out, but that would slow things down to a crawl, so they use GPUs with lots of VRAM (edit: an H100, for example, has 80GB of VRAM).
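The arithmetic behind "lots of VRAM" can be sketched roughly. A minimal back-of-the-envelope estimate, assuming fp16 weights (2 bytes per parameter) and ignoring the extra memory needed for activations and the KV cache:

```python
import math

# Rough estimate of how many GPUs are needed just to hold a model's
# weights resident in VRAM (illustrative only; real deployments also
# budget memory for activations, KV cache, and framework overhead).
def gpus_needed(n_params: float, bytes_per_param: int, vram_per_gpu_gb: int) -> int:
    weight_gb = n_params * bytes_per_param / 1e9
    return math.ceil(weight_gb / vram_per_gpu_gb)

# 175B params at 2 bytes each ~ 350 GB of weights alone,
# so at least 5 H100s (80 GB VRAM each) before any other overhead.
print(gpus_needed(175e9, 2, 80))  # → 5
```

This is why serving clusters keep the full model loaded across several GPUs rather than swapping weights from system RAM per token.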