Is there any way to make it use less energy as it gets more advanced, or will there be huge power plants just dedicated to AI all over the world soon?
I don’t get it. How is it possible that so many people all over the world use this concurrently, doing all kinds of lengthy chats, problem solving, code generation, image generation, and so on?
That’s why they need huge datacenters and thousands of GPUs. Requests from many users get batched together, so one forward pass through the weights serves a whole batch of prompts at once, but even then the scale required is enormous (see the rough sketch below). And, pretty soon, dedicated power plants. It is insane just how wasteful this all is.
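A back-of-envelope sketch of that scale in Python; every number below is an illustrative assumption, not a published figure:

```python
# Rough sketch of why serving a huge user base takes thousands of GPUs.
# All numbers are made-up assumptions for illustration only.

concurrent_requests = 1_000_000   # assumed: users mid-generation at any moment
batch_size = 64                   # assumed: requests batched per model replica
gpus_per_replica = 8              # assumed: GPUs needed to hold one model copy

replicas_needed = concurrent_requests / batch_size
gpus_needed = replicas_needed * gpus_per_replica

print(f"model replicas: {replicas_needed:,.0f}")  # ~15,625 replicas
print(f"GPUs: {gpus_needed:,.0f}")                # ~125,000 GPUs
```

Batching is the key trick: the weights are read from VRAM once per forward pass regardless of whether the batch holds 1 prompt or 64, so bigger batches amortize that cost across more users.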
So do they load all of those matrices (totalling 175B params in this case) onto the available GPUs for every token of every user?
Yep. You could of course swap weights in and out, but that would slow things down to a crawl. So they get lots of VRAM and keep the weights resident (edit: for example, an H100 has 80 GB of VRAM). The memory math is sketched below.
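A quick sketch of that memory math, assuming fp16 weights (2 bytes per parameter) and ignoring the KV cache and activations:

```python
# How much VRAM do 175B parameters need, and how many H100s is that?
# Assumes fp16/bf16 weights (2 bytes each); KV cache and activations
# would add more on top.

params = 175e9
bytes_per_param = 2     # fp16/bf16
h100_vram = 80e9        # one H100 has 80 GB of VRAM

weights_bytes = params * bytes_per_param
print(f"weights alone: {weights_bytes / 1e9:.0f} GB")               # 350 GB
print(f"H100s just for weights: {weights_bytes / h100_vram:.1f}")   # ~4.4

# In practice the model is sharded across e.g. 8 GPUs (tensor
# parallelism), so the full set of weights stays resident in VRAM
# and nothing has to be swapped per token.
```

So rather than reloading weights per token, the weights sit in VRAM across several GPUs permanently, and every generated token is one pass through them.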