I’ve been looking into self-hosting LLMs or Stable Diffusion models using something like LocalAI, Ollama, and/or LibreChat.

Some questions to get a nice discussion going:

  • Any of you have experience with this?
  • What are your motivations?
  • What are you using in terms of hardware?
  • Considerations regarding energy efficiency and associated costs?
  • What about renting a GPU? Privacy implications?
  • rufus@discuss.tchncs.de · 6 months ago

    Quite a few AI questions have come up in selfhosted over the last few days…

    Here are some more communities I’m subscribed to:

    And a few inactive ones on lemmy.intai.tech

    I’m using koboldcpp and ollama. KoboldCpp is really awesome. In terms of hardware, it’s an old PC with lots of RAM but no graphics card, so it’s quite slow for me. I occasionally rent a cloud GPU instance on runpod.io. I’m not doing anything fancy: mainly role play and recreational stuff, and I occasionally ask it for creative ideas, a translation, or to re-word or draft an unimportant text/email.

    I’ve tried coding, summarizing, and other tasks, but the performance of current AI isn’t good enough for my everyday work.
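    For anyone wanting to try the same CPU-only setup, getting started with ollama is just a couple of commands. A rough sketch (the model name is just an example; pick something small enough to fit in your RAM, since everything runs on CPU without a graphics card):

    ```shell
    # Install ollama (Linux; see ollama.com for other platforms)
    curl -fsSL https://ollama.com/install.sh | sh

    # Pull a small model — smaller models are much more usable without a GPU
    ollama pull llama3.2:3b

    # Chat interactively, or pass a one-shot prompt
    ollama run llama3.2:3b "Re-word this email to sound more formal: ..."
    ```

    KoboldCpp works similarly but is a single self-contained binary with a web UI, which is part of why I like it for role play.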

    • Unforeseen@sh.itjust.works · 6 months ago

      Thanks for the post, and I super appreciate you posting the other communities. I think this is a great way to grow Lemmy and create discoverability for niche communities. I’ll keep that in mind myself for future opportunities.