Guide to Self-Hosting LLMs with Ollama:
- Download and run Ollama
- Open a terminal, type
ollama run llama3.2
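That gives you an interactive chat in the terminal. Ollama also serves a local HTTP API (port 11434 by default) if you’d rather script against it - for example:

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'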
If it’s an M1, you def can, and it will work great with Ollama.
Shoutout to Magic Earth, the (weirdly named) iOS app that uses OpenStreetMap data. Works on CarPlay, has reliable routing, and I get a buzz out of updating a changed speed limit or something on OSM and then seeing the change implemented a few weeks later when I’m driving through there again.
My step up from the Pi was to eBay HP 800 G1 minis, then G2s. They are really well made, there are full repair manuals available, and they are just a pleasure to swap bits in and out of. I’ve heard good things about the 1-litre Lenovos and expect similar build quality.
I agree that RAM is the likely constraint rather than the processor for self-hosting workloads - particularly in my case, as I’m on Proxmox and run all my Docker containers in separate LXCs. I run 32GB in the G2s, which was a straightforward upgrade (they take laptop-style SODIMM memory). On some of them I’ve upgraded the SSDs; on the others I’ve added M.2 NVMe drives (the G2s have a slot for them).
Wish by Peter Goldsworthy. J.J. has always been more at home in Sign language than in spoken English. Recently divorced, he returns to school to teach Sign. His pupils include the foster parents of a beautiful and highly intelligent ape named Eliza.
Greta Tintin Thunberg
I run two local physical servers, one production and one dev (and a third prod2 kept in case of a prod1 failure), and two remote production/backup servers all running Proxmox, and two VPSs. Most apps are dockerised inside LXC containers (on Proxmox) or just docker on Ubuntu (VPSs). Each of the three locations runs a Synology NAS in addition to the server.
Backups run automatically, and I manually run apt updates on everything each weekend with a single Ansible playbook. Every host runs a little Go program that exposes memory and disk usage percentages as a JSON endpoint, and I use two instances of Uptime Kuma (one local, one on fly.io) to monitor all of those with keyword checks.
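To give a flavour of the monitoring side (the hostname, port, path, and field names below are made-up examples rather than my actual ones), each host serves something like:

curl http://prod1.lan:9100/status
{"mem_pct": 41, "disk_pct": 67}

and each Uptime Kuma instance points an “HTTP(s) - Keyword” monitor at that URL, so an alert fires if the response stops arriving or stops containing the expected keyword.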
So - I’m on board with original punctuation going inside the quote, but then, to be consistent, capitalization has to go inside as well. So instead of “This comment…” it should be “this comment…”, since in the original quote that was just a clause separated by a comma, not its own sentence.
My ‘good reason’ is just that it’s super convenient - for backups and painlessly moving apps around between nodes with all their data.
I would run plain LXCs if people nicely packaged up their web apps as LXC templates and made them available on LXCHub for me to run with lxc compose up, but they generally don’t.
I guess another alternate future would be if Proxmox added docker container supervision to their web interface, but you’re still not going to have the self-contained neat snapshot system that includes the data.
In theory you should be able to convert an OCI container image, layer by layer, into an LXC, so I bet there are projects out there that attempt this.
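A crude manual version of the idea, assuming a Proxmox host (the image name, VMID, and storage names are just examples): flatten the image with docker export and hand the resulting rootfs tarball to pct create.

# flatten an OCI image to a rootfs tarball (this squashes the layers rather than converting them one by one)
docker create --name tmp nginx:latest
docker export tmp | gzip > /var/lib/vz/template/cache/nginx-rootfs.tar.gz
docker rm tmp
# create an LXC from that tarball as if it were a template
pct create 200 local:vztmpl/nginx-rootfs.tar.gz --hostname nginx-lxc --rootfs local-lvm:4

The catch is that the image’s ENTRYPOINT/CMD don’t come across, so you’d still need to wire the service up to an init yourself - presumably the part those projects automate.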
No answer, but just to say I run most of my services with this setup - Docker in a Debian LXC under Proxmox - and don’t have this issue. The containers are ‘privileged’ and I have ‘nesting’ ticked on, but apart from that it’s all defaults.
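For reference, assuming your container’s ID is 101, nesting can also be switched on from the Proxmox host’s shell:

pct set 101 --features nesting=1

which shows up as features: nesting=1 in /etc/pve/lxc/101.conf. (Unprivileged containers running Docker often want keyctl=1 too, but mine are privileged as above.)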
There are a heap of general “Linux Administration” courses which will patch a lot of holes in the knowledge of almost all self-taught self-hosters. I’d been using Linux for a while but didn’t know you could tab to complete file names in commands till I learned it on Udemy ¯\_(ツ)_/¯
#2 back and sides, finger length on top
The two extremes:
I routinely run my homelab services as a single Docker inside an LXC - they are quicker, and it makes backups and moving them around trivial. However, while you’re learning, a VM (with something conventional like Debian or Ubuntu) is probably advised - it’s a more common experience so you’ll get more helpful advice when you ask a question like this.
“how to access the NAS and HA separately from the outside knowing that my access provider does not offer a static IP and that access to each VM must be differentiated from Proxmox.”
Tailscale - it will take about five minutes to set up and costs nothing.
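On each machine you want to reach (the VMs, the NAS - Synology has a Tailscale package), it’s essentially:

curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

After that every device gets a stable tailnet IP/name reachable from anywhere, so the lack of a static IP stops mattering, and you can control access per device instead of exposing anything publicly.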
Yes, in a shallow tourist mine in Australia. Apparently coal starts to flake easily once it’s been exposed to air for a bit, so they kept a big chunk in a large jar of water that you could take out and handle. It felt like a light wet rock.
The sample, and the coal at the workface of the mine, was stereotypically black. We wore hats with lights on, and when we emerged back out into the daylight I had an overwhelming urge to speak in a Monty Python-type Yorkshire accent, go home, and have my back scrubbed clean of the coal dust by my swarthy, tired-looking wife while I sat in a tub in front of the fire in the kitchen and our urchins played in the street.
I don’t want to give the impression I’m a big fossil fuel tourist, but I’ve also seen blobs of crude oil on beaches near Mediterranean sea oil terminals.
Sadly, I didn’t try to set fire to them on either of these occasions, which I now regret.
I’m also on Silverbullet, and from OP’s description it sounds like it could be a good fit. I don’t use any of the fancy template stuff - just a bunch of md files in a directory with links between them.
Your workload (a NAS and a handful of services) is going to be a very familiar one to members of the community, so you should get some great answers.
My (I guess slightly wacky) solution for this sort of workload has ended up being a single Docker container inside an LXC for each service on Proxmox - Docker for ease of management with compose, and separate LXCs per service for easy snapshots/backups.
Obviously there’s some overhead, but it doesn’t seem to be significant.
On the subject of clustering, I actually purchased three machines to do this, but have ended up abandoning that idea - I can move a service (or restore it from a snapshot to a different machine) in a couple of minutes which provides all the redundancy I need for a home service. Now I keep the three machines as a production server, a backup (that I swap over to for a week or so every month or two) and a development machine. The NAS is separate to these.
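For anyone curious, the move is basically a vzdump backup and a restore on the other node - something like this, with a made-up VMID and filename:

vzdump 101 --mode snapshot --compress zstd --storage local
# copy the dump across (or use shared backup storage), then on the target node:
pct restore 101 /var/lib/vz/dump/vzdump-lxc-101-<timestamp>.tar.zst --storage local-lvm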
I love Proxmox, but most times it gets mentioned here people pop up to boost Incus/LXD, so that’s something I’d like to investigate - but my skills (and Ansible playbooks) are currently built around Proxmox, so I’ve got a bit of inertia.
Is that a mini? I love those little 1L HPs. I run three 800 G2s. They are very nicely built and therefore a joy to work on, and they sip power when idling. Highly recommend. Also +1 for Proxmox.
Two good points here, OP. Type
docker image ls
to see all the images you currently have locally - you’ll possibly be surprised how many. All the ones tagged <none> are old versions. If you’re already using GitHub, it includes a package registry (ghcr.io) you could push retagged images to; or, for something more self-hosty, a local instance of Forgejo would be a good option.
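For the cleanup, and a sketch of the retag-and-push route to GitHub’s registry (ghcr.io) - the image name and username here are placeholders:

# remove dangling <none> images
docker image prune
# retag a pinned version and push it to your own namespace
docker login ghcr.io          # authenticates with a GitHub personal access token
docker tag nginx:1.27 ghcr.io/yourname/nginx:1.27
docker push ghcr.io/yourname/nginx:1.27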