The number of containers I’m running on my server keeps increasing, and I want to make sure I’m not pushing it beyond its capabilities. I would like a simple interface accessible on my home network (that does not make any fishy connections out) that shows me CPU and RAM-usage, storage status of my hard drives, and network usage. It should be FOSS, and I want to run it as a Docker container.
Is Grafana the way to go, or are there other options I should consider?
If your setup is serious, I wouldn’t waste time on anything other than Grafana, because you will always want more: log aggregation, log queries, alerts, tracing, profiling, OIDC, S3 buckets, more and more dashboards. It’s addictive. Why waste time redoing it in the future?
Possibly a bit overkill, but I’m running Zabbix in 3 containers (Core, WebUI, database). Using its agent installed on all my machines, I can monitor basically anything. Of course, you can set limits, alerts, draw graphs, etc.
Cockpit is a simpler choice for that.
Glances could be a good option. It’s pretty bare bones, but it covers the basics and serves my needs well enough.
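Since you want it as a Docker container: a minimal sketch, assuming the official `nicolargo/glances` image and its documented web mode (`GLANCES_OPT="-w"` serves a UI on port 61208); the read-only Docker socket mount is only there so it can show per-container stats:

```shell
# Sketch: Glances web UI in a container (assumes the nicolargo/glances image).
# -e GLANCES_OPT="-w" starts the built-in web server on port 61208.
docker run -d --name glances \
  --restart unless-stopped \
  --pid host \
  -p 61208:61208 \
  -e GLANCES_OPT="-w" \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  nicolargo/glances:latest
```

It only listens on the port you publish, so binding it to your LAN interface keeps it off the internet.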
Thanks for the recommendation, I’ll check that out in more detail!
The last time I used Glances - to be fair, some years ago - it was itself the main consumer of CPU on my Raspberry Pi 3. However, it looks like that has been fixed recently.
it’s a bit more complicated than that. grafana only displays the collected info; you still need a database, and something that collects data from your systems.
what I do is grafana, plus prometheus for storage, plus the prometheus node exporter for collection.
but I’m not totally satisfied with this setup, because long-term storage is unsolved (cranking up the retention time in prometheus will make sure it costs a lot of storage after a few months), and I haven’t found a way to collect info about the top consumers of resources (e.g. the top 10 processes by cpu usage)

Ah, I see. What kind of disk usage are we talking about over e.g. one month? I am (at least for now) not necessarily interested in long-term storage (but the data hoarder in me might quickly change that).
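For a rough idea: Prometheus’s storage docs put the on-disk cost at roughly 1–2 bytes per sample after compression, so you can estimate from series count and scrape interval. The inputs below (about 1,500 active series for a single node exporter target, 15 s scrapes, 2 bytes per sample) are assumptions, not measurements:

```shell
# Back-of-the-envelope Prometheus disk estimate (all inputs are assumptions):
series=1500            # active series from one node exporter target
scrape_interval=15     # seconds between scrapes
bytes_per_sample=2     # pessimistic end of Prometheus's ~1-2 bytes/sample
seconds_per_month=$((30 * 24 * 3600))
samples=$((series * seconds_per_month / scrape_interval))
echo "$((samples * bytes_per_sample / 1024 / 1024)) MiB per month"
```

So on the order of half a GiB per month per monitored host under those assumptions; real usage depends on series churn and how many exporters you scrape.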
btw grafana does make connections out, at least when installing plugins, possibly more.
if you are not in the EU, they even load fucking fecesbook scripts on their main website! a few months ago that was happening in the EU too. if you’re in the EU, you can see it for yourself with a VPN or the Tor browser: request a new circuit until the exit node is in the USA or somewhere like that, then check the network traffic in the devtools (reload the site if you don’t see it there)
even if this is not the case in the EU (for now), there are no excuses for doing this. no, letting your website be handled solely by marketing heads is not an excuse.
For installing plugins, I am fine with it, but would not want any telemetry being sent somewhere without my knowledge. The data collected should stay on my server.
Munin feels a little old and crusty, but just works. Over 20 years old now.
Netdata is far simpler to set up than Grafana from what I remember, but it does phone home by default (you can disable that via options in Docker or something). On one of my servers it doesn’t show container names, which is kind of a bummer, but I didn’t care enough to troubleshoot that, since I mostly ssh in and use btop anyway …
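If the phoning home is the blocker: a hedged sketch of a Netdata container with its documented `DO_NOT_TRACK` opt-out for anonymous statistics, plus the read-only Docker socket mount its docs suggest for resolving container names (which may or may not fix the missing-names case above):

```shell
# Sketch: Netdata in Docker with telemetry opted out (assumes netdata/netdata image).
# DO_NOT_TRACK=1 disables anonymous statistics; the docker.sock mount lets the
# cgroups collector map cgroup IDs back to container names.
docker run -d --name netdata \
  -p 19999:19999 \
  -e DO_NOT_TRACK=1 \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v /proc:/host/proc:ro \
  -v /sys:/host/sys:ro \
  --cap-add SYS_PTRACE \
  netdata/netdata
```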
Seconded. Netdata has a generic and forgettable name but is powerful and easy to set up.
Open source, runs in docker or LXC. Web UI with more metrics than you will ever want, plus plugins. Support for alerts and some log aggregation, though I have not tried logging yet.
Webmin is worth giving a try depending on your needs; it’s pretty lightweight compared to some of the others.
I would personally use grafana, but zabbix is also a good choice.