

I mean, a mechanical timer costs, like, 3 bucks in any currency and lets you set charge and discharge cycles. Add 10 bucks and you have one that you can pilot via REST API.


Don’t rip out the battery, that’s free UPS!


You seem to be obsessed with optimising one resource at the expense of others.
If you want to push it and paint me as obsessed about something, then let it be this: providing this community with on-topic and reasonable advice.
you’re only going to save a few MB of RAM.
This is false, and you should read once again my previous message illustrating why: on a decent “self-host”-friendly machine, the same software may work very well, or not at all, depending on whether the user engages with some very basic configuration. This goes beyond RAM (memory isn’t the sole shared resource), and I’m adamant that the alternative (which went from “pretending that the problem doesn’t exist” to “throwing money at the problem”) is unreasonable.
On the other hand, if you’re doing it to learn more about computers then it might be worthwhile. This is a community of hobbyists, after all…
Or more importantly: the extent to which you can self-host out of sheer luck and ignorance like you suggest is very limited. If you don’t want to engage with a minimum amount of configuration, you might bump into security issues (a much broader and complex subject) long before any of the above has a material impact.


I’m saying this based on real world experience
And do you think I would spend my time engaging if it didn’t come from my own, very “real world”, experience of lessons learned the hard way?
Bringing up “diminishing returns”, as if this were an optimisation game, also doesn’t do this justice. Take the typical “household FOSS package”, with software names often brought up in here: a nextcloud instance, a photo-sharing service like immich, private instant messaging, a software forge, a subsonic-compatible audio/video streaming server, a couple of php websites like wallabag and RSS aggregators.
An Intel Atom CPU and 4GB of RAM is plenty sufficient for all that, and will cost you single-digit USD a month, granted you put in the (one-time) effort to tune and balance those services. Were you to run all of the above straight from upstream’s docker files instead, I can guarantee that you would deem this (otherwise perfectly fine) server underpowered for the task at hand (and would probably go for a 10th-gen or so Intel Core CPU, quadruple the RAM and 3-6× the energy cost in the process).
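To make “tune and balance” a bit more concrete, here is a minimal sketch of the kind of per-service caps you would set on such a box instead of letting every upstream compose file assume it owns the whole machine. The service names, images and values are purely illustrative (and the `mem_limit`/`cpus` keys assume a recent Docker Compose), not a recommendation for your hardware:

```yaml
# Illustrative caps for a hypothetical Atom/4GB box running several services
# side by side; the exact numbers depend on your hardware and workload.
services:
  postgres:              # one shared DB instance instead of one per app
    image: postgres:16
    mem_limit: 512m      # don't let the DB balloon at the expense of everything else
    cpus: 1.0
  immich-server:
    image: ghcr.io/immich-app/immich-server
    mem_limit: 1g        # the photo service gets the lion's share
    cpus: 2.0
  freshrss:
    image: freshrss/freshrss
    mem_limit: 256m      # a small php web app doesn't need more than this here
    cpus: 0.5
```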
And that’s the point I’m making here: a self-hosting community of tinkerers should (ideally) know better, if only for the ethics of keeping the process environmentally friendly and of not wasting other people’s money.


Do you have the data to back that up?
I mean, you are the one making the exceptional claim that unnecessarily running multiple instances of programs on a device with finite resources has no practical adverse effect. Of course, the effects can be more or less drastic depending on the many variables at play (hardware, software, memory pressure, thread starvation, cache misses, …) and can indeed be negligible in some lucky circumstances. The point is that you don’t get to call that shot, and especially not by burying your head in the sand and pretending it’s never gonna be a problem.
Effective use of computing resources requires tuning. Introducing a new service creates imbalance. Ensuring that the server performs nominally and predictably for all intended services is a balancing act and a sysadmin’s job. Services whose deployment settings are set by someone with no prior knowledge of the deployment constraints can’t be trusted to do a good job of it (that’s the nature of the physical world we live in, not my opinion), and promoting this attitude promotes the kind of wasteful and irresponsible computing I was on about.
Now, I’ll give you the link to this basic helper for tuning a PostgreSQL server: https://pgtune.leopard.in.ua/
Will you tell me what the correct inputs are for my homelab (I won’t tell you the hardware, the set-up, the other services running on it, the state of the system, etc.)?
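To illustrate why those inputs matter, here is roughly what the usual rules of thumb (the kind of heuristics pgtune encodes, e.g. shared_buffers around 25% of RAM, effective_cache_size around 75%) would suggest for a 2-core, 4GB machine dedicated to PostgreSQL, passed as flags to the container. The values are illustrative, and that’s exactly the point: feed pgtune the honest numbers for a box shared with a handful of other services and every one of them changes:

```yaml
# Illustrative only: ballpark settings for a machine that runs *nothing but*
# PostgreSQL with 4GB RAM and 2 cores. On a shared homelab box these would
# all be smaller (and some workloads would want different trade-offs).
services:
  postgres:
    image: postgres:16
    command:
      - postgres
      - -c
      - shared_buffers=1GB         # ~25% of RAM, assuming Postgres has the box to itself
      - -c
      - effective_cache_size=3GB   # assumes the OS page cache isn't contended by other apps
      - -c
      - max_connections=100        # more connections means less work_mem each
      - -c
      - work_mem=8MB               # per sort/hash, per connection: it multiplies quickly
```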
And later, when you distribute your successful container to millions of users, what will you tell the angry ones who complain that your software is slow, through no fault of your coding, because they happen to pile up multiple DBs, web servers, application servers, reverse proxies, … on their banana SoCs?


I disagree. You are just entertaining the idea that servers must always and forever be oversized, which is the very definition of wasteful (and environmentally irresponsible). Unless you are firing up and throwing away services constantly, nothing justifies this, nor sparing yourself the relatively low effort it takes to deploy your infrastructure knowingly.


Precisely what pre-DevOps sysadmins were saying when containers were becoming trendy. You are just pushing the complexity elsewhere and creating novel classes of problems for yourself (keeping your BoM minimal and under control is one of the many disciplines that got thrown away in the process).


Self-hosting doesn’t mean “being wasteful and letting containers duplicate services”. I want to know which DB application X is using, so I can pool it for applications Y and Z.
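As a sketch of what that pooling can look like with containers: one PostgreSQL instance, and one database plus one role per application. The app images, credentials and `DB_URL` variable are made up for illustration (each application has its own way of being pointed at a database); the init-script mechanism, however, is the documented behaviour of the official postgres image (any *.sql file in /docker-entrypoint-initdb.d runs on first start):

```yaml
# One shared PostgreSQL instance; each app gets its own database and role.
# Names, images and credentials are illustrative.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me          # admin password, not used by the apps
    volumes:
      # runs once at first initialisation; contains the usual
      # CREATE ROLE appx ...; CREATE DATABASE appx OWNER appx; (same for appy)
      - ./init-apps.sql:/docker-entrypoint-initdb.d/10-apps.sql:ro
  app_x:
    image: example/app-x                    # hypothetical application image
    environment:
      DB_URL: postgres://appx:secret@db:5432/appx   # env var name depends on the app
  app_y:
    image: example/app-y
    environment:
      DB_URL: postgres://appy:secret@db:5432/appy
```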
As someone who’s been using ttrss for decades but would be open to trying something new, what would you say is FreshRSS’ killer feature (and missing killer feature) compared to ttrss?
(Not trying to start a flame war, ttrss feels like a finished project, which is not a bad thing, but I think it’s healthy to wish for more innovation in this space)


One of them, at a 50% premium, with a worse finish and a less robust build


Not in the way of ThinkPads being one of the last bastions of laptop durability and upgradeability, though


Telegram never was private, and group chats never were encrypted (that’s not an opinion: the feature simply doesn’t exist). If anything, they are just removing their false and deceptive claims. That those claims remained up for so long is something I can’t wrap my head around.


I’d argue XMPP is less ideal than Matrix because groups are located on a single server, which makes them easier to take down than Matrix’ replicated state.
That is true, but it’s never been a problem in my relatively long experience with XMPP: some server software can be deployed as a distributed cluster, making it highly available (basically, the whole of WhatsApp runs on a fork of ejabberd), and the comparatively tiny resource usage of XMPP contributes to its stability.
XMPP does have a spec for F-MUC (distributed rooms somewhat like Matrix’s, and many years before Matrix), and my rationale as to why it never picked up, despite a whole decade of “competition” from Matrix, is that it’s a problem that just doesn’t need solving. The price to pay for it is hefty: Matrix’s resource usage (bandwidth, CPU, RAM) is insane, its protocol complexity makes it a de facto single-vendor implementation (which is risky on very practical grounds), and it’s not even bulletproof for the niche use-case it set out to tackle: in the end, your identity server on Matrix remains centralized.
You can tell that I’m partial to XMPP, but that’s only after having been a service operator for years, with my original expectations largely favouring Matrix.


I think you should give Trilium(Next) Notes a try:
it has the hierarchical notes structure that you are familiar with in obsidian
it has better ways of keeping things organized (attributes can be values or references, and can be shared and inherited, which provides a flexible framework for having note “types” as templates that can be extended, e.g. people vs. colleagues, businesses vs. companies, etc.)
it has the concept of note hoisting (which lets you focus on a note and its sub-notes, so other projects/spaces don’t get in the way of autocomplete and placing references), and workspaces, which build further on top of that
it can be used standalone (local client/offline-only, like obsidian), but coupling it with a remote server opens up more interesting use-cases (syncing, sharing notes with others via public URLs, one-user/multi-client editing), which gives you the best of both worlds (local-first/online-first) and lets you access your personal notes on devices you don’t necessarily own (which obsidian doesn’t). The mobile app story isn’t great (it’s a PWA with limited offline capabilities at the moment), but it isn’t worse than the alternatives either (I can’t really work and think long-form on a handheld, no matter the editor experience, but perhaps that’s just me).


You need to list out your requirements. What do you want to do? Where do you need your data? Do you care about open source? Self-hosting? Do you have an idea how your content will be organized? Will you ever need to tap into it as data? Etc


Have you tried trilium notes? Not as hyped and polished, but does extraordinarily well IME.


I didn’t like obsidian’s lack of attribute structuring/typing, and the fact that it cannot serve a web UI (for wherever you cannot install the heavy client, or just to share notes via URL), and found trilium notes to do all that perfectly, and much, much more. Highly recommend.


You can host (tens? of) thousands of XMPP sessions on an RPi at the back of your router, or in a field hooked to a PV panel and a SIM card, with none of “the wealthy” knowing or caring about it, though. The difference with Signal is that everyone can do that, and everyone doing it expands the network and makes it more resilient, for the benefit of all.


How it works (to simplify) is that they gave up on Matrix clients ever becoming performant and well-behaved on handheld devices (because of the absurd complexity of the protocol) and, instead of doing something about that, decided to shift the client logic onto the server and castrate the clients (especially for offline features). It’s also good short-term business, because it makes hosting Matrix even more cumbersome and expensive, giving the type of mid-scale/corporate deployments previously on the fence about their self-hosting costs (due to poor design and scalability) a compelling reason to just pay Element for it (while probably contemplating an alternative future).
Prosody is a great piece of software, and so is ejabberd, which offers some extra perks. I can’t speak for the other servers (mongooseim, openfire, tigase, …), which I haven’t tried in a long time.
All that’s to say that it’s amazing that we get so many well-maintained and compatible server (and client) implementations in XMPP-land, and all that this implies for its healthy future.