• 1 Post
  • 52 Comments
Joined 2 years ago
Cake day: June 9th, 2023

  • You seem to be obsessed with optimising one resource at the expense of others.

    If you want to push it and paint me as obsessed with something, then let it be this: providing this community with on-topic and reasonable advice.

    you’re only going to save a few MB of RAM.

    This is false, and you should read my previous message again to see why: on a decent “self-host”-friendly machine, the same software may work very well or not at all, depending on whether the user engages with some very basic configuration. This goes beyond RAM (memory isn’t the only shared resource), and I’m adamant that the alternative (which went from “pretending that the problem doesn’t exist” to “throwing money at the problem”) is unreasonable.

    On the other hand, if you’re doing it to learn more about computers then it might be worthwhile. This is a community of hobbyists, after all…

    Or more importantly: the extent to which you can self-host out of sheer luck and ignorance, as you suggest, is very limited. If you don’t want to engage with a minimum amount of configuration, you might bump into security issues (a much broader and more complex subject) long before any of the above has a material impact.


  • I’m saying this based on real world experience

    And do you think I would spend my time engaging if this weren’t drawn from my own very “real world experience” of lessons learned the hard way?

    Bringing up “diminishing returns”, as if this were an optimisation game, doesn’t do it justice either. Take the typical “household FOSS package”, with software names often brought up in here: a Nextcloud instance, a photo-sharing service like Immich, private instant messaging, a software forge, a Subsonic-compatible audio/video streaming server, a couple of PHP websites like wallabag and RSS aggregators.

    An Intel Atom CPU and 4GB of RAM are plenty for all that, and will cost you single-digit USD a month, granted you put in the (one-time) effort to tune and balance those services. Were you to run all of the above from upstream’s Docker files instead, I can guarantee that you would deem this (otherwise perfectly fine) server underpowered for the task at hand (and would probably go for a 10th-gen or so Intel Core CPU, quadruple the RAM, and 3-6× the energy cost in the process).
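
    One way to picture that one-time effort is a back-of-the-envelope RAM budget. The per-service ceilings below are purely illustrative assumptions on my part (not measurements of the actual software), but the exercise of making every service fit an explicit cap inside 4GB is exactly the kind of balancing I mean:

    ```python
    #!/usr/bin/env python3
    # Back-of-the-envelope RAM budget for a small self-hosted stack.
    # All figures are illustrative placeholders, not measured values: the point
    # is that each service gets an explicit, tuned ceiling (worker counts, PHP
    # memory_limit, DB buffers, ...) instead of upstream Docker defaults.

    TOTAL_RAM_MB = 4096        # the hypothetical Atom box discussed above
    OS_AND_CACHE_MB = 768      # headroom for the OS and page cache

    budget_mb = {
        "nextcloud (php-fpm, few workers)": 512,
        "immich (heavy features trimmed)": 768,
        "instant messaging server": 64,
        "software forge": 512,
        "subsonic-compatible streamer": 256,
        "wallabag + RSS aggregator": 256,
        "one shared database for all of the above": 512,
        "reverse proxy": 64,
    }

    used = OS_AND_CACHE_MB + sum(budget_mb.values())
    for name, mb in budget_mb.items():
        print(f"{name:42s} {mb:5d} MB")
    print(f"{'total, incl. OS headroom':42s} {used:5d} MB of {TOTAL_RAM_MB} MB")
    assert used <= TOTAL_RAM_MB, "over budget: tune something down or drop a service"
    ```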

    And that’s the point I’m making here: a self-hosting community of tinkerers should (ideally) know better, if only for the sake of keeping the process environmentally friendly and of not wasting other people’s money.


  • Do you have the data to back that up?

    I mean, you are the one making the exceptional claim that unnecessarily running multiple instances of programs on a device with finite resources has no practical adverse effect. Of course, the effects can be more or less drastic depending on the many variables at play (hardware, software, memory pressure, thread starvation, cache misses, …) and can indeed be negligible in some lucky circumstances. The point is that you don’t get to call that shot, and especially not by burying your head in the sand and pretending it’s never gonna be a problem.

    Effective use of computing resources requires tuning. Introducing a new service creates imbalance. Ensuring that the server performs nominally and predictably for all intended services is a balancing act, and a sysadmin’s job. Services whose deployment settings were chosen by someone with no prior knowledge of the deployment constraints can’t be trusted to get this right (that’s the nature of the physical world we live in, not my opinion), and promoting that attitude promotes the kind of wasteful and irresponsible computing I was on about.

    Now, I’ll give you the link to this basic helper for tuning a PostgreSQL server: https://pgtune.leopard.in.ua/
    Will you tell me what the correct inputs are for my homelab (I won’t tell you the hardware, the set-up, the other services running on it, the state of the system, etc.)?
    And later, when you distribute your successful container to millions of users, what will you respond to the angry ones who complain that your software is slow, through no fault of your coding, because they happen to pile up multiple DBs, web servers, application servers, reverse proxies, … on their banana SoCs?
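
    For illustration, here is a rough sketch of the kind of heuristics such a helper applies (my own simplification, not pgtune’s actual formulas). Every output depends on inputs only the operator of the box can know, in particular how much of the machine PostgreSQL is allowed to claim next to its neighbours:

    ```python
    #!/usr/bin/env python3
    # Simplified PostgreSQL sizing heuristics, in the spirit of tools like pgtune.
    # Approximations for illustration only, not pgtune's exact output.

    def suggest_pg_settings(total_ram_mb: int, pg_share: float, max_connections: int) -> dict:
        """pg_share: fraction of the machine this instance may claim, i.e. the
        number that only whoever runs the box (and its other services) can know."""
        ram_for_pg = int(total_ram_mb * pg_share)
        shared_buffers = ram_for_pg // 4              # classic ~25% rule of thumb
        effective_cache_size = ram_for_pg * 3 // 4    # what the planner may assume is cached
        work_mem = max(4, (ram_for_pg - shared_buffers) // (max_connections * 3))
        return {
            "shared_buffers": f"{shared_buffers}MB",
            "effective_cache_size": f"{effective_cache_size}MB",
            "work_mem": f"{work_mem}MB",
            "max_connections": max_connections,
        }

    # The same 4GB machine, two very different answers depending on its other tenants:
    print(suggest_pg_settings(4096, pg_share=0.9, max_connections=100))  # dedicated DB box
    print(suggest_pg_settings(4096, pg_share=0.25, max_connections=40))  # one tenant among many
    ```

    The exact numbers don’t matter; what matters is that nobody shipping a generic container can fill in those inputs for you.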





  • As someone who’s been using ttrss for decades but would be open to trying something new, what would you say is FreshRSS’ killer feature (and missing killer feature) compared to ttrss?

    (Not trying to start a flame war, ttrss feels like a finished project, which is not a bad thing, but I think it’s healthy to wish for more innovation in this space)





  • I’d argue XMPP is less ideal than Matrix because groups are located on a single server, which makes them easier to take down than Matrix’ replicated state.

    That is true, but it’s never been a problem in my relatively long experience with XMPP: some server software can be used as a cluster and distributed, making it highly available (basically, the whole of WhatsApp runs on a fork of ejabberd), and the comparatively tiny resource usage of XMPP contributes to its stability.

    XMPP does have a spec for F-MUC (distributed rooms somewhat like Matrix’s, and many years before Matrix), and my rationale as to why it never picked up, despite a whole decade of “competition” from Matrix, is that it solves a problem which just doesn’t need solving. The price to pay for it is hefty: Matrix’s resource usage (bandwidth, CPU, RAM) is insane, its protocol complexity makes it effectively a single-vendor implementation (which is risky on very practical grounds), and it’s not even bulletproof for the niche use-case it set out to tackle: in the end, your identity server on Matrix remains centralized.

    You can tell that I’m partial to XMPP, but that’s only after having been a service operator for years, with my original expectations largely favouring Matrix.


  • I think you should give Trilium(Next) Notes a try:

    • it has the hierarchical note structure that you are familiar with from Obsidian

    • it has better ways of keeping things organized (attributes can be values or references, and can be shared and inherited, which provides a flexible framework for having note “types” as templates that can be extended, e.g. people vs. colleagues, businesses vs. companies, etc.)

    • it has the concept of note hoisting (which lets you focus on a note and its sub-notes, so other projects/spaces don’t get in the way of autocomplete and of placing references), and workspaces, which build further on top of that

    • it can be used standalone (local client/offline-only, like Obsidian), but coupling it with a remote server opens up more interesting use-cases (syncing, sharing notes with others via public URLs, one-user/multi-client editing), which gives you the best of both worlds (local-first/online-first) and lets you access your personal notes on devices you don’t necessarily own (which Obsidian doesn’t). The mobile app story isn’t great (it’s a PWA with limited offline capabilities at the moment), but it isn’t worse than the alternatives either (I can’t really work and think long-form on a handheld, no matter the editor experience, but perhaps that’s just me).