  • The misunderstanding seems to be between software and hardware. It is good to reboot Windows and some other operating systems because they accumulate errors and quirks over time. It is not good to power-cycle your hardware, though; it increases wear.

    I’m not on an OS that needs to be rebooted; I count my uptime in months.

    I don’t want you to pick up a new anxiety about rebooting your PC, though. Components are built to last, generally speaking. Even if you power-cycled your PC 5 times daily, you’d most likely upgrade your hardware long before it wears out.



  • To me, the appeal is that my workflow depends less on my computer and more on my ability to connect to a server that handles everything for me. Workstation, laptop, or phone? Doesn’t matter; just connect to the right IPs and get working. Linux is, of course, the holy grail of interoperability, and I’m all Linux. With a little bit of setup, I can make a lot of things talk to each other seamlessly. SMB on Windows is a nightmare, but on Linux, if I set up SSH keys, I can just open a file manager, type sftp://<hostname>, and browse that machine as if it were a local folder (see the sketch below). I can do a lot of work from my genuinely-trash laptop because it’s the server that’s doing the heavy lifting.

    TL;DR -

    My workflow becomes “client agnostic” and I value that a lot
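
    A minimal sketch of that key setup, assuming OpenSSH and a GVfs- or KIO-capable file manager (the user and server names are placeholders):

    # generate a key pair locally; ed25519 is a sensible default
    ssh-keygen -t ed25519

    # install the public key on the remote machine
    ssh-copy-id user@server

    # then, in the file manager’s address bar:
    # sftp://server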



  • I recommend it over a full disk backup because I can automate it. I can’t automate full disk backups, as I can’t reliably run dd from a system that is itself already running.

    It’s mostly just to ensure that config files and other stuff I’ve spent years building are available in the case of a total collapse, so I don’t have to rebuild from scratch. In the case of containers, those have snapshots. Anytime I’m working on one, I drop a snapshot first so I can revert if it breaks (see the sketch below). That’s essentially a full disk backup, but it’s exclusive to containers.
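
    A minimal sketch of that snapshot-first habit, assuming LXD containers (the container and snapshot names are placeholders):

    # take a snapshot before touching the container
    lxc snapshot mycontainer pre-change

    # if the change breaks something, roll back
    lxc restore mycontainer pre-change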

    edit: if your goal is to minimize downtime in case of disk failure, you could just use RAID
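
    A minimal sketch of a two-disk mirror, assuming mdadm on Linux (device names are placeholders, and creating the array wipes existing data on them):

    # create a RAID 1 array from two partitions
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1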


  • My method requires that the drives be plugged in at all times, but it’s completely automatic.

    I use rsync from a central ‘backups’ container that pulls folders from other containers and machines. These are organized in

    /BACKUPS/(machine/container)_hostname/...

    The /BACKUPS/ folder is then pushed to an offsite container I have sitting at a friend’s place across town.

    For example, I back up my home folder on my desktop, which looks like this on the backup container:

    /BACKUPS/Machine_Apollo/home/dork/
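
    A minimal sketch of that pull-then-push with rsync, assuming SSH-key access from the backups container (hostnames, users, and paths are placeholders):

    # pull the desktop’s home folder into the central tree
    rsync -a --delete dork@apollo:/home/dork/ /BACKUPS/Machine_Apollo/home/dork/

    # push the whole tree to the offsite container
    rsync -a --delete /BACKUPS/ offsite:/BACKUPS/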

    This setup is not impervious to bit flips, as far as I’m aware (it has never happened). If a bit flip happens upstream, it will be pushed to the backups and become irrecoverable.
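
    One way to at least detect that kind of corruption (not part of the setup above; just a sketch assuming GNU coreutils) is to record checksums and verify them before each push:

    # record checksums for the whole tree
    find /BACKUPS -type f -exec sha256sum {} + > /var/lib/backup-checksums.sha256

    # later, report any files whose contents changed unexpectedly
    sha256sum --check --quiet /var/lib/backup-checksums.sha256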