• 1 Post
  • 47 Comments
Joined 1 year ago
Cake day: June 15th, 2023


  • University is ok if you’re starting at zero and don’t even know what’s out there. It’s for exposing students to a breadth of topics and some of the rationale for why things are the way they are, but not necessarily for plugging them into a production environment.

    Nothing beats having your own real-world project, either for motivation or for exposure to cutting-edge methods. Universities have tried to replicate that with things like ‘problem-based learning,’ and they probably hope that students will be inspired by one or two classes to start their own out-of-class project, but school and work are fundamentally different ways of learning with fundamentally different goals.





  • Yeah, I think it really depends on use case. Like, I’m trying to imagine what aspect of my home lab could go so wrong, while I’m out of the house, that it would need to be fixed right away, and there’s nothing. I only leave my house for work or maybe a week of vacation, though, and I can imagine someone who’s occasionally away from home for 6-month deployments, or has a vacation home they only visit four weekends a year, might want more extensive remote maintenance. I’d still want to do that via ssh or vpn, but that’s me.


  • Definitely agree for a single install. If OP has a bunch of these installs to do, then editing an install USB to configure networking and enable sshd might be worth the effort. Do the install over ssh and hope the machine starts up as desired, but even then, if it doesn’t just magically appear on the network, he’s going to need a monitor to see where the startup failed.

    Raspberry Pi’s disk imager will let you pre-configure networking, accounts, and ssh, so you just write the image to an SD card, plug it in, and go (a rough sketch of the manual equivalent is below). That’s a great solution for systems that are meant to run headless anyway and boot from removable media. If OP’s client hardware allows, he could plug in the M.2 or SATA drive meant to be the server’s boot drive, install Debian there, and transfer it to the server hardware. That’s definitely more work than just swapping the keyboard & monitor, but it accomplishes OP’s stated goal. (Otherwise, a lot of this thread follows the linux meme of “How do I [X]?” “[X] is dumb, do [Y] instead.”)
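
    For reference, this is roughly the manual version of what the imager does on current Raspberry Pi OS images. It’s just a sketch; the mount point, username, and password here are placeholders:

      # assumes the freshly flashed card's FAT boot partition is mounted at /mnt/boot
      touch /mnt/boot/ssh       # an empty file named 'ssh' enables sshd on first boot

      # create the first-boot account as 'username:sha512-crypt-hash'
      echo "pi:$(openssl passwd -6 'changeme')" > /mnt/boot/userconf.txt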


  • With 25 GbE, or even 10, I’d be tempted to PXE boot client systems. Maybe still have a local PCIe SSD for Windows game files.

    Dunno how that would actually work with Windows, but it was fun when I did it for Beowulf nodes. Setting RPis to netboot is a little involved, but you can create an OSMC image and give all your TVs a consistent ‘smart’ interface. You don’t even need 10 GbE for the Pi to be pretty functional, but my experience is that WiFi is not fast enough.



  • I don’t get this counter-argument. Is TFA actually suggesting that the average grandma quit using Yahoo mail or Facebook and set up her own email server and mastodon instance? The only people even considering self-hosting are people with an interest in technology and a reasonable amount of passion for it. It’s an article written for a niche techie website, and we’re discussing it on a forum for self-hosting nerds.

    The counter-argument is like saying the average layman should stick to televised football, because they don’t have the physical savvy or aptitude for the game, and most people aren’t gonna put in the time or effort to build their strength & endurance to compete. It may be an accurate statement, but the people you’re addressing (grandma) weren’t TFA’s target audience and weren’t even going to try in the first place, and you discourage people who might really enjoy giving the hobby a try.


  • Depends on how you calculate costs. Like, I have Kodi running on an RPi for home entertainment/theater. There’s no way to outsource that, but the RPi is idle most of the time. Adding services to it is effectively free at the margin, except for my time, and there’s still a significant time cost to getting paid, off-site cloud services set up.

    But charging for your own time is kind of disingenuous. You don’t include your time in the cost of eating (a Big Mac worth $60??), watching a video, or going on vacation. The only people self-hosting have a personal, hobby/entertainment interest in it, and I think it’s more accurate to compare the costs of self hosting with the costs of other forms of entertainment. Do you get more fun-value out of the costs of self hosting or out of a theater ticket?



  • HA doesn’t require 4/4/32; that’s just the hardware the HA people sell (which, given that your phone may be 8/16/128, is hardly “robust”). Generally, the Home Assistant crowd targets an audience that’s probably already running some kind of home server, NAS, or router, and HA can probably be installed on that device.

    Theoretically, there’s no reason the HA server couldn’t be installed on your phone, except then your smart home functions would only work while your phone is in the house and not sleeping, which kind of defeats the point of a lot of it, unless you’re just thinking of smart home as a “remote control for everything.” Regardless, it’s a much smaller niche within an already-small market, and apparently not a priority for the dev team.






  • Ditto. Started 20 years ago with one service I wanted. Complicated it a little more every time some new use case or interesting trinket came up, and now it’s the most complicated network in the neighborhood. Weekend projects once a year add up.

    If you have the resources, experiment with new services on a completely separate server from everything else. The testing-production model exists for a reason: backups are good, but restoring everything is a pain in the ass.

    I also like to keep a text editor open and paste everything I’m doing, as I do it, into that window. Clean it up a little, and you’ve got documentation for when you eventually have to change/fix it.


  • I used to have this with homeassistant and zwavejs. Every time I’d pull a new homeassistant, the zwave integration would fail because it required a newer version of zwavejs. That taught me to build the chain of services into one docker-compose, so they’d all update together (sketch below). That’s become one of my rationales for using docker: got a chain of dependent processes? Wrap them up in one compose file so you’re working with (probably) the same dependencies as the devs.
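
    Something like this is what I mean; it’s a sketch rather than my exact setup, and the image tags, host paths, and the Z-Wave stick’s device path are placeholders:

      # one compose stack, so homeassistant and zwave-js-ui always update together
      services:
        homeassistant:
          image: ghcr.io/home-assistant/home-assistant:2024.6   # pin the pair of tags together
          network_mode: host
          volumes:
            - ./ha-config:/config
          restart: unless-stopped

        zwave-js-ui:
          image: zwavejs/zwave-js-ui:9.14.0                      # placeholder tag
          devices:
            - /dev/serial/by-id/usb-zwave-stick:/dev/zwave       # placeholder device path
          volumes:
            - ./zwave-store:/usr/src/app/store
          ports:
            - "3000:3000"   # websocket port the HA Z-Wave JS integration talks to
            - "8091:8091"   # zwave-js-ui web UI
          restart: unless-stopped

    Then docker compose pull && docker compose up -d moves the whole pair at once, so the versions can’t drift apart between updates.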

    My other rationale is just portability, and docker is just one of many solutions there. In my little home environment, where servers are either retired desktops or gee-that-seems-cool SBCs, it’s nice to be able to easily move stuff independent of architecture or OS.