Not enough info. Those are two different things.
Neat
MajorMUD is the only one off the top of my head.
I’d rather go back to the 90s! The good old times, when devs could lose all the commercial software source code they were developing when the hard drive crashed! And there were no backups! Sorry, people who bought licenses! 😂
Narrator: this happened more than once.
Another example of this is what happened with KeePass, then KeePassX, which gave us KeePassXC. It went from a single dev, to a single dev, to a group of devs who were serious about the ecosystem.
As a person who used to be “the backup guy” at a company, truer words are rarely spoken. Always test the backups; otherwise it’s an exercise in futility.
One of my next steps was hardening my OPNsense router, since it handles all the edge network reverse proxy duties, so an IDS was on the list. I’m digging into CrowdSec now; it looks like there’s an implementation for OPNsense. Thanks for the tip!
Good call. I do some backups now, but I should formalize that process. Any recommendations on self-hosted packages that can handle the append-only functionality?
I wonder what performance impact there would be if you were to move pgsql onto bare metal with enough RAM dedicated to caching all of the DB data (think: an i5 or i7 NUC). That’s going to be my next step with my homelab; I want to migrate everything to a single DB host with a lot of RAM and M.2 storage and get away from the DB process replication I have going on. I have no performance complaints with NC currently; I’m running PHP caching and Redis, as well as image previews and Imaginary.
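For what it’s worth, one way to tell whether the RAM is actually covering the data set is to watch the buffer cache hit ratio before and after the move. A minimal sketch, assuming psycopg2, a local PostgreSQL instance, and a database named nextcloud (all connection details are placeholders):

```python
# Rough check of how many block reads PostgreSQL serves from shared_buffers
# versus disk, using the pg_stat_database view.
import psycopg2

conn = psycopg2.connect(dbname="nextcloud", user="postgres", host="localhost")
with conn, conn.cursor() as cur:
    # blks_hit = reads served from the buffer cache, blks_read = reads that went to disk
    cur.execute("""
        SELECT datname,
               blks_hit,
               blks_read,
               round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2) AS hit_pct
        FROM pg_stat_database
        WHERE datname = current_database();
    """)
    for datname, hit, read, pct in cur.fetchall():
        print(f"{datname}: {pct}% of block reads served from cache ({hit} cached / {read} from disk)")
conn.close()
```

If that percentage is already in the high 90s, the bare-metal move is more about consolidating hosts than raw speed.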
-h for help should list commands, and it’s nested so you can get help for each subcommand. You’ll want to read the Getting Started section.
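If it helps, that nested-help layout is the same pattern argparse gives you: the top-level -h lists the subcommands, and each subcommand has its own -h. A generic sketch, with backup and restore as made-up subcommands rather than anything from the actual tool:

```python
# mytool.py: "python mytool.py -h" lists the subcommands;
# "python mytool.py backup -h" shows that subcommand's own options.
import argparse

parser = argparse.ArgumentParser(prog="mytool", description="Example of nested subcommand help")
subparsers = parser.add_subparsers(dest="command", required=True)

backup = subparsers.add_parser("backup", help="Create a backup")
backup.add_argument("--target", help="Where to write the backup")

restore = subparsers.add_parser("restore", help="Restore from a backup")
restore.add_argument("--snapshot", help="Snapshot ID to restore")

args = parser.parse_args()
print(args)
```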
I’m using the Whipper docker container mostly successfully.
Using NVENC with the current linuxserver images. The readme covers the issue.
Awesome, I’ll check it out later this evening. Thank you!
I assume tdarr will take a handoff/trigger from Radarr to operate on a file?
I agree, writing meaningless tests helps nobody and just creates extra work for everyone. Unit tests should prove functionality, and integration tests should act as a vise. Much like you said, if a test breaks in that scenario, then you know something in another class has violated that contract. Good tests will have meaningful names and prove functionality, particularly in the backend, where it is especially important…
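To make that concrete, here’s a hedged sketch of what I mean by a meaningful name that proves functionality (pytest assumed; ShoppingCart is a made-up stand-in, not anything from a real codebase):

```python
import pytest


class ShoppingCart:
    """Toy class standing in for the production code under test."""
    def __init__(self):
        self._items = {}

    def add(self, sku: str, qty: int) -> None:
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self._items[sku] = self._items.get(sku, 0) + qty

    def total_quantity(self) -> int:
        return sum(self._items.values())


# The test name states the behaviour being proven, not the method being called.
def test_adding_same_sku_twice_accumulates_quantity():
    cart = ShoppingCart()
    cart.add("ABC-123", 2)
    cart.add("ABC-123", 3)
    assert cart.total_quantity() == 5


def test_rejects_non_positive_quantities():
    cart = ShoppingCart()
    with pytest.raises(ValueError):
        cart.add("ABC-123", 0)
```

If either of those breaks after a refactor, you know exactly which contract was violated.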
You mention (what I would consider) a bad practice of allowing merges without review. While that should be possible on personal projects with only one dev, strict review guidelines should exist so that nobody can just “push to prod”. CI/CD is your friend: use it so that staging and prod never break. Again, I’m used to working on systems used by scores of millions of users, so I appreciate forced automated validation. Nobody likes dumb breaks on a Friday before vacation.
That does sound like a nightmare. I’m assuming you mean a failed test when you say “red boy”, and that made me wonder about PR practices. I’m used to a very strict review environment with fairly quick review turnaround or requests to go over the code. I’ve heard horror stories about people not getting PRs reviewed for days or weeks, or some people just plain refusing to review code. I work on microservices that are all usually less than 10,000 lines, though, not something with over a million lines of legacy code.
For conversion of videos after download