I recently replaced an ancient laptop with a slightly less ancient one.
- host for backups for three other machines
- serve files I don’t necessarily need on the new machine
- relatively lightweight - “server” is ~15 years old
- relatively simple - I’d rather not manage a dozen docker containers.
- internal-facing
- does NOT need to handle Android and friends. I can use Syncthing for that if I need to.
Left to my own devices I’d probably rsync for 90% of that, but I’d like to try something a little more pointy-clicky or at least transparent in my dotage.
Edit: Not SAMBA (I freaking hate trying to make that work)
Edit 2: for the young’uns: NFS (the venerable “Network File System”)
Edit 3: LAN only. I may set up a VPN connection one day but it’s not currently a priority. (edited post to reflect questions)
Last Edit: thanks, friends, for this discussion! I think based on this I’ll at least start with NFS + my existing backup system (Mint’s thing, which I think is just a GUI in front of rsync). May play w/ modern SAMBA if I have extra time.
I’ll continue to read the replies though - some interesting ideas.
NFS is the best option if you only need to access the shared drives over your LAN. If you want to mount them over the internet, there’s SSHFS.
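If you haven’t used SSHFS before, it really is a one-liner once you have SSH access; a minimal sketch (host and paths are made up):
# mount the server’s /srv/share at ~/mnt/share (host and paths are examples)
sshfs user@server:/srv/share ~/mnt/share
# unmount when done
fusermount -u ~/mnt/share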
See, this is interesting. I’m out here looking for the new shiny easy button, but what I’m hearing is “the old config-file based thing works really well. ain’t broken, etc.”
I may give that a swing and see.
I’m about the same age - just to mention, Samba is nowhere near the horror show it used to be. That said, I use NFS for my Debian boxes and Mac mini build box to hit my NAS, Samba for the Windows laptop.
Yeah, Samba has come a long way. I run a Linux based server but all clients are Windows or Android so it just makes sense to run SMB shares instead of NFS.
I’ve always had weird issues with SMB like ghost files, issues with case sensitivity (ZFS pool), it dropping out and me having to reboot to re-establish the connection… Since switching to Linux and using NFS, it’s been almost indistinguishable from a native drive for my casual use (including using an SSD pool as a Steam library…)
I can definitely say in the past I had similar experiences. I haven’t really had any problems with SMB in the last 5 years that I can recall. It really was a shit show back in the day, but it’s been rock solid for me anyway.
Same. I’ve used SMB for years. Don’t have any problems with it across all my Windows and Android devices. Pretty sure I had an iPad in there at one point as well.
I’ve run Proxmox hosts with smb shares for literally a decade without issue. Performance is line speed now. Only issues I’ve ever had were operator error and that was a long time ago. SMB 3 works great.
What about NFS over the internet?
You can use NFS over the internet, but it will be a lot more work to secure it. It was intended for use over a LAN and performance may not be great over the internet, especially with high latency or packet loss.
I would just create a point-to-point VPN connection and run it over that (for example an IPsec tunnel using strongSwan).
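A plain WireGuard tunnel is another easy way to get that point-to-point link; a minimal sketch of the server side (keys, addresses and port are placeholders):
# /etc/wireguard/wg0.conf on the NFS server (keys, addresses and port are placeholders)
[Interface]
PrivateKey = <server-private-key>
Address = 10.8.0.1/24
ListenPort = 51820

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
Bring it up with wg-quick up wg0, give the client the mirror-image config, and then only export the NFS share to 10.8.0.2.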
I agree, NFS is eazy peazy, livin greazy.
I have an old Synology DS211j for backup. I just can’t bring myself to replace it, it still works. However, it doesn’t support ZFS. I wish I could get another Linux running on this thing.
However, NFS does work on it and is so simple and easy to lock down, it works in a ton of corner cases like mine.
AFAIK Synology supports Btrfs, which I honestly prefer at this point if you don’t need filesystem-based encryption or professional scaling and caching features.
The DS211j is on Synology DSM 6, which is ancient. I’ll look again, but I don’t think it supports Btrfs.
NFS is easy as long as you use very basic access control. When you want NFSv4 with Kerberos auth you’re entering a world of pain and tears.
I don’t use access control, I lock down with networking and filters.
If you already know NFS and it works for you, why change it? As long as you’re keeping it between Linux machines on the LAN, I see nothing wrong with NFS.
Isn’t NFS pretty much completely insecure unless you turn on NFSv4 with Kerberos? The fact that that is such a pain in the ass is what keeps me from it. It is fine for read-only though.
If you’ve got Tailscale it’ll build WireGuard tunnels directly over the LAN; I actually do this with Samba for Time Machine backups on macOS (share config sketched below).
Obviously the big bonus is being able to do the same over the internet without the gaping security holes.
(I used to use split DNS so that my LAN’s router’s DNS server returned the LAN IP, and Tailscale’s DNS server returned the Tailscale IP. But because I’m a privacy geek I decided to make it Tailscale-only.)
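For anyone curious, the Time Machine part is just an ordinary Samba share with the fruit VFS modules enabled; roughly (share name and path are examples, assumes Samba 4.8+):
# smb.conf share used for Time Machine (share name and path are examples)
[timemachine]
   path = /srv/timemachine
   valid users = me
   read only = no
   vfs objects = catia fruit streams_xattr
   fruit:time machine = yes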
It is, but NFSv3 is extremely easy to configure. You need to edit 1 line in 1 file and it’s ready to go (sketch below).
Would be fine for designated storage networks that use IP whitelists.
Other than that, you kind of need user-specific encryption/segregation (which I believe Kerberos does?)
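For reference, the whole “one line in one file” setup is roughly this (subnet, paths and hostnames are examples):
# /etc/exports on the server, restricted to the LAN subnet (example values)
/srv/share  192.168.1.0/24(rw,sync,no_subtree_check)
# apply the export
sudo exportfs -ra
# on a client
sudo mount -t nfs server:/srv/share /mnt/share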
I think a reasonable quorum already said this, but NFS is still good. My only complaint is it isn’t quite as user-mountable as some other systems.
So…I know you said no SAMBA, but SAMBA 4 really isn’t bad any more. At least, not nearly as shit as it was.
If you want an easily mountable filesystem for users (e.g. network discovery etc.) it’s pretty tolerable (minimal share sketched below).
If it’s for backup, zfs and btrfs can send incremental diffs quite efficiently (but of course you’ll have to use those on both ends).
Otherwise, both NFS and SMB are certainly viable.
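A basic modern share isn’t much config either; a minimal sketch (share name, path and user are examples):
# /etc/samba/smb.conf (share name, path and user are examples)
[files]
   path = /srv/files
   read only = no
   valid users = me
Then add the user with sudo smbpasswd -a me and restart smbd.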
I tried both but TBH I ended up just using SSHFS, because I don’t care about becoming an NFS/SMB admin.
NFS and SMB are easy enough to setup, but then when you try to do user-level authentication… they aren’t as easy anymore.
Since I’m already managing SSH keys all over my machines, I feel like SSHFS makes much more sense for me.
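If you want it to behave like a normal mount, an fstab entry works too; something like this (host, paths and key are examples):
# /etc/fstab entry for an on-demand SSHFS mount (host, paths and key are examples)
user@server:/srv/share  /mnt/share  fuse.sshfs  noauto,x-systemd.automount,_netdev,IdentityFile=/home/user/.ssh/id_ed25519  0  0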
For smaller folders I like using Syncthing; that way it’s like having multiple up-to-date backups.
I like this solution because I can have the need filled without a central server. I use old-fashioned offline backups for my low-churn, bulk data, and Syncthing for everything else to be eventually consistent everywhere.
If my data was big enough so as to require dedicated storage though, I’d probably go with TrueNAS.
Syncthing is neat, but you shouldn’t consider it to be a backup solution. If you accidentally delete or modify a file on one machine, it’ll happily propagate that change to all other machines.
I’d use an S3 bucket with s3fs. Since you want to host it yourself, MinIO is the open-source tool to use instead of S3 (minimal setup sketched below).
I hear good things about SeaweedFS instead of MinIO these days.
Oh, and if you want to use it as the backing store for a database consider obstore instead of s3fs: https://developmentseed.org/blog/2025-08-01-obstore/
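If anyone wants to try that route, the self-hosted version is roughly: run MinIO, create a bucket in its console, then mount it with s3fs (endpoint, bucket name and credentials file are examples):
# run MinIO with local storage (ports and data path are examples)
docker run -d -p 9000:9000 -p 9001:9001 -v /srv/minio:/data quay.io/minio/minio server /data --console-address ":9001"
# mount a bucket with s3fs (credentials stored in ~/.passwd-s3fs as ACCESS_KEY:SECRET_KEY)
s3fs mybucket /mnt/s3 -o url=http://server:9000 -o use_path_request_style -o passwd_file=${HOME}/.passwd-s3fs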
For all its flaws and mess, NFS is still pretty good and used in production.
I still use NFS to file share to my VMs because it still significantly outperforms virtiofs, and obviously network is a local bridge so latency is non-existent.
The thing with rsync is that it’s designed to quickly compute the least amount of data transfer to sync over a remote (possibly high latency) link. So when it comes to backups, it’s literally designed to do that easily.
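Even a one-liner like this (host and paths are examples) is already a decent incremental backup:
# push the laptop’s home to the server, preserving attributes and pruning deleted files (host/paths are examples)
rsync -aHAX --delete ~/ backupserver:/srv/backups/laptop/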
The only cool new alternative I can think of is to use btrfs or ZFS and
btrfs/zfs send | ssh backup btrfs/zfs recv
which is the most efficient and reliable way to back up, because the filesystem is aware of exactly what changed and can send exactly that set of changes. And obviously all special attributes are carried over: hardlinks, ACLs, SELinux contexts, etc.
The problem with backups over any kind of network share is that if you’re gonna use rsync anyway, the latency will be horrible and it will take forever.
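Concretely, an incremental ZFS run looks something like this (pool, dataset and snapshot names are examples; btrfs send/receive is analogous):
# snapshot, then send only the delta since the previous snapshot (names are examples)
zfs snapshot tank/home@today
zfs send -i tank/home@yesterday tank/home@today | ssh backup zfs recv backuppool/home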
Of course you can also mix multiple things: rsync laptop to server periodically, then mount the server’s backup directory locally so you can easily browse and access older stuff.
For a Linux-only, LAN-only shared drive, NFS is probably the easiest you’ll get; it’s made for that use case.
If you want more of a Dropbox/OneDrive/Google Drive experience, Syncthing is really cool too, but that’s a whole other architecture where you have an actual copy on all machines.
NFS is still the standard. We’re slowly seeing better adoption of VFS for things like hypervisors.
Otherwise something like SFTPGo or Copyparty if you want a solution that supports pretty much every protocol.
I would say SMB is more the standard. It is natively supported in Linux and works a bit better for file shares.
NFS is better for server style workloads
I still have to use SAMBA as Win 11 hates NFS with a passion and we have Win 11 boxes here supplied as work machines, so no changing that. Also, wifey’s gaming rig is Windows, as she doesn’t want to mess around getting stuff to work…
But hey - for everything else it is NFS with all of its weirdness, but it just works a bit better than SMB.
My Windows machines seem to be just fine with the couple of NFS shares I use for easier cross-platform mounting on boot. It comes at the cost of some security, though, so I use it to share unimportant stuff I want to mount very freely, like some media libraries. I use SMB for the rest.
I’m curious, what’s the issue with NFS on Windows for you?
Oh, it just doesn’t like it as much as SMB… it has a work-mandated VPN on it, for starters! I use a TrueNAS box for most of the backups etc. and I just share the same dataset via SMB and NFS, locally only, so that’s sorted, but NFS on the Win 11 box is just way flakier and drops often.
Everyone forgets about WebDAV.
It’s a little janky, but it does work on Windows. If you copy a file in, it doesn’t show up in the file manager until you refresh. But it works.
It’s also multithreaded, which isn’t the case for SMB. This is especially good if you host it on SSDs.
I use sshfs.
LAN or internet?
HTTPS is king for internet protocols.
LAN only. I may set up a VPN connection one day but it’s not currently a priority. (edited post to reflect)
NFS works, but HTTP was designed for shitty internet. Keep that in mind. ownCloud or similar might be a good idea.
I have SFTPGo in a Docker container with attached storage. It can be accessed through many protocols, but on Linux I mount it via WebDAV (one-liner below).
What’s neat is that I can also share files/folders with either other registered users or with a password or a download-only link, and it has a web GUI for that.
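The Linux mount side is just davfs2, roughly (URL and mountpoint are examples; credentials can live in /etc/davfs2/secrets):
# mount a WebDAV share with davfs2 (URL and mountpoint are examples)
sudo mount -t davfs https://server:8080/dav /mnt/dav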
Sounds like NFS might still be the way to go for you.
For backups personally I use Restic and connect over SFTP via SSH, since that’s just built in and doesn’t need any configuration (sketch below).
For more traditional file sharing I use WebDAV with SFTPGo, since I need Windows and Android compatibility too, and WebDAV is pretty easy to set up and use.
And I also use Syncthing for keeping some directories in sync between devices.
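The Restic-over-SFTP bit boils down to two commands (host, repo path and source directory are examples):
# create the repository once, then run backups against it (host and paths are examples)
restic -r sftp:user@backupserver:/srv/restic-repo init
restic -r sftp:user@backupserver:/srv/restic-repo backup ~/Documents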