I work in tech and am constantly finding solutions to problems, often on other people’s tech blogs, that I think “I should write that down somewhere” and, well, I want to actually start doing that, but I don’t want to pay someone else to host it.
I have a Synology NAS, a sweet domain name, and familiarity with both Docker and Cloudflare tunnels. Would I be opening myself up to a world of hurt if I hosted a publicly available website on my NAS using [insert simple blogging platform], in a Docker container and behind some sort of Cloudflare protection?
In theory that’s enough levels of protection and isolation but I don’t know enough about it to not be paranoid about everything getting popped and providing access to the wider NAS as a whole.
Update: Thanks for the replies, everyone, they’ve been really helpful and somewhat reassuring. I think I’m going to have a look at GitHub Pages and Cloudflare Pages as my first port of call for my needs.
I’ll let folks with more security experience dive into your specific question, but another option is to host your website on something like GitHub Pages (using a static site generator like Jekyll) and point Cloudflare at it. That way you don’t need anything pointed at your local network, you get the uptime of GitHub, and you still benefit from your own domain name.
That’s what I’m doing with my own blog and it’s been great. Github provides the service for free but if they ever charge for it I’ll just start hosting it locally.
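If it helps, the domain wiring is only a couple of lines. Roughly this, where the hostname and username are placeholders (an apex domain needs A records instead of a CNAME):

```
# File named CNAME in the repo root, containing just the hostname:
blog.example.com

# DNS record in Cloudflare, proxied, pointing at GitHub Pages:
blog.example.com  CNAME  <your-username>.github.io
```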
OK that’s genius, I will definitely look into that!
Or take github out of the equation and directly use cloudflare pages. It has its own pros and cons, but for a simple static blog it’ll be more than enough, and takes out the CNAME hassle.
Came here to say this^
If you have any issues or questions feel free to DM me here. I’d be happy to help out :)
Speaking of Cloudflare, if you’re okay with not self hosting, then there’s Cloudflare Pages which is good for hosting static websites.
That’s what I’m doing! I used it to make a “blog” of all the things I had to learn to switch to Linux for my home drives and daily gaming rig. Complete with copy buttons on the code blocks so I can do a complete reformat in minutes!
I do this via AWS amplify and it costs me a few cents a month as another option.
I know it’s not technically “self” hosted but I’d get a cheap yearly VPS somewhere and run a webserver off of that. For me it’s worth the peace of mind to keep my network a temple instead of a bus terminal. I paid $13 USD for the year for mine.
I believe Oracle is still offering to slice off a bit of compute for free that should accomplish OP’s goal. I’ve used it to test a Jellyfin host among other things and for the price it can’t be beat!
I’ve been running a script every 60 seconds for 2 months now as a cron job and it still hasn’t been able to create a VM in their US datacenter. I just have a log full of “insufficient host capacity” errors.
+1 for VPS, the ionos ones are $2/mo and have unlimited bandwidth at 400mbps. That’s basically the cost of electricity for a home server with orders of magnitude better reliability.
A VPS makes sense insofar as keeping things thoroughly isolated from my own systems, but the overhead of maintaining a box that’s directly connected to the Internet like that isn’t something I’m keen on and I’m not convinced I’d have the expertise to do it right from the outset.
Change the SSH port to something with 4-5 digits, disable SSH password auth and use certificates only, and don’t expose any port other than 443.
If you’re paranoid, use Cloudflare as a proxy and set the VPS firewall to only accept incoming traffic from Cloudflare’s IP list.
That’s about it really.
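The sshd_config side of that is only a few lines; roughly this, with the port number being just an example:

```
# /etc/ssh/sshd_config
Port 22022                   # non-default port (pick your own)
PasswordAuthentication no    # keys only
PubkeyAuthentication yes
PermitRootLogin no
```

Reload sshd afterwards and make sure your key still gets you in before you close the working session.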
Changing the port is security by obscurity, and it doesn’t take botnets much time to scan the entire IPv4 space on all ports. See, for example, the ever-updated list available on Shodan.
Disable password login and use certificates as you’ve suggested already, add fail2ban to block random drive-bys, and you’re off to the races.
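For the fail2ban part, a minimal jail.local is enough to cover the drive-bys; the thresholds here are just illustrative:

```
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5     # failed attempts before a ban
findtime = 10m   # window the attempts are counted in
bantime  = 1h    # how long the IP stays blocked
```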
The Oracle Cloud VPS only has SSH key authentication enabled by default. You can also set it to only allow SSH from your home IP in the virtual firewall before the machine is ever spun up.
Their current free ARM offering is 1 machine with 4 cores and 24 GB RAM for life. You can also add another 2 AMD machines with 1 core and 1 GB RAM each and still be in their free tier.
If you’re going to set it up and take advantage of the ARM machine, make sure you pick a home location for your account that has multiple availability zones. San Fran right now only has 1 zone, so if the shared ARM instances are all used up, you’ll have to wait a few days and try again. Phoenix I think has 3, so you can try with another zone right away.
I guess I’m extremely paranoid then, my home IP doesn’t change much and I just expose the port only to it from Oracle’s site. I rarely touch mine though.
I just restrict SSH to an internal VPN IP on all my servers (ZeroTier). 100% impossible to even try logging into them unless you’ve managed to crack into my network first.
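On the sshd side that’s basically one directive; the address here is a made-up ZeroTier-style example:

```
# /etc/ssh/sshd_config
ListenAddress 10.147.17.5   # bind only to the ZeroTier interface, nothing on the public IP
```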
The first worry is vectors around the Synology itself: its firmware and network stack. Those devices are very closely scrutinized, and historically there have been many different vulnerabilities found and patched. Something like the log4j vulnerabilities back in the day, where a request just has to hit the logging system to hit you, might open a hole in any of the other standard software packages there. And because the platform is so well known, once one vulnerability is found, attackers already know what else exists by default and have plans for ways to attack it.
Vulnerabilities that COULD affect you in this case are few and far between, but few and far between is how things happen.
The next concern is someone slipping you a mickey in a container image. By and large it’s a bunch of good people maintaining the container images, and they’re including packages from other good people. But this also means there are a hell of a lot of cooks in the kitchen, in distribution, and upstream.
To be perfectly honest, with everything on auto update, Cloudflare’s built-in protections for DDoS and attacks, and the nature of what you’re trying to host, you’re probably safe enough. There’s no three-letter government agency or elite hacker group specifically after you. You’re far more likely to accidentally trip over a zero-day email image filter / PDF vulnerability and get botnetted than you are to have someone successfully attack your Argo tunnel.
That said, it’s always better to host in someone else’s backyard than your own. If I were really, really stuck on hosting in my house on my network, I’d probably stand up a dedicated box, maybe something as small as a Pi Zero. I’d make sure I had a really decent router/firewall and slip that hosting device into an isolated network that’s not allowed to reach out to anything else on my network.
Assume at all times that the box is toxic waste and that it is an entry point into your network. Leave it isolated: no port forwards (you already have tunnels for that), don’t use it for DNS, don’t use it for DHCP, and don’t allow your network users or devices to see ARP traffic from it.
Firewall drops everything between your home network and that box except SSH in, or maybe VNC in depending on your level of comfort.
Are you my brain? This is exactly the sort of thing I think about when I say I’m paranoid about self-hosting! Alas, as much as I’d like to be able to add an extra box just for that level of isolation, it’d probably take more of a time commitment than I have available to get it properly set up.
The attraction of docker containers, of course, is that they’re largely ready to go with sensible default settings out of the box, and maintenance is taken care of by somebody else.
Oh yeah, I totally get the allure of containers. I use them myself just not in production.
To be fair, python and node both suffer from the same kind of worries. And stuff gets slipped into those repos not too infrequently.
Can I ask you to elaborate on this part?
Assume at all times that the box is toxic waste and that it is an entry point into your network. Leave it isolated: no port forwards (you already have tunnels for that), don’t use it for DNS, don’t use it for DHCP, and don’t allow your network users or devices to see ARP traffic from it.
I used to have a separate box, but the only thing it did was port forwarding.
Specifically, I don’t really understand the topology of this setup, and how I’d set it up.
Cloudflare Tunnel is a thin client that runs on your machine and connects out to Cloudflare; when a request comes in from outside, Cloudflare relays it via the established tunnel to the machine. As such, your machine only needs outbound internet access (to Cloudflare’s servers) and no inbound access (i.e. no port forwarding).
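Concretely, the setup on the hosting machine looks roughly like this; the tunnel name, hostname, and local port are all placeholders:

```
cloudflared tunnel login
cloudflared tunnel create blog
cloudflared tunnel route dns blog blog.example.com

# ~/.cloudflared/config.yml
tunnel: blog
credentials-file: /home/you/.cloudflared/<tunnel-id>.json
ingress:
  - hostname: blog.example.com
    service: http://localhost:8080   # wherever your blog actually listens
  - service: http_status:404

cloudflared tunnel run blog
```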
Thank you for your reply, but I was actually asking about the network stuff 😅
I used Cloudflare tunnels for many years, but now I’m a bit too tinfoil-hatted to use any Cloudflare at all 😅
Ah sorry I went down the wrong rabbit hole.
I’d imagine an isolated VLAN should be a sufficiently good starting point to prevent anyone from stumbling onto it locally, as well as any potential external intruder stumbling out of it?
You need to have a rather capable router / firewall combo.
You could pick up a Ubiquiti USG, or set something up with an ISP router and a pfSense firewall.
You need to have separate networks in your house. And the ability to set firewall rules between the networks.
The network that contains the hosting box needs to have absolutely no access to anything else in your house except its route out to the internet. Don’t have it go to your router for DHCP; set it up statically. Don’t have it go to your router for DNS; choose an external resolver.
The firewall rules for that network are: allow outbound internet with return traffic, allow SSH (and maybe VNC) in from your home network, then deny all.
The idea is that you assume the box is capable of getting infected. So you just make sure that the box can live safely in your network even if it is compromised.
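If the router runs something nftables-based, the forward rules come out looking roughly like this; the interface names are whatever your VLANs happen to be called:

```
table inet filter {
    chain forward {
        type filter hook forward priority 0; policy drop;
        ct state established,related accept                       # return traffic
        iifname "vlan_host" oifname "wan0" accept                  # hosting VLAN out to the internet only
        iifname "lan0" oifname "vlan_host" tcp dport 22 accept     # SSH in from the home LAN
    }
}
```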
(I just noticed I replied to another of your comments, but still to you 😬)
Now I’m a little bit confused: what does it do then?
If the box doesn’t have access to anything on the network, how would it do anything?
The box you’re hosting on only needs internet access to connect the tunnel. Cloudflare terminates that SSL connection right in a piece of software on your web server.
I mean, what does it host if the only thing it has access to is the internet?
Cloudflare tunnels are layer 7, so it’s not unlimited access by any means. This also means that certain things will break btw, for example if your website uses websockets to load information, that isn’t supported.
Next, I’d put the computer that is going to be hosting into an isolated vlan of its own and access via external URL only.
If you’re going to use docker images, make sure to vet that they’re updated often and always spin up the latest.
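As a sketch of what that looks like with a tunnel in front, something along these lines; the images and names are just examples, and the tunnel’s ingress (pointing at http://blog:2368 or wherever your platform listens) gets configured on the Cloudflare side:

```
# docker-compose.yml (sketch)
services:
  blog:
    image: ghost:5                     # whatever blogging platform you've vetted; pin a tag
    restart: unless-stopped
    # no ports: section - nothing is published to the LAN or WAN directly
  cloudflared:
    image: cloudflare/cloudflared:latest
    restart: unless-stopped
    command: tunnel run
    environment:
      - TUNNEL_TOKEN=${TUNNEL_TOKEN}   # token from the Cloudflare Zero Trust dashboard
```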
CF tunnels are layer 3, not 7 and they have support for web sockets. It’s basically wireguard VPN with a few extras built on top.
https://developers.cloudflare.com/cloudflare-one/faq/cloudflare-tunnels-faq/
That document doesn’t say what layer. But it does say it supports Websockets.
Just odd that when I try to set it up using a named tunnel I don’t get an option to specify the WS service type. However it does require a service type if you want to connect to it.
Looking at this page it would seem that it’s layer 7. I could be wrong, though; my front-end app has issues finding my backend service for websockets.
Granted I even tried to connect to my private computer using other protocols. I couldn’t get through. Anyway I’m most likely going to be taking that project offline soon.
No, but I thought I clarified that when I said it’s basically a WireGuard VPN, which operates using TCP/UDP (layer 3). Layer 7 is stuff like HTTPS. CF tunnels are lower level.
The page you linked is missing the layer between CF and the source server, so it doesn’t indicate the layer. You can look up the WireGuard protocol if you want more details.
You’ll be fine enough as long as you enable MFA on your NAS, and ideally configure it so that anything “fun”, like administrative controls or remote access, is only available on the local network.
Synology has sensible defaults for security, for the most part. Make sure you have automated updates enabled, even for minor updates, and ensure it’s configured to block multiple failed login attempts.
You’re probably not going to get hackerman poking at your stuff, but you’ll get bots trying to SSH in and log in to the WordPress admin console, even if you’re not using WordPress.
A good rule of thumb for securing computers is to minimize access/privilege/connectivity.
Lock everything down as far as you can, turn off everything that makes it possible to access it, and enable every tool for keeping people out or dissuading attackers.
Now you can enable port 443 on your NAS to be publicly available, and only that port, because you don’t need anything else.
You can configure your router to forward only port 443 to your NAS.
It feels silly to say, but sometimes people think “my firewall is getting in the way, I’ll turn it off”, or “this one user needs read access to one file, so I’ll give read/write/execute privileges to every user in the system to this folder and every subfolder”.
So as long as you’re basically sensible and use the tools available, you should be fine.
You’ll still poop a little the first time you see that 800 bots tried to break in. Just remember that they’re doing that now; there’s just nothing listening to write down that they tried.
However, the person who suggested putting Cloudflare in front of GitHub Pages and using something like Hugo is a great example of “opening as few holes as possible” and “using the tools available”.
It’s what I do for my static sites, like my recipes and stuff.
You can get a GitHub Action configured that’ll compile the site and deploy it whenever a commit happens, which is nice.
If it’s a static site, you can host it anywhere for free on the big cloud providers: AWS has S3 storage, Microsoft has blobs, GitHub has Pages, all of which can be configured to run a site well under the paid tiers.
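For the action part, a bare-bones workflow looks something like this (Hugo here, Jekyll is similar; the action versions are just whatever was current when I set mine up):

```
# .github/workflows/deploy.yml
name: deploy-site
on:
  push:
    branches: [main]
permissions:
  contents: read
  pages: write
  id-token: write
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: peaceiris/actions-hugo@v2
        with:
          hugo-version: 'latest'
      - run: hugo --minify                      # builds the site into ./public
      - uses: actions/upload-pages-artifact@v3
        with:
          path: ./public
  deploy:
    needs: build
    runs-on: ubuntu-latest
    environment:
      name: github-pages
    steps:
      - uses: actions/deploy-pages@v4           # publishes the artifact to Pages
```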
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
CF: Cloudflare
DNS: Domain Name Service/System
IP: Internet Protocol
SSH: Secure Shell, for remote terminal access
SSL: Secure Sockets Layer, for transparent encryption
VNC: Virtual Network Computing, for remote desktop access
VPN: Virtual Private Network
VPS: Virtual Private Server (as opposed to shared hosting)
8 acronyms in this thread; the most compressed thread commented on today has 10 acronyms.
If you’re exposing via Cloudflare tunnels instead of pointing at your public IP, then everything other people have said covers it. If you are using your public IP, then it’s worth blocking non-Cloudflare IPs from accessing the site directly.
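A quick-and-dirty way to do that with ufw, pulling Cloudflare’s published ranges (the URLs are Cloudflare’s real lists; the port and the use of ufw are just assumptions about your setup):

```
#!/bin/sh
# Allow HTTPS only from Cloudflare's edge; drop anyone hitting 443 directly.
for net in $(curl -s https://www.cloudflare.com/ips-v4) \
           $(curl -s https://www.cloudflare.com/ips-v6); do
  ufw allow proto tcp from "$net" to any port 443
done
ufw deny 443/tcp
```

The ranges change occasionally, so it’s worth re-running this every so often.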
I’m definitely a fan of GitLab Pages for simple webpages I just want on the Internet. It’s nice to have the code hosted anyway (gives me that off-site backup safety, so my stuff at home can go down if needed).
If you set everything up with even moderate attention to the security involved, you’ll be fine. Sounds like you’re already there.
This is a common scenario, not a crazy idea or implementation. Just keep your shit up to date.
That’s one of the issues I’m concerned about. I’m happy enough to let things auto-update on a tight schedule, and capable enough to fix things if, e.g., Watchtower goes wrong or updates a container to a dodgy version, but what I don’t want is for “keeping things secure” to turn into a second job.
I run plenty of stuff off my home network, although I use VPSs now more for the higher availability than residential internet. So long as you put basic protections in place like fail2ban and a sensible firewall, you shouldn’t have any issues.
One option here is to host it internally, and then VPN or ssh tunnel to your network for access.
Keeping openssh or a VPN up to date and secure is a much simpler thing than a web framework.
Separate your network access and your services. You get in trouble trying to use your service to gate access to your network.
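The SSH-tunnel flavour of that is a one-liner; the gateway hostname and the internal address are placeholders:

```
# Forward a local port through the one exposed SSH host to the blog on the LAN
ssh -N -L 8443:192.168.10.5:443 you@gateway.example.com
# then browse https://localhost:8443
```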
If you’re concerned about your exposed services being hacked, why not learn how to protect them properly from bad actors? There exists a wide range of solutions that attempt to solve exactly this problem.
why not learn how to protect them properly from bad actors?
Exactly. One way to start is asking for help on forum with people who like to talk about this kind of thing. Hope OP finds their way.
Not exactly. OP mentions he’s interested in using Cloudflare/GitHub Pages, where the security is managed by those platforms, not the user.