What’s your go-to (secure) method for casting over the internet with a Jellyfin server?
I’m wondering what to use, and I’m pretty much a beginner at this.
Jellyfin isn’t secure and is full of holes.
That said, here’s how to host it anyway.
- Wireguard tunnel, be it tailscale, netbird, innernet, whatever
- A VPS with a proxy on it, I like Caddy
- A PC at home with Jellyfin running on a port, sure, 8096
If you aren’t using Tailscale, make your VPS your main hub for whatever you choose: Pi-hole, wg-easy, etc. Connect the proxy to Jellyfin through your chosen tunnel, with SSL; Caddy makes it easy.
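For the Caddy piece, a minimal sketch of what the VPS config can look like (jellyfin.example.com and the 10.0.0.2 tunnel address are placeholders for your own domain and your home machine’s tunnel IP):

```
# Caddyfile on the VPS (sketch)
jellyfin.example.com {
	# Caddy fetches and renews the TLS certificate automatically
	# 10.0.0.2 is the home box's address on the WireGuard tunnel
	reverse_proxy 10.0.0.2:8096
}
```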
Since Jellyfin isn’t exactly secure, secure it. Give it its own user and make sure your media isn’t writable by the user. Inconvenient for deleting movies in the app, but better for security.
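A rough sketch of what that looks like on a plain Linux install (the jellyfin service account, the mediaowner/media owner and group, and the /srv/media path are all placeholders for whatever your setup uses):

```sh
# media owned by a separate account, world-readable but not writable
sudo chown -R mediaowner:media /srv/media
sudo chmod -R o=rX /srv/media

# confirm the service account can read but not write
sudo -u jellyfin ls /srv/media
sudo -u jellyfin touch /srv/media/test && echo "still writable!" || echo "read-only for jellyfin"
```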
more…
Use fail2ban to stop intruders after failed login attempts; you can point fail2ban at Jellyfin’s logs for login failures and have it block IPs automatically.
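A minimal jail looks roughly like this; the log path and the exact failregex depend on your install and Jellyfin version, so treat it as a sketch and crib the current filter from the Jellyfin docs:

```ini
# /etc/fail2ban/jail.d/jellyfin.local
[jellyfin]
enabled  = true
port     = 80,443,8096
filter   = jellyfin
# adjust to wherever your install writes its logs
logpath  = /var/log/jellyfin/*.log
maxretry = 5
bantime  = 1h

# /etc/fail2ban/filter.d/jellyfin.conf
[Definition]
# approximate match for Jellyfin's "Authentication request ... has been denied" log line
failregex = ^.*Authentication request for .* has been denied \(IP: "?<HOST>"?\)
```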
More!
Use Anubis, and yes, I can confirm Anubis doesn’t interfere with Jellyfin connectivity and just works. Connect it to fail2ban and you can cook your own DDoS protection.
MORE!
SELinux. Lock Jellyfin down. Lock the system down. It’s work but it’s worth it.
I SAID MORE!
There’s a GeoIP blocking plugin for Caddy that you can use to limit Jellyfin’s access to your city, state, hemisphere, etc. You can also look into whitelisting in Caddy if everyone’s IP is static. If not, a DDNS server and a script to update Caddy on every change? It can get deep.
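The GeoIP part needs a third-party Caddy module, but plain IP whitelisting is built in; a sketch with made-up addresses:

```
jellyfin.example.com {
	# allow only these source addresses/ranges, drop everyone else
	@outsiders not remote_ip 203.0.113.10 198.51.100.0/24
	abort @outsiders
	reverse_proxy 10.0.0.2:8096
}
```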
Again, don’t do any of this and just use Jellyfin over wireguard like everyone else does (they don’t).
Wow, a “for dummies” guide for doing all this would be great 😊 know of any?
If you aren’t already familiar with the Docker Engine, you can use Play With Docker to fiddle around and spin up a container or two using the docker run command. Once you get comfortable with the command structure, you can move on to Docker Compose, which makes handling multiple containers easy using .yml files.
Once you’re comfortable with Compose, I suggest working up to reverse proxying with something like SWAG or Traefik, which let you put a domain in front of the IP, handle SSL certificates, and offer plugins that give you more control over how requests are handled.
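To make that concrete, a minimal Jellyfin compose file looks something like this (paths are placeholders; note the media mount is read-only, per the advice above):

```yaml
# docker-compose.yml (sketch)
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"
    volumes:
      - ./config:/config
      - ./cache:/cache
      - /srv/media:/media:ro   # read-only: Jellyfin can't modify the library
    restart: unless-stopped
```

Then docker compose up -d brings it up, and the same file grows naturally when you add a proxy container later.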
There really is no “guide for dummies” here, you’ve got to rely on the documentation provided by these services.
I would also love more details about accomplishing some of that stuff.
Nginx in front of it, open ports for HTTPS (and SSH), nothing more. Let’s Encrypt certificate and you’re good to go.
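Roughly, the nginx side looks like this (the domain and certificate paths are placeholders; certbot normally writes the ssl lines for you):

```nginx
server {
    listen 443 ssl;
    server_name jellyfin.example.com;

    # paths as written by certbot / Let's Encrypt
    ssl_certificate     /etc/letsencrypt/live/jellyfin.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/jellyfin.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8096;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # websocket support so the web client's live updates keep working
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```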
I would not publicly expose ssh. Your home IP will get scanned all the time and external machines will try to connect to your ssh port.
So? Pubkey login only and fail2ban to take care of resource abuse.
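If you do go that route, the relevant sshd_config lines are roughly these (reload sshd afterwards, and keep a working session open while you test):

```
# /etc/ssh/sshd_config
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
MaxAuthTries 3
```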
They can try all they like, man. They’re not gonna guess a username, key and password.
It doesn’t take that to leverage an unknown vulnerability in SSH, like:
That’s why it’s common best practice to never expose SSH to the raw internet if you can help it; but yes, it’s not the most risky thing ever either.
If you’re going to open something, SSH is far, far more battle-tested than much other software, even popular software. Pragmatically, if someone is sitting on a 0-day for SSH, do you genuinely think they’re gonna waste that on you and me? Either they’re gonna sell it to cash out as fast as possible, or they’ll sit on it while plotting an attack against someone who has real money. It is an unhealthy level of paranoia to suggest that SSH is not secure, or that it’s less secure than the hundreds of other solutions to this problem.
Here is my IP address, make me eat my words.
2a05:f6c7:8321::164 | 89.160.150.164
Are you giving random strangers legal permission to pentest you? That’s bold.
You got balls to post your public addresses like that… I mean, I agree with you wholeheartedly and I also have SSH port-forwarded on my firewall, but posting your public IP is next-level confidence.
Respect.
That is some big dick energy ngl
I linked a relevant vulnerability, but even ignoring that, pragmatically, do you really think they’d go after specific targets instead of just doing what they currently do? (That, by the way, is automating the compromise of vulnerable clients at mass scale to power botnets.) Any service you open on your device to the internet is inherently risky. SSH best practice is, and has been since the early days, not to expose it to the internet directly.
At worst, the failed attempts alone could amount to a denial of service and throw you out. So at least add an ever-increasing delay to those. Fail2ban is important.
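fail2ban can do the escalating part for you (the bantime.increment options exist in 0.11 and later); a sketch for jail.local:

```ini
# /etc/fail2ban/jail.local
[DEFAULT]
findtime = 10m
maxretry = 5
bantime  = 10m
# each repeat offender gets a longer ban, up to a week
bantime.increment = true
bantime.factor    = 2
bantime.maxtime   = 1w
```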
Sorry, misunderstanding here, I’d never open SSH to the internet, I meant it as “don’t block it via your server’s firewall.”
Change the port it runs on to be stupid high and they won’t bother.
Yeah hey what’s your IP address real quick? No reason
In 3 years I haven’t had a single attempted connection that wasn’t me. Once you get to the ephemeral ports nobody is scanning that high.
I’m not saying run no security or something. Just nobody wants to scan all 65k ports. They’re looking for easy targets.
Just nobody wants to scan all 65k ports.
Shodan has entered the chat.
Why would you need to expose SSH for everyday use? Or does Jellyfin require it to function?
Maybe leave that behind some VPN access.
Cool, even if I only understand some of the things you’ve said. Do you have a beginner guide I could follow?
Take a look at Nginx Proxy Manager and how to set it up. But you’ll need a domain for that. And preferably use a firewall of some sort on your server and only allow said ports.
I’ve looked at it a little, but didn’t understand most of it. I’m looking for a comprehensive beginner guide before going forward.
This isn’t a guide, but any reverse proxy allows you to limit open ports on your network (router) by using subdomains (thisPart.website.com) to route connections to an internal port.
So you set up a rev proxy for jellyfin.website.com that points to the port that jf wants to use. When someone connects to the subdomain, the reverse proxy is hit, it reads your configuration for that subdomain, and since it’s connected to your internal network (via the proxy) the request is routed to the port, and jf “just works”.
There’s an SSL cert involved, but that’s the basic understanding. Then you can add some other service at whatever.website.com and rinse and repeat. Now you can host multiple services without exposing the open ports directly, and it’s easy for users as there is nothing “confusing” like port numbers, IP addresses, etc.
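Whichever proxy you pick, the rinse-and-repeat part is literally one block per subdomain. Here’s the Caddy flavour, just to make the shape concrete (names, internal IPs, and ports are placeholders):

```
jellyfin.website.com {
	reverse_proxy 192.168.1.50:8096
}

whatever.website.com {
	reverse_proxy 192.168.1.50:8123
}
```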
If it’s just so you personally can access it away from home, use tailscale. Less risky than running a publicly exposed server.
I see everyone in this thread recommending a VPN or reverse proxy for accessing Jellyfin from outside the LAN. While I generally agree, I don’t see a realistic risk in exposing Jellyfin directly to the internet.
It supports HTTPS and certificates nowadays, so there’s no need for outside SSL termination anymore. (See Edit 2.) In my setup, which I’ve been running for some time, I’ve port-forwarded only Jellyfin’s HTTPS port to eliminate the possibility of someone ending up on plain HTTP and sending credentials unencrypted. I’ve also changed Jellyfin’s default port to a non-standard one to avoid basic port-scanning bots spamming login attempts. I fully understand that this falls into the security-through-obscurity category, but there’s no harm in it either.
Anyone wanna yell at me for being an idiot and doing everything wrong? I’m genuinely curious, as the sentiment online seems to be that at least a reverse proxy is almost mandatory for this kind of setup, and I’m not entirely sure why.
Edit: Thank you everyone for your responses. While I don’t agree with everything, the new insight is appreciated.
Edit 2: I’ve been informed that in fact the support for HTTPS will be removed in a future version. From the v10.11 release notes:
Deprecation Notice: Jellyfin’s internal handling of TLS/SSL certificates and configuration in the web server will be removed in a future version. No changes to the current system have been made in 10.11, however future versions will remove the current system and instead will provide advanced instructions to configure the Kestrel webserver directly for this relatively niche usecase. We strongly advise anyone using the current TLS options to use a Reverse Proxy for TLS termination instead if at all possible, as this provides a number of benefits
Anyone wanna yell at me for being an idiot and doing everything wrong?
Not yell, but: Jellyfin is dropping HTTPS support with a future update so you might want to read up on reverse proxies before then.
Additionally, you might want to check if Shodan has your Jellyfin instance listed: https://www.shodan.io/
Jellyfin is dropping HTTPS support with a future update[…]
What’s the source for this? I wasn’t able to find anything with a quick google search
Upgrade notes for 10.11 RCs
Thanks. This is kinda important info so I’ve edited my initial comment.
They aren’t saying anything about why they’re removing it.
Jellyfin has a whole host of unresolved and unmitigated security vulnerabilities that make exposing it to the internet a pretty poor choice.
And which of those are actually exploitable vulnerabilities? First off, yes, of course unauthenticated endpoints should be fixed, but with those there is no real damage to be done.
If you know the media path then you can request a playback, and if you get the user ids then you can get all users. That’s more or less it.
Good? No. But far from making exposing it a poor choice.
These are all holes in the Swiss cheese model.
Just because you and I cannot immediately think of ways to exploit these vulnerabilities doesn’t mean they don’t exist or are not already in use (including other endpoints or vulnerabilities not listed).
This is one of the biggest mindset gaps that exist in technology, which tends to result in a whole internet filled with exploitable services and devices. Which are more often than not used as proxies for crime or traffic, and not directly exploited.
Meaning that unless you have incredibly robust network traffic analysis, you won’t notice a thing.
There are so many sonarr and similar instances out there with minor vulnerabilities being exploited in the wild because of the same “Well, what can someone do with these vulnerabilities anyways” mindset. Turns out all it takes is a common deployment misconfiguration in several seedbox providers to turn it into an RCE, which wouldn’t have been possible if the vulnerability was patched.
Which is just holes in the swiss cheese model lining up. Something as simple as allowing an admin user access to their own password when they are logged in enables an entirely separate class of attacks. Excused because “if they’re already logged in, they know the password”. Well, not if there’s another vulnerability with authentication…
See how that works?
Nah, setting non-standard ports is sound advice in security circles.
People misunderstand the “no security through obscurity” phrase. If you build security as a chain, where the chain is only as good as the weakest link, then it’s bad. But if you build security in layers, like a castle, then it can only help. It’s OK for a layer to be weak when there are other layers behind it.
Even better, non-standard ports will make 99% of threats go away. They automate scans that are just looking for anything they can break. If they don’t see the open ports, they move on. Won’t stop a determined attacker, of course, but that’s what other layers are for.
As long as there’s real security otherwise (TLS, good passwords, etc), it’s fine.
If anyone says “that’s a false sense of security”, ignore them. They’ve replaced thinking with a cliche.
People misunderstand the “no security through obscurity” phrase. If you build security as a chain, where the chain is only as good as the weakest link, then it’s bad. But if you build security in layers, like a castle, then it can only help. It’s OK for a layer to be weak when there are other layers behind it.
And this is what should be sung from the hills and mountaintops. There’s some old infosec advice that you should have two or three honeypots, buried successively deeper behind your security, and only start to worry when the second or third gets hit; The first one getting hit simply means they’re sniffing around with automated port scanners and bots. They’re just throwing common vulnerabilities at the wall to see if any of them stick. The first one is usually enough for them to go “ah shit I guess I hit a honeypot. They must be looking for me now. Never mind.” The second is when you know they’re actually targeting you specifically. And the third is when you need to start considering pulling plugs.
It’s difficult to say exactly what all a reverse proxy adds to the security conversation for a handful of reasons, so I won’t touch on that, but the realistic risk of exposing your jellyfin instance to the internet is about the same as handing your jellyfin api over to every stranger globally without giving them your user account or password and letting them do whatever they’d like for as long as they’d like. This means any undiscovered or unintentional vulnerability in the api implementation could easily allow for security bypass or full rce (remote code execution, real examples of this can be found by looking at the history of WordPress), but by siloing it behind a vpn you’re far far far more secure because the internet at large cannot access the apis even if there is a known vulnerability. I’m not saying exposing jellyfin to the raw web is so risky it shouldn’t be done, but don’t buy into the misconception that it’s even nearly as secure as running a vpn. They’re entirely different classes of security posture and it should be acknowledged that if you don’t have actual use for internet level access to jellyfin (external users, etc, etc) a vpn like tailscale or zero tier is 100% best practice.
It feels like everything is a tradeoff and I think a setup like this reduces the complexity for people you share with.
If you added fail2ban along with alert email/notifications you could have a chance to react if you were ever targeted for a brute force attempt. Jellyfin docs talk about setting this up for anyone interested.
Blocking IP segments based on geography of countries you don’t expect connections from adds the cost of a VPN for malicious actors in those areas.
Giving Jellyfin its own VLAN on your network could help limit exposure to your other services and devices if you experience a 0day or are otherwise compromised.
Fail2ban isn’t going to help you when jellyfin has vulnerable endpoints that need no authentication at all.
Your comment got me looking through the Jellyfin GitHub issues. Are the bugs listed for unauthenticated endpoints what you’re referencing? It looks like the 7 open ones mention being able to view information about the Jellyfin instance or view the media itself. But this is just what was commented as possible; there could be more possibilities, especially if combined with other vulnerabilities.
Now realizing there are parts of Jellyfin that are known to be accessible without authentication, I’m thinking fail2ban is going to do less, but unless there are ways to do injection with the known bugs or a new 0-day, they will still need to brute-force a password to be able to make changes. I’m curious if there is anything I’m overlooking.
You remember when LastPass had a massive leak that included their production source code and demonstrated that their encryption security was horrible? That was a Plex vulnerability. All it takes is a zero day in one of the packages they’re using and you’re a prime target for ransomware.
You can see from the number of unauthenticated endpoints in their security backlog that security really has been an afterthought.
Unless you’re running in a non-privileged container with read only media, I definitely would not put that out on the open network.
I think the reason why it’s generally suggested to use a VPN is because it reduces the risk of intrusion to almost zero. Folks that are not network/sysadmin savvy would feel safer with the lowest-risk solution. Using the port-forward method, there could be configuration mistakes made which would unintentionally expose a different service or parts of their home network they don’t want exposed. And then there’s the possibility of application vulnerabilities, which is less of an issue when only VPN users can access the application. That being said, I do expose some services via port forwarding, but that’s only because I’m comfortable with ensuring it’s secure.
Reverse proxy is really useful when you have more than one service to expose to the internet because you only have to expose one port. It also automates the certificate creation & simplifies firewall rules inside the home network
I host it publicly accessible behind a proper firewall and reverse proxy setup.
If you are only ever using Jellyfin from your own, wireguard configured phone, then that’s great; but there’s nothing wrong with hosting Jellyfin publicly.
I think one of these days I need to make a “myth-busting” post about this topic.
Please do so, it’ll be very useful
I rent a cheap $5/mo VPS and use it to run a WireGuard server with wg-easy and Nginx Proxy Manager. Everything else runs on my home server, connected by WireGuard.
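The home-server side of that is just an ordinary WireGuard peer config; a sketch (keys, addresses, and the VPS endpoint are placeholders):

```
# /etc/wireguard/wg0.conf on the home server
[Interface]
PrivateKey = <home server private key>
Address    = 10.8.0.2/24

[Peer]
# the VPS running wg-easy and Nginx Proxy Manager
PublicKey           = <VPS public key>
Endpoint            = vps.example.com:51820
AllowedIPs          = 10.8.0.1/32
PersistentKeepalive = 25
```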
Tailscale + Caddy (automatic certificates FTW).
I use a WireGuard tunnel into my FRITZ!Box, and from there I just log in, because I’m then on my local network.
I access it through a reverse proxy (nginx). I guess the only weak point is if someone finds out the domain for it and starts spamming the login screen. But I’ve restricted access to the domain for most of the world anyway. Wireguard would probably be more secure, but it’s not always possible if, say, you’re on vacation and want to use it on the TV there…
Wireguard vpn into my home router. Works on android so fire sticks etc can run the client.
Synology worked for me. They have built in reverse proxy. As well as good documentation to install it on their machine. Just gotta configure your wifi router to port forward your device and bam you’re ready to rock and roll
Is putting it behind an Oauth2 proxy and running the server in a rootless container enough?
Over the top for security would be to set up a personal VPN and only watch it over the VPN. If you are enabling other users and you don’t want them on your network, using a proxy like nginx is the way.
Being new to this I would look into how to set these things up in docker using docker-compose.
My router has a VPN server built-in. I usually use that.