All the public Piped instances are getting blocked by YouTube, but do small self-hosted instances that are only used by a handful of users, or just yourself, still work? I'm thinking of just self-hosting it.
On a side note, if I do it, I'd also like to install the new EFY redesign, or is that branch too far behind?
Edit: As you can see in the replies, private instances still work. I also found the instructions for running the new EFY redesign here
My self-hosted instance is still working. I'm the only user, which probably keeps it from getting flagged. If they manage to kill Piped and the other similar options, YouTube is dead to me.
My self-hosted instance stopped working, though. From what the GitHub issues are saying, Google has started blocking instances once they get flagged.
Gonna add a dissenting "maybe, but not really". YT is really aggressive on this kinda stuff lately and the situation is changing month by month. YT has multiple ways of flagging your IP as potentially problematic, and as soon as you get flagged you're going to end up having to run quite an annoying mess of scripts that may or may not last in the long term. There are some instructions in a stickied issue on the Invidious repo.
Yes. My private instance works perfectly, and I'm so happy that I chose to self-host it rn lol. Currently I'm on a quest to self-host even more Piped components, e.g. RYD_Proxy and sponsorblock-mirror, make them buildable as a PKGBUILD, and make them compatible with Unix sockets: https://git.30p87.de/piped
Thanks, I'm gonna self-host it then. What you said about the Piped components sounds interesting, is there a list of them somewhere?
Compatible with Unix sockets?
I have dozens of services, and most of them start their own HTTP server using a regular TCP socket bound to localhost and a port. As most of them are web services, I run out of standard ports pretty fast - 80, 8000, 8080, and then 8069, 8070 etc. Keeping track is a pain, and Docker just makes it worse. Also, all non-web services have standard ports - 25 and 465 for SMTP/SMTPS, for example - which nmap identifies. In my current state, an attacker could just open a random port on my server and I wouldn't notice.
Unix sockets are basically just regular files that HTTP traffic is written to and read from. So e.g. gitlab-puma or piped-proxy creates the file /run/gitlab/gitlab.socket or /run/piped/proxy.socket respectively, and my reverse proxy (nginx) communicates with the service through that socket, just as it would through a regular TCP socket bound to localhost and a port.

Except Unix sockets are easily identifiable (they are named and placed in directories according to their service) and can be access-controlled much better. Instead of any service on the whole network being able to talk to the service (assuming no firewall on the device, which usually sits behind a consumer-grade router), only members of the http group (nginx) or the service's own user can read from and write to the socket. Assuming nginx is safe, and root, http and the service's user are not compromised, not even an attacker with access to the server can read any traffic: it's encrypted (HTTPS) up to nginx, and not readable by other users through the socket file. It's also a bit more performant.

The catch: very few programs support it, and many of the ones that do implement it incorrectly. Usually I would create a specific user for a service (or a sysusers.d file would), under which the service runs in systemd, and which therefore owns the socket file. The http user is then added to the service user's group, or the file's group is set to http. With 770 (or 660) permissions (read and write for the owner and all users in the group, including http) everything would be fine. However, sockets are usually created with 755, so they're only actually writeable by the owner and not the group, i.e. http, which makes communication impossible. And since just creating the file with the correct ownership and permissions beforehand makes the service believe the socket is already in use, I usually have to patch the actual program itself. Maybe I can do something with systemd's ExecStartPost= etc., though.
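For anyone curious what that looks like in practice, here's a minimal sketch in Python of an HTTP service that binds to a Unix socket and sets 660 permissions on it (the part most programs get wrong). The socket path /tmp/piped_demo.socket is made up for the demo; a real service would use something like /run/piped/ and also chgrp the file to http:

```python
import os
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical demo path; a real service would use /run/<service>/<name>.socket
SOCK = "/tmp/piped_demo.socket"

class UnixHTTPServer(HTTPServer):
    # Serve HTTP over an AF_UNIX socket instead of a TCP port
    address_family = socket.AF_UNIX

    def server_bind(self):
        # Remove a stale socket file from a previous run, then bind
        if os.path.exists(SOCK):
            os.unlink(SOCK)
        self.socket.bind(self.server_address)
        # 660: read/write for the owning user and the group (e.g. http),
        # nothing for anyone else
        os.chmod(SOCK, 0o660)
        # HTTPServer expects a host/port pair; fake them for AF_UNIX
        self.server_name = "unix"
        self.server_port = 0

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello over a unix socket\n")

    def log_message(self, fmt, *args):
        # AF_UNIX peers have no host:port address, which trips up
        # the default request logger, so silence it
        pass

server = UnixHTTPServer(SOCK, Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Talk to it the same way nginx would: connect to the file, speak plain HTTP
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(SOCK)
client.sendall(b"GET / HTTP/1.0\r\n\r\n")
print(client.makefile("rb").read().decode())
```

On the nginx side this is just `proxy_pass http://unix:/run/piped/proxy.socket:;` in the location block. And for services that create the socket with the wrong mode, an `ExecStartPost=` line in the systemd unit running chgrp/chmod on the file can fix up ownership after the fact without patching the program, though it can race with the service actually creating the socket.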
And the library piped-backend uses does not support Unix sockets at all, so I will need to extend the incredibly complicated library itself to get what I want. Damn.
So basically you're using Unix sockets between nginx and the services on the same host, for finer-grained access control and because you're running out of ports. That's really cool! I'll have to read up on this myself.