I assume it’s not nvidia. Yet I have no idea how to differentiate between them and neither do I know what a good price is.
Let’s say I don’t want to think about what the video type is. I just want a smooth experience.
Edit: thank you guys!
I hear good things about the Intel Arc A380. You basically only need it to convert video, and the Intel isn't bad at that for a not-too-steep price.
Thx. 130€? That’s surprisingly cheap.
I’ve got one and it handles about 3 transcodes from H.264 to AV1 fairly effortlessly.
It’s a champ for Jellyfin. It’s 2024 and Intel produced a really good GPU… the world is weird. :P
https://www.tomshardware.com/reviews/intel-arc-a380-review/5
It blows the 6950 XT and 3090 out of the water in transcoding performance, so I'd say it performs very well. And that was before the drivers got much, much better, so the gap is probably even bigger now.
I have one, it is fantastic.
Someone said that it is “not terribly performant,” but that doesn’t matter for transcoding. It can do multiple 4K streams of AV1 and HEVC. That is perfect.
According to benchmarks, it beat the 3080 and 6800 XT in transcoding performance when it was released. That's what you have to look at in this case; you aren't gaming on it.
Just remember to enable all of the correct kernel modules to get it working. On Debian you often have to clone the linux-firmware git repo manually and copy the files into /lib/firmware.
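For the module side, here's a minimal modprobe sketch, assuming Debian with the in-kernel i915 driver. The PCI ID and the GuC setting are assumptions (on recent kernels GuC/HuC loading is already the default for Arc), so check your own card before copying this:

```
# /etc/modprobe.d/i915.conf -- sketch, assuming Debian + i915 driver.
# Older kernels need Arc cards force-probed; 56a5 is the A380's PCI ID.
# Verify yours with: lspci -nn | grep -i vga
# enable_guc=3 loads both GuC submission and HuC firmware.
options i915 force_probe=56a5 enable_guc=3
```

After rebooting, `dmesg | grep i915` should show the GuC/HuC firmware loading; if it complains about missing files, that's when you need the firmware copied into /lib/firmware/i915/.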
How do I need to configure Jellyfin to get it working properly?
I added
device: - /dev/dri
And I tried
/dev/dri/renderD128
but neither works. Moreover, I enabled HEVC encoding, enabled hardware encoding, selected Intel QuickSync (QSV) hardware acceleration, and enabled hardware decoding for H.264, HEVC, etc. But with that enabled, transcoding doesn't work at all in the player.
I guess I'm failing somewhere. Any advice?
podman exec -it jellyfin /usr/lib/jellyfin-ffmpeg/vainfo
Trying display: drm
error: failed to initialize display
I managed to enable it by giving it privileged access.
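Privileged mode works but grants far more than needed. A compose-style sketch that passes only the render node and its group (the service name, image, and GID here are assumptions; look up the real GID on your host):

```yaml
# compose sketch -- service name, image, and GID are placeholders.
services:
  jellyfin:
    image: jellyfin/jellyfin
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128  # note "devices:", plural
    group_add:
      - "105"  # GID of the host "render" group; find it with: getent group render
```

With the device node present and the container user in the render group, the vainfo "failed to initialize display" error should go away without `--privileged`.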
Intel Arc A310. They’re $100, support AV1 and powered completely by the PCIe bus. Combine it with Tdarr and you can compress your media library down to half the size easily while still being able to easily stream to any device you have.
Side note: don't use hardware acceleration with Tdarr. You will get much better encodes with software encoding, which is great for archival and saving storage.
Use hardware acceleration with Jellyfin for transcoding on the fly for a client that needs it.
If you know what your clients' specs are, you can use Tdarr to re-encode everything to what they need, and then you won't have to transcode anything with Jellyfin.
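The idea above (pre-encode to whatever every client direct-plays, so Jellyfin never transcodes) can be sketched as a tiny check. The client names and codec sets here are made up for illustration:

```python
# Hypothetical sketch: which clients would force a Jellyfin transcode
# for a given file codec, i.e. whether a Tdarr re-encode is worthwhile.

def needs_transcode(file_codec: str, client_codecs: dict[str, set[str]]) -> list[str]:
    """Return the clients that can't direct-play this codec."""
    return [name for name, codecs in client_codecs.items()
            if file_codec not in codecs]

clients = {
    "living-room TV": {"h264", "hevc"},
    "old tablet": {"h264"},
    "desktop browser": {"h264", "av1"},
}

# An HEVC file direct-plays on the TV but not the tablet or browser:
print(needs_transcode("hevc", clients))
```

If the list comes back empty for a target codec, re-encoding the library to it means zero on-the-fly transcodes.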
What problem are you trying to solve?
I'm trying to solve the problem that my CPU isn't powerful enough, whereas my GPU isn't supported, so I need a new one, and I want a seamless experience.
What CPU do you have?
I’ve run Jellyfin without transcoding from a raspberry pi for 2-3 streams without stuttering.
Without transcoding it works, but if there's something off with the video file it stutters.
That will happen, yes, but the bigger question then is what’s wrong with the video file.
No idea :) it works on my phone…
The most impressed I’ve been with hardware encoding and decoding is with the built in graphics on my little NUC.
I’m using a NUC10i5FNH which was only barely able to transcode one vaguely decent bitrate stream in software. It looked like passing the hardware transcoding through to a VM was too messy for me so I decided to reinstall linux straight on the hardware.
The hardware encoding and decoding performance was absolutely amazing. I must have opened up about 20 jellyfin windows that were transcoding before I gave up trying and called it good enough. I only really need about 4 maximum.
The graphics on the 10th-generation NUCs is the same sort of thing that's on the 9th- and 10th-gen desktop CPUs, so if you have an Intel CPU with onboard graphics, give it a try.
It’s way less trouble than the last time I built a similar setup with NVidia. I haven’t tried a Radeon card yet, but the jellyfin docs are a bit more negative about AMD.
I didn't even know Jellyfin had hardware transcoding until this post, but I'm with this guy: Intel's QSV is great. I have my Plex server running bare metal on a gen 2 HP Chromebox. It's dual core, but hardware transcoding with Intel QSV will do 20+ 1080p streams.
This. I used a P1000 for transcoding and eventually switched to a 12th gen Intel chip with integrated UHD 770 graphics. It completely blew me away. Insanely low power draw and barely breaks a sweat transcoding multiple streams. Consider this route over a GPU if you can.
Intel integrated graphics is pretty phenomenal for ~5-user HTPC setups, and NUCs are basically the best Intel products ever. Nothing better than it just working out of the box.
Intel integrated graphics or if you want to go overkill go with an Arc GPU.
Avoid AMD
Avoid AMD? Why do you say that?
It is terrible for media hardware acceleration. I'm saying that from both personal experience and the Jellyfin wiki.
I’ve been using my 6700XT for about 4 years with Emby and have had 0 issues
For all of my streaming devices it re-encodes to H.264, which AMD can easily do. I have yet to have any files fail to decode/encode.
No good hardware acceleration for video.
Do you have first hand experience?
My 6700XT is calling bullshit
5700 in my server works just fine too, no difficulty setting it up. Running in Docker. Even does HDR tone mapping!
But nVidia had better quality!! - some jackass who doesn’t know their ass from their elbow
I’d look into AV1 decoding benchmarks, regardless of NVIDIA vs AMD, as I’ve been using NVIDIA on Jellyfin for a while with no issues.
HEVC is less relevant IMO, as it's not available in browsers due to licensing restrictions (ffmpeg/mpv work fine), so I'd focus on AV1 capability, which many cards don't have.
I can’t get my nvidia to work with it :(
What model of graphics card are you having problems with?
NVIDIA Corporation TU116 GeForce GTX 1650
For NVIDIA, AV1 decode is supported on the RTX 3000 series; encode + decode on the RTX 4000 series.
For Intel Arc, AV1 encode + decode is present on all Arc Alchemist GPUs.
For AMD, AV1 decode is on the RX 6000 series; encode + decode on all RX 7000 series GPUs.
As someone else has recommended, a low end Intel Arc alchemist GPU is pretty great for stuff like Jellyfin, very low price to entry for gfx accelerated AV1 transcoding.
The NVIDIA 1650 can't do AV1, but it can handle HEVC just fine. I'm currently using a 1660 on mine, and before that it was a 950. Unless you need more than 3 streams at a time, you should be able to get it working.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
Git: popular version control system, primarily for code
NUC: Next Unit of Computing, a brand of Intel small computers
PCIe: Peripheral Component Interconnect Express
3 acronyms in this thread; the most compressed thread commented on today has 5 acronyms.
[Thread #742 for this sub, first seen 8th May 2024, 06:15]
I'm using NVIDIA right now with a 3060. It doesn't use much power, I got it pretty cheap on eBay, and it encodes/decodes everything except AV1 encoding, which I don't have a use for. Looking at the charts in the link below, if you need to encode AV1 you'd need a 4000 series.
https://developer.nvidia.com/video-encode-and-decode-gpu-support-matrix-new
I've found NVIDIA to work pretty well for Jellyfin. I use Docker with the NVIDIA Container Toolkit, and it just worked with hardware encoding out of the gate. I also have some other Docker containers running gen AI, and the 3060 handles them well as long as the model fits in VRAM.
That’s not power efficient
I think it entirely depends on your use case and hardware. I have a rack server; I need the extra power relatively frequently, as well as the 16x 2.5" bays and the 4 NICs. A rack server is a fairly power-efficient package to get all those features in. However, it means that I am limited to discrete graphics, as Xeons don't have Intel QSV. There's also no monitor connected, and no 3D rendering happening, so the card is gonna idle at under 5W and probably only use 20-30W while transcoding. Compared to a system that's idling at ~250W, that's nothing.
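The power argument is easy to sanity-check with rough numbers. The wattages below are the approximate figures from the comment above, not measurements:

```python
# Back-of-the-envelope yearly energy for a discrete GPU idling in an
# already-running server, using the rough figures quoted above.

HOURS_PER_YEAR = 24 * 365  # 8760

def kwh_per_year(watts: float) -> float:
    """Continuous draw in watts -> kilowatt-hours per year."""
    return watts * HOURS_PER_YEAR / 1000

gpu_idle = kwh_per_year(5)       # ~5 W card at idle
server_idle = kwh_per_year(250)  # ~250 W server at idle

print(f"GPU idle:    {gpu_idle:.0f} kWh/yr")    # ~44 kWh/yr
print(f"Server idle: {server_idle:.0f} kWh/yr") # ~2190 kWh/yr
```

The card's idle draw is roughly 2% of the server's, which is the point being made: on a machine that is on anyway, the GPU is noise.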
My RX 580 does the job just fine. It does 1080p at 3x realtime for HEVC, and 10x for H.264.
They’re dirt cheap second hand.
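Those multipliers translate directly into wall-clock time. A quick worked example using the speeds quoted above:

```python
# At Nx realtime, a file of D minutes transcodes in D / N minutes.

def transcode_minutes(duration_min: float, speed: float) -> float:
    return duration_min / speed

# A 2-hour film at the RX 580 speeds quoted above:
print(transcode_minutes(120, 3))   # HEVC at 3x realtime -> 40.0 minutes
print(transcode_minutes(120, 10))  # H.264 at 10x realtime -> 12.0 minutes
```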
New Lemmy Post: What’s a good graphics card for jellyfin? (https://lemmyverse.link/lemmy.world/post/15099079)