I’m currently getting a lot of timeout errors and delays processing the analysis. What GPU can I add to this? Please advise.
The i7-6700 has an Intel iGPU that will handle heavy transcoding just fine using Quick Sync.
It will even do really fast object detection with OpenVINO, with minimal CPU usage. At least in Frigate, both of those things work extremely well.
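If you want to confirm the iGPU is actually visible to OpenVINO before pointing a detector at it, here’s a minimal sketch, assuming the `openvino` pip package and Intel’s compute runtime are installed on the host or in the Frigate container:

```python
# Sanity check that OpenVINO can see the i7-6700's integrated GPU.
# Assumes the `openvino` pip package and Intel's compute runtime
# (e.g. intel-opencl-icd) are installed.
from openvino.runtime import Core

core = Core()
print(core.available_devices)  # expect something like ['CPU', 'GPU'] when the iGPU is usable
```

If `GPU` shows up in that list, Frigate’s OpenVINO detector can target the iGPU instead of running detection on the CPU.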
I bought that desktop exactly for that reason. The video recording itself seems to work fine, but the AI model seems to be struggling sometimes, and even when it works, it takes about half a second or more to make a classification. That’s what I want to improve with the GPU. I’m reading up on OpenVINO, and it seems impressive, but apparently only in Frigate. Do you have any experience with Frigate vs Blue Iris? What are your thoughts?
I’ve never used anything else so I can’t really compare, but Frigate works well.
Blue Iris is Windows-only and really resource-heavy, so that’s why I’ve never used it for more than a quick test.
I’m glad you posted this because I need similar advice. I want a GPU for Jellyfin transcoding and running Ollama (for a local conversation agent for Home Assistant), splitting access to the single GPU between two VMs in Proxmox.
I would also prefer it to be AMD as a first choice or Intel as a second, because I’m still not a fan of Nvidia for their hostile attitude towards Linux and for proprietary CUDA.
(The sad thing is that I probably could have accomplished the transcoding part with just integrated graphics, but my AMD CPU isn’t an APU.)
The problem with AMD graphics cards is that the performance CUDA, xformers, and PyTorch deliver on Nvidia cards blows anything AMD has away by an order of magnitude.
I have no idea why AMD GPUs are so trash when it comes to anything involving generative AI/LLMs, DLSS, Jellyfin transcoding, or even raytracing; I would recommend waiting for their upcoming GPU announcements.
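For what it’s worth, PyTorch’s ROCm builds expose the same `torch.cuda` API as the CUDA builds, so if you do end up testing an AMD card, a quick check like this (a minimal sketch, assuming a CUDA or ROCm build of PyTorch is installed) shows which backend you actually got:

```python
# Reports which PyTorch GPU backend is active.
# Assumes a CUDA build (Nvidia) or ROCm build (AMD) of PyTorch is installed.
import torch

print(torch.cuda.is_available())   # True if an Nvidia or AMD GPU is usable
print(torch.version.cuda)          # CUDA version string on Nvidia builds, None on ROCm builds
print(torch.version.hip)           # HIP/ROCm version string on AMD builds, None on CUDA builds
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # the card PyTorch will actually use
```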
You can try a Coral TPU with CodeProject.AI. I used it for a bit, but I have the USB version and it has heating/disconnect issues at times (there’s a quick check for that sketched below).
I don’t like using the CPU for AI, so I plan to offload the AI to an unRAID server with an Nvidia 2070 Super, combining Plex decoding and AI tasks on that box.
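On the USB disconnect issue, a small sketch for confirming the Coral is still enumerated (assuming the `pycoral` package and the Edge TPU runtime are installed) is:

```python
# Lists Coral Edge TPUs currently visible to the host (USB or PCIe).
# Assumes the pycoral package and the Edge TPU runtime (libedgetpu) are installed.
from pycoral.utils.edgetpu import list_edge_tpus

print(list_edge_tpus())  # an empty list means the USB Coral has dropped off the bus
```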