I tried to follow the docs, but somehow it doesn’t work as expected. How does your compose file look, and what do you choose in the settings?
My current setup (not working):
services:
  # original source: https://jellyfin.org/docs/general/installation/container/
  jellyfin:
    image: docker.io/jellyfin/jellyfin:latest
    container_name: jellyfin
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Berlin
      - JELLYFIN_PublishedServerUrl=https://my.url
    volumes:
      - ./config:/config:Z
      - ./cache:/cache:Z
      - ./media:/media:rw
    ports:
      - 8096:8096
    # no need for https since reverse proxy and no local discovery
    restart: always
    devices:
      - /dev/dri/:/dev/dri/
      - /dev/dri/renderD128:/dev/dri/renderD128
    group_add:
      - 105
    privileged: true
I do not want a privileged container, but I’m experimenting.
- VAAPI is selected
- VA-API device is set to /dev/dri/renderD128
- hardware decoding is enabled for: H264, HEVC, HEVC 10bit and VP9 10bit
- hardware encoding is enabled
- encoding in HEVC is allowed
To get the group ID I ran getent group render | cut -d: -f3 on the host, which returned 105.
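To cross-check the host GID against what the container sees, something like this should work (a sketch; container name and paths as in the compose above):

# on the host: GID of the render group
getent group render | cut -d: -f3

# inside the container: owner, group and mode of the render node
podman exec -it jellyfin stat -c '%U:%G (%u:%g) %a' /dev/dri/renderD128

# inside the container: which groups the container user actually has
podman exec -it jellyfin id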
$ podman exec -it jellyfin /usr/lib/jellyfin-ffmpeg/vainfo
Trying display: drm
libva info: VA-API version 1.21.0
libva info: Trying to open /usr/lib/jellyfin-ffmpeg/lib/dri/iHD_drv_video.so
libva info: Found init function __vaDriverInit_1_21
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.21 (libva 2.21.0)
vainfo: Driver version: Intel iHD driver for Intel(R) Gen Graphics - 24.2.1 (0593864)
vainfo: Supported profile and entrypoints
This command returns VA-API info. Does that mean I can only (or should) select this method? Or is QSV also possible? Which is better?
podman exec -it jellyfin /usr/lib/jellyfin-ffmpeg/ffmpeg -v verbose -init_hw_device vaapi=va -init_hw_device opencl@va
ffmpeg version 6.0.1-Jellyfin Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 12 (Debian 12.2.0-14)
configuration: --prefix=/usr/lib/jellyfin-ffmpeg --target-os=linux --extra-version=Jellyfin --disable-doc --disable-ffplay --disable-ptx-compression --disable-static --disable-libxcb --disable-sdl2 --disable-xlib --enable-lto --enable-gpl --enable-version3 --enable-shared --enable-gmp --enable-gnutls --enable-chromaprint --enable-opencl --enable-libdrm --enable-libass --enable-libfreetype --enable-libfribidi --enable-libfontconfig --enable-libbluray --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libopenmpt --enable-libdav1d --enable-libsvtav1 --enable-libwebp --enable-libvpx --enable-libx264 --enable-libx265 --enable-libzvbi --enable-libzimg --enable-libfdk-aac --arch=amd64 --enable-libshaderc --enable-libplacebo --enable-vulkan --enable-vaapi --enable-amf --enable-libvpl --enable-ffnvcodec --enable-cuda --enable-cuda-llvm --enable-cuvid --enable-nvdec --enable-nvenc
libavutil 58. 2.100 / 58. 2.100
libavcodec 60. 3.100 / 60. 3.100
libavformat 60. 3.100 / 60. 3.100
libavdevice 60. 1.100 / 60. 1.100
libavfilter 9. 3.100 / 9. 3.100
libswscale 7. 1.100 / 7. 1.100
libswresample 4. 10.100 / 4. 10.100
libpostproc 57. 1.100 / 57. 1.100
[AVHWDeviceContext @ 0x55ef07507480] Trying to use DRM render node for device 0.
[AVHWDeviceContext @ 0x55ef07507480] libva: VA-API version 1.21.0
[AVHWDeviceContext @ 0x55ef07507480] libva: Trying to open /usr/lib/jellyfin-ffmpeg/lib/dri/iHD_drv_video.so
[AVHWDeviceContext @ 0x55ef07507480] libva: Found init function __vaDriverInit_1_21
[AVHWDeviceContext @ 0x55ef07507480] libva: va_openDriver() returns 0
[AVHWDeviceContext @ 0x55ef07507480] Initialised VAAPI connection: version 1.21
[AVHWDeviceContext @ 0x55ef07507480] VAAPI driver: Intel iHD driver for Intel(R) Gen Graphics - 24.2.1 (0593864).
[AVHWDeviceContext @ 0x55ef07507480] Driver not found in known nonstandard list, using standard behaviour.
[AVHWDeviceContext @ 0x55ef07538b40] Failed to get number of OpenCL platforms: -1001.
Device creation failed: -19.
Failed to set value 'opencl@va' for option 'init_hw_device': No such device
Error parsing global options: No such device
within the container:
# ls -l /dev/dri
total 0
crw-rw----+ 1 nobody nogroup 226, 1 May 17 13:22 card1
crw-rw-rw-. 1 nobody nogroup 226, 128 May 17 13:22 renderD128
# whoami
root
#
$ getsebool container_use_dri_devices
container_use_dri_devices --> on
$ sudo lshw -c video | grep driver
configuration: driver=i915 latency=0
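Since this is rootless podman, another thing worth checking is whether GID 105 is even mapped into the user namespace (a sketch, run on the host):

# GID mappings of the rootless user namespace
podman unshare cat /proc/self/gid_map

# and what the container process itself sees
podman exec -it jellyfin cat /proc/self/gid_map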
I can at least speak for rootless podman: I spent some hours on it, and every approach I tried ended in permission issues.
I gave up on trying to do it properly and just set the permissions of the /dev/dri device to 666 so that my podman container can use the GPU for transcoding.
Part of the issue with the container images I tried is that they create a new user with whatever uid:gid I pass to the container. So even if my non-root user is part of the render group, the new user inside the container is not, can’t write to /dev/dri/renderD128 (the GPU), and transcoding doesn’t work.
That’s where I left the troubleshooting, because it was becoming a headache.
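Concretely, the workaround boils down to this on the host (wide-open permissions, as pointed out further down, and it does not survive a reboot):

sudo chmod 666 /dev/dri/renderD128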
I recommend this: https://www.zigbee2mqtt.io/guide/installation/20_zigbee2mqtt-fails-to-start.html#method-1-give-your-user-permissions-on-every-reboot
With that (and the tip right after it) I was able to sort out my permission issues.
It should apply to a GPU too.
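The idea is to give the device the right group and mode on every boot; for the render node that could be a udev rule along these lines (a sketch, the filename is arbitrary):

# /etc/udev/rules.d/99-render-node.rules
SUBSYSTEM=="drm", KERNEL=="renderD*", GROUP="render", MODE="0660"

# then reload the rules
sudo udevadm control --reload-rules && sudo udevadm trigger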
Thanks for sharing your experience with it.
Just add your local user to the render group.
I believe you want to use QSV (VAAPI is there for older processors, IIRC).
For running rootless, you could try adding group-add keep-groups (there’s an explanation from Red Hat about keep-groups).
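In practice that looks roughly like this (a sketch; IIRC keep-groups needs the crun runtime, and I’m assuming podman-compose passes group_add straight through):

# podman CLI
podman run -d --name jellyfin --device /dev/dri/renderD128 --group-add keep-groups docker.io/jellyfin/jellyfin:latest

# compose equivalent
group_add:
  - keep-groups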
Thanks, it does not change anything.
I have found that VAAPI works better and doesn’t have issues.
VAAPI is the “standard” interface for hardware encoding/decoding on Linux. It should work with any GPU using the open-source drivers and Mesa.
I don’t know how QSV can be installed; AMF, the AMD equivalent, is limited to their proprietary driver.
I think VAAPI is still how the GPU is accessed either way, but you can use either the Intel libva driver (VAAPI) or the Intel media driver (QSV) (see the Arch wiki page), if I’m interpreting this all correctly.
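If you want to see which of the two drivers actually gets picked up, you can pin it with an environment variable in the compose file (a sketch; iHD is the Intel media driver, i965 the legacy intel-vaapi-driver):

environment:
  - LIBVA_DRIVER_NAME=iHD   # or i965 for the legacy driver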
Are you absolutely sure that you have the i915 firmware installed and enabled?
If you have gone through these steps of adding the modules: https://jellyfin.org/docs/general/administration/hardware-acceleration/intel/#low-power-encoding
and it doesn’t work, you may have to manually download the linux-firmware git repository, extract the i915 folder, and place it in your firmware folder.
That is how I got Jellyfin working on my A380 after pulling my hair out over it.
Please check and post your dmesg from startup. You should see GuC and HuC being enabled.
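Something along these lines should show it (a sketch; firmware path and initramfs command depend on the distro):

# check whether GuC/HuC firmware loaded at boot
sudo dmesg | grep -iE 'guc|huc'

# manual firmware install, if the distro package is missing the blobs
git clone --depth=1 https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git
sudo cp -r linux-firmware/i915 /lib/firmware/
sudo update-initramfs -u   # Debian/Ubuntu; dracut -f on Fedora and friends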
Have you tried verifying that it’s not the group permissions? You could temporarily set permissions with chmod 666 /dev/dri/renderD128.
My older Skylake processor has slightly worse video quality (occasional artifacts) with QuickSync.
chmod 666 /dev/dri/renderD128
Thanks, it does not change anything.
Hmm. Now that I read your first output (in privileged mode) properly, I don’t see any errors, or am I missing something? It seems VAAPI loaded successfully?!
I should not need to run a container in privileged mode. What’s the container good for then?
Yes, there is no error, yet I was not able to transcode. I had transcoding to x265 and AV1 enabled. Since disabling AV1 it works, though I have to check again. Presumably the problem was that it tried to encode to AV1 and that failed. I still need to run it in privileged mode, though.
Hmm. I wasn’t trying to recommend privileged or non-privileged mode, just trying to use that to zero in on the actual issue.
Alright, if it’s just AV1, maybe try a tool like vainfo to find the supported codecs. I think ffmpeg fails if an unsupported codec is explicitly specified. But take care whether encoding is mentioned; some hardware has decoding capabilities only. It’s a complicated topic. It also took me two whole evenings to get the permissions and everything right. I’m using systemd-nspawn, so my experience doesn’t directly translate, and it’s not any easier than Docker.
For video acceleration I found the Arch wiki somewhat helpful. It’s a lot of info and not specific to Docker, but maybe it helps anyway: https://wiki.archlinux.org/title/Hardware_video_acceleration
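For the AV1 question specifically, something like this should show what the hardware can actually do (a sketch, reusing the bundled vainfo from the output above):

podman exec -it jellyfin /usr/lib/jellyfin-ffmpeg/vainfo 2>/dev/null | grep -i av1

Profiles that only list VAEntrypointVLD are decode-only; encode support shows up as VAEntrypointEncSlice or VAEntrypointEncSliceLP.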
Do not use 666; that is completely open permissions. The proper way is to add your user to the render group.
First add your user to the video and render groups. Additionally, set the pass-through to /dev/dri/renderD128:/dev/dri/renderD128
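Roughly like this (a sketch; group names can differ between distros, and you need to log out and back in for the group change to apply):

# on the host
sudo usermod -aG render,video $USER

# in the compose file
devices:
  - /dev/dri/renderD128:/dev/dri/renderD128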
Another thing: I do not believe that podman compose is well maintained or stable. I would use podman from the command line.
In order for me to get transcoding working I had to have a window manager running. I also had to set Jellyfin to only try to transcode in supported formats.