GPU Passthrough #22
To pass through your Intel iGPU, add the following lines to your compose file:

```yaml
environment:
  GPU: "Y"
devices:
  - /dev/dri
```

However, this feature is mainly intended so that you can transcode video files in Linux using hardware acceleration. I do not know if it also works as a display adapter (for accelerating the desktop), and I never tried it in Windows, so it might not work at all. But if you want to test it, go ahead!

I don't know what you mean by Virtual Machine Manager? If you mean the package that Synology provides on their NAS, then that seems normal, as it only connects to its own VMs, not to any random QEMU VM. See also my other project for running Synology DSM in Docker: https://github.com/vdsm/virtual-dsm. If you mean something else, then please provide some more details about what you were trying to do.
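For reference, a complete minimal compose file with those lines added might look like this (a sketch; the remaining entries follow the project's README and may need adjusting for your setup):

```yaml
services:
  windows:
    image: dockur/windows
    environment:
      GPU: "Y"
    devices:
      - /dev/kvm
      - /dev/dri
    cap_add:
      - NET_ADMIN
    ports:
      - 8006:8006
```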
I meant this project: https://virt-manager.org/ though it requires SSH on the client (in this case the windows-in-docker container) to connect to it. Also, I don't have an Intel GPU, just an AMD iGPU and an NVIDIA dGPU. I thought maybe it would be possible to pass through the NVIDIA one so it could be used as an output device.
You can SSH to the container, if you do something like:

```yaml
environment:
  HOST_PORTS: "22"
ports:
  - 22:22
```

But I have no experience with Virt-Manager, so I cannot say if that works. I assume not, because it seems to be related to libvirt, which this container does not use. As for the NVIDIA GPU, I am sure it is possible to do it. But it's kind of complicated, because it needs to pass through both Docker and QEMU. Unfortunately I don't have any NVIDIA or AMD GPU myself, so someone else has to submit the code, because I have no way to test it.
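To check whether the forwarded port is reachable at all, something like this works (hypothetical host address):

```bash
# Probe the forwarded SSH port from another machine
nc -zv <docker-host-ip> 22
```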
Hm, I already tried forwarding port 22, but I couldn't connect to the container with SSH. It seems to me like openssh-server is missing, but even then it somehow doesn't work. If you could give me some information on how I would get started passing through an NVIDIA GPU, I could try it myself and provide the code afterwards.
The container forwards all the traffic to the VM, and Windows does not respond on port 22. So this will not work until you enable an SSH server inside Windows. As for getting the passthrough to work: you can add additional QEMU parameters via the `ARGUMENTS` environment variable.
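For example, something along these lines (a sketch; the vfio-pci address and IOMMU group number are placeholders, and the device must already be bound to vfio-pci on the host):

```yaml
environment:
  ARGUMENTS: "-device vfio-pci,host=01:00.0"
devices:
  # The VFIO control node plus the device's IOMMU group
  - /dev/vfio/vfio
  - /dev/vfio/13
```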
Any advice on passing through an NVIDIA GPU?
I can test tomorrow, but for Unraid it should be, in extra params: `--runtime=nvidia`, plus a new variable with Name: `Nvidia GPU UUID`, Key: `NVIDIA_VISIBLE_DEVICES`, and Value: `all` (or, if you have more than one NVIDIA GPU, the GPU UUID).
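Translated out of the Unraid template UI, the equivalent `docker run` flags would look something like this (a sketch; the non-GPU flags follow the project README):

```bash
# Use a specific GPU UUID from `nvidia-smi -L` instead of "all"
# if you have more than one NVIDIA GPU.
docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all \
  --device /dev/kvm --cap-add NET_ADMIN -p 8006:8006 dockur/windows
```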
Not working with Unraid. Extra param `--device='/dev/dri'` does not work, and neither does adding a new device `/dev/dri/`.
I think this only passes the NVIDIA GPU capabilities to the container, not to the VM, but I might be wrong.
That's correct. Tried this now and my VM does not see the GPU.
Almost there (or not). Using Unraid with an Intel iGPU (13th-gen Intel UHD 770), modifying the template to include 2 new entries (device and variable). Apparently everything is detected and the drivers are installed, but inside the VM nothing is detected or shows any indication of being present. Can you help with the next step (if possible)?
It totally depends on what you are trying to achieve. In the screenshot I see the GPU adapter in the Windows device manager, so that means it works and everything is okay. It's a virtual graphics card that can be used for hardware acceleration, for example when encoding video formats or running certain calculations. All these tasks will be performed by your Intel GPU through this virtual device. But if your goal is to use the HDMI display output to connect a monitor, I do not think this graphics card is fit for that purpose. So it all depends on what you are trying to do.
I guess he is right. I am in Steam Link right now, downloading a small game to test it out (Intel iGPU, 13700K). I am testing it with Pacify; it should run fine. I stream from my Unraid server to my iPhone 15 Pro Max over WiFi 5. So no: the game needs a DirectX device, which is not installed :D
@domrockt It's possible that this "GPU DOD" device has no DirectX. QEMU supports many different video devices, and this device is for transcoding videos, so obviously we need to tell QEMU to create a different device that is more suitable for gaming. I will see if I can fix it, but it's a bit low on my priority list, so if somebody else has the time to figure out how to do it in QEMU, it would be appreciated.
I guess this one: https://github.com/virtio-win/kvm-guest-drivers-windows but this is my wit's end for now :D
It's not possible to use any type of acceleration (e.g. YouTube, decoding/encoding files, ...) inside the VM, and zero activity is detected on the host side. Apparently the VM works the same way with or without the iGPU passthrough; nothing is different.
I know for certain that it works in Linux guests, as I use the same code in my other project ( https://github.com/vdsm/virtual-dsm ) where the GPU is used for accelerating facial recognition in photos, etc. I never tried it in a Windows guest, so it's possible that it does not work there (or needs special drivers to be installed). I created this container only one day ago, and even much less advanced features (like an audio device for sound) are not implemented yet. So it's better to focus first on getting the basics finished; very complicated/advanced stuff like GPU acceleration will be one of the last things on the list, sorry.
I also use passthrough in several other containers (Plex, Jellyfin, Frigate, ...). Being able to achieve it in this container would be a big/great thing (for applications designed to only work with Windows). Sharing the iGPU between containers, instead of dedicating it to a single VM, can be a very economical, versatile and power-efficient approach. Looking forward to hearing from you in the future on this matter. Despite this "issue", thanks for your hard work. 👍
That is definitely reasonable. I appreciate your hard work.
I did some investigation, and it seems it's possible to have DirectX in a Windows guest. The other option is PCI passthrough, but it is less nice in the sense that it requires exclusive access to the device, so you cannot use the same device for multiple containers. And it is very complicated to support, because depending on the generation of the iGPU you will need to use different methods, for example SR-IOV for very recent Intel Xe graphics, GVT-g for others, etc. It will be impossible to add a short and universal set of instructions to the FAQ that will work for most graphics cards.
I have to say that I was intrigued by the idea of running this container as a Windows game-streaming server and passing my NVIDIA GPU through to the VM... but looking through this thread and the qemus/qemu-docker project, I understand that it would be a huge undertaking :) I'll probably find some other use case for this though :D
So using this project as a game-streaming server is not possible? Is there any alternative game-streaming solution that is hosted in Docker?
Not that I know of. My plan was to have a Windows VM on my server that could run all the games my Linux PC can't, but NVIDIA and Docker is not a fun project :-/ I don't know how to pass an NVIDIA card through to a container without an extra nvidia-toolkit container as a layer in between; at least that's my guess :-) I could look into it a bit more. If the container can access the GPU, QEMU should be able to use it with some modification, I guess.
@kieeps - Maybe I can be of help... :)
@tarunx - Maybe try out KasmVNC - they have a Steam container, so it might be possible...
Passing a GPU through to QEMU is quite the process; doing it in a container just adds an extra layer of issues. Typically you have to enable vfio_pci and the IOMMU for your CPU type in the kernel modules, then use options to pass the device through to QEMU. You can remotely connect to a running QEMU instance (virt-manager is typically what people use), but then adding Docker/Podman on top is a whole other thing. I bet someone has done it, but it doesn't necessarily sound easy. What I did was install Nix on a remote machine and follow this guide: https://alexbakker.me/post/nixos-pci-passthrough-qemu-vfio.html and there are a lot of articles about the options QEMU needs. I'm curious to see if someone tries this on top of Docker/Podman.
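As a rough sketch of the host-side steps just described (the kernel parameters and PCI IDs below are examples, not taken from this thread):

```bash
# 1) Enable the IOMMU on the kernel command line, then reboot:
#      intel_iommu=on iommu=pt   (Intel)
#      amd_iommu=on iommu=pt     (AMD)

# 2) Have vfio-pci claim the GPU at boot, by vendor:device ID from `lspci -nn`:
echo "options vfio-pci ids=10de:1b80,10de:10f0" > /etc/modprobe.d/vfio.conf

# 3) Hand both device functions (video + audio) to QEMU:
qemu-system-x86_64 -enable-kvm -m 8G \
  -device vfio-pci,host=01:00.0 \
  -device vfio-pci,host=01:00.1
```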
I'm using this; of course, adapt it as per your needs (it's an array):
You have to call this script with either
Hello, I have been following this issue since it was created, as a silent reader, and want to thank everyone who has provided so much information regarding this topic. I'd like to throw another layer into the pit regarding passing GPUs to Windows running inside Docker via this project. I would be highly interested in any information regarding not doing a full GPU passthrough, but instead splitting a GPU into vGPUs using the https://github.com/mbilker/vgpu_unlock-rs project (a detailed tutorial on how to do this with a Proxmox server can be found here: https://gitlab.com/polloloco/vgpu-proxmox) and then passing a vGPU to a specific Windows Docker container. Maybe someone has already tried this. It works like a charm on Proxmox with Windows VMs using, for example, enterprise GPUs like the Tesla M40 or Tesla P4. Thanks in advance.
Hi, I'm new to this thread and having a go at the config to get an NVIDIA card passed through to a Docker image (dockur/windows) and have it show up in the nested VM. I have the card showing up in nvidia-smi in the Docker container, and am about to do the passthrough from there to the Windows 11 VM. I did this by installing the NVIDIA container tools on the host, then passing through the GPU using Portainer and/or command-line switches in the docker run command (I don't use compose), then installing the NVIDIA drivers and the NVIDIA container toolkit in the Docker container. I just wanted to ask, as my server is headless: do I really need to add vfio-pci and/or Looking Glass on the Docker image? From the perspective of the Docker image, it is the only thing using the card... so can't I just forward the PCI device? There are other Docker images using the card for other purposes, but the Windows image will be the only one using it for 'display'.
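For reference, the host-to-container step described here is typically done like this (a sketch, assuming the NVIDIA Container Toolkit is installed on the host; the remaining flags follow the project README):

```bash
# Expose all NVIDIA GPUs to the container
docker run -it --rm --gpus all \
  --device /dev/kvm --cap-add NET_ADMIN -p 8006:8006 dockur/windows

# The toolkit injects nvidia-smi into the container, so this should list the card:
docker exec <container-name> nvidia-smi
```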
Hi @kroese,
Would it be possible to create a video teaching how to do "GPU Passthrough"? Do I need to have two video cards? Thanks!
Hello all, I'm not sure if anyone is still curious how to pass through a GPU to the VM directly on an Unraid system, but if you are, I have a quick-hit guide below.

NOTES: This is an Unraid setup with NVIDIA. I have 2 GPUs on bare metal (1080 & 3060) and am DEDICATING one (the 3060) to the Windows-inside-Docker VM. Mileage may vary.

On the Unraid terminal, as root:
```
lspci -nnk | grep -i -A 3 'VGA'
```

Output:

```
03:00.0 VGA compatible controller [0300]: NVIDIA Corporation GA106 [GeForce RTX 3060] [10de:2503] (rev a1)
03:00.1 Audio device [0403]: NVIDIA Corporation GA106 High Definition Audio Controller [10de:228e] (rev a1)
81:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1080] [10de:1b80] (rev a1)
```

Make note of the device you want to add to the VM; in my case it's the 3060 at 03:00.0 (with its audio function at 03:00.1).
**Unraid Docker Setup: How to Add the 3 Device Types & 1 Variable**

The variable, as you might expect, is well... variable. Change it based on your system output above; in my case it's built with the 3060's IDs. If I wanted to use the 1080, it'd be built with that card's IDs instead.

**Save the Docker template; it will set up successfully but NOT start successfully - this is expected!** You should see an error in the logs stating that it can't access the vfio device, etc.

On the Unraid terminal, as root:
NOTE: at this point the kernel driver in use is still `nvidia`.
**Time to unbind NVIDIA & bind to VFIO-PCI:**

Based on the output above, my GPU video device ID is 03:00.0 with vendor:device ID 10de:2503, and my GPU audio device ID is 03:00.1 with vendor:device ID 10de:228e. See the sketch below for a typical command sequence.
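A typical unbind/bind sequence for these IDs, using the standard sysfs interface (a sketch; run as root and substitute your own addresses and vendor:device IDs):

```bash
# Make sure vfio-pci is loaded
modprobe vfio-pci

# Unbind both GPU functions (video + audio) from their current drivers
echo "0000:03:00.0" > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
echo "0000:03:00.1" > /sys/bus/pci/devices/0000:03:00.1/driver/unbind

# Tell vfio-pci to claim the vendor:device IDs noted above
echo "10de 2503" > /sys/bus/pci/drivers/vfio-pci/new_id
echo "10de 228e" > /sys/bus/pci/drivers/vfio-pci/new_id
```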
NOTE: the kernel driver in use is now `vfio-pci`.

**Before:**

**After:**

**Start the Docker container & see if it boots:**

Let it run through the install; once you hit the desktop, type

**ENDING NOTES:**

The changes made in the "unbind NVIDIA & bind to VFIO-PCI" section stay in effect only until a reboot of the host (Unraid); after a reboot you will need to redo that section. You can, however, run a script on startup or on demand to help automate the process; I can add that to this guide if enough people ask for it. Hope this helps & I didn't miss anything :)

ALSO, A HUGE THANK YOU FOR THIS PROJECT, IT'S EXACTLY WHAT I NEEDED!!!!!
I assume that you were running the Docker container from a Linux host machine, not a Windows host machine, right? :)
To those interested in this: I've written a script that automatically binds and unbinds: #845. It's still a work in progress, so testers would be helpful. The current version needs to be run in User Scripts (with modifications), as I still need to find a way to run the script pre-start and post-stop of the container. You will still need to set up the variables, except the arguments. Once I have a GPU for my server I can test further.
Having an issue with my headless server running this Docker container with an Intel HD 530 GPU. However, it restarts with an error. This is my docker-compose file:

My HD 530 iGPU is the only device in IOMMU group 0, too. My GRUB:

dmesg shows the IOMMU enabled and working, with devices assigned to the specified groups. And finally, some information.

Modules loaded:
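For anyone with a similar Intel setup, the GRUB and module configuration referred to here typically looks like this (a sketch of a common configuration, not necessarily this poster's exact one):

```bash
# /etc/default/grub on an Intel host (then run update-grub and reboot):
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# VFIO modules typically loaded for passthrough:
modprobe vfio vfio_iommu_type1 vfio_pci
```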
Is it necessary to bind the kernel driver to VFIO?
I spent like 28 hours without sleep trying to figure out how to make this work, reading this thread and searching Google. It seems to differ quite a bit for every vendor, and even between dGPU and iGPU. Shouldn't we create a new issue thread per device group, or revive something like #222, rather than pointing everything into #22? The things needed to make each vendor work seem drastically different, and pointing every GPU issue to this thread just makes things more complicated, as GitHub now starts hiding messages, so search doesn't even work until we expand all the hidden messages.
So here's an update for AMD iGPUs: for the last few days I've been writing a sort of noob's guide on how to make passthrough work for the physical-display-output scenario, for gaming on it (incomplete tutorial here).
While I was at the office I received an email from ASRock tech support - and guess what, they made me an ACS/AER-enabled BIOS!
So now, without the risky ACS override patch, we know how to do it safely:
which brings the AMD iGPU's messed-up IOMMU grouping from this:
to this:
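For anyone wanting to check their own grouping, a common loop for listing IOMMU groups is the following (a sketch, assuming standard sysfs paths):

```bash
# List every PCI device with its IOMMU group, to verify the iGPU's isolation
for d in /sys/kernel/iommu_groups/*/devices/*; do
  g=${d#*/iommu_groups/}; g=${g%%/*}
  printf 'IOMMU group %s: %s\n' "$g" "$(lspci -nns "${d##*/}")"
done
```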
After the relevant settings (env & device setup, etc.) I discovered that it's seemingly just impossible to pass through AMD's Vega iGPU, unlike Intel's iGPUs.
But the funny thing is, there are rare cases where it actually works. However, I don't understand the configuration above; I need to study this more.
@pmanaseri Thank you very much for your proposal. I also want to know: is there a risk of not being able to restore the connection back to the host after binding the NVIDIA graphics card to vfio-pci? Can the host still use the device while it is bound?
@Mick4994 The images from your last post are private. Can you make them available?
Hey, I would like to know if this container is capable of passing through a GPU to the VM inside the container. I have looked into the upstream Docker container, qemus/qemu-docker, which seems to have some logic for GPU passthrough; some documentation for this here would be great, if it is possible.
I also tried to connect to the container using Virtual Machine Manager, but unfortunately I wasn't able to connect to it. Any idea why?