Hey, I have two RX580s too!
I'm trying to wrap my head around this problem, and I have no idea why a Permission denied would pop up when the script is run at boot. I'm not familiar with the boot process, but I would assume sh runs as root there.
Have you tried following the rest of the guide and just skipping the actual VFIO passthrough step? How I found this out is a long story, but apparently on my system libvirt is able to "yank" the GPU from the host and give it to the vfio-pci driver while the system is running, as long as the libvirt domain has the proper <hostdev> entries in it (or, if you're using virt-manager, you have the PCI 0000:05:00.0 and PCI 0000:05:00.1 thingies set up).
I'm not sure that's supposed to work in general, but if it doesn't on your machine, I don't think your system will explode; worst case, you just end up with both GPUs working for the host at boot.
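For reference, the relevant part of my domain XML looks roughly like this (from memory, so double-check the syntax; the bus/slot/function values below just mirror the 0000:05:00.x example and have to match your actual addresses). The managed='yes' part is what lets libvirt do the yanking:

<!-- one <hostdev> per function of the GPU, e.g. 0000:05:00.0... -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
  </source>
</hostdev>
<!-- ...and 0000:05:00.1, usually the HDMI audio function -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
  </source>
</hostdev>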
The guide says this:
[...] due to their size and complexity, GPU drivers do not tend to support dynamic rebinding very well, so you cannot just have some GPU you use on the host be transparently passed to a virtual machine without having both drivers conflict with each other. Because of this, it is generally advised to bind those placeholder drivers manually before starting the virtual machine, in order to stop other drivers from attempting to claim it.
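In case you're curious, the manual binding the guide is talking about boils down to a sysfs dance along these lines (an untested sketch I'm writing from memory; run it as root, 0000:05:00.0 is a placeholder, and you'd repeat it for every function in the group):

# Make sure the placeholder driver is actually loaded.
modprobe vfio-pci
dev=0000:05:00.0
# Tell the kernel that only vfio-pci may claim this device,
echo vfio-pci > /sys/bus/pci/devices/"$dev"/driver_override
# unbind whatever driver currently owns it (if any),
[ -e /sys/bus/pci/devices/"$dev"/driver ] && echo "$dev" > /sys/bus/pci/devices/"$dev"/driver/unbind
# and re-probe it, which now picks vfio-pci.
echo "$dev" > /sys/bus/pci/drivers_probe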
The con is that after the VM shuts down, you'd most likely want to reattach the GPU to the host, along these lines:
# You need to do this for all the devices in the IOMMU group,
# and it has to run as root (the sysfs files are root-writable only).
pcidev0= # Your passed-through GPU, something like 0000:05:00.0
pcidev1= # 0000:05:00.1
pcidev2= # ...
pcidevN= # 0000:05:00.N

# Removes a PCI device from the host's view entirely.
function rm_pci {
    echo 'Removing PCI device '"$1"
    echo -n 1 >/sys/bus/pci/devices/"$1"/remove
}

rm_pci "$pcidev0"
rm_pci "$pcidev1"
# ...
rm_pci "$pcidevN"

# Rescanning makes the kernel rediscover the removed devices, and this
# time the regular host drivers get a chance to claim them.
echo 'Rescanning PCI devices'
echo -n 1 >/sys/bus/pci/rescan
This is because I've found out the hard way that a GPU managed by the vfio-pci module may or may not spin its fans when it heats up, and if the VFIO GPU is sitting in front of the other one's fans... y'know, heat.
(consider the first paragraph of this comment)
If you manage to give the GPU back to the host via the pseudo-scriptlet above, the actual GPU driver will be able to do its job with the fans; the alternatives are rebooting the system, or just hoping that the main GPU doesn't blow 300 °C air onto the VFIO one while the latter refuses to acknowledge it.
Indeed, although don't just copy-paste the snippet I wrote: I put it together on the spot without testing it, so you'll have to tweak it to run the function for the PCI device(s) in the IOMMU group of the GPU you want to pass through. In my case it's just 0000:03:00.0 and 0000:03:00.1; since the GPUs are the same, you will probably also need only two.
You can procrastinate on doing all that; I'm fairly certain nothing will blow up.
Unfortunately my setup is very complex: I hacked together a framework of Zsh scripts that use libvirt hooks, otherwise I would just copy them here.
I didn't mean to say that you must use 0000:05:00.0 specifically, only that you should follow the rest of the guide without the boot-time script. I'm not sure about identifying the correct device off-hand - I did that a long time ago - but I am pretty sure the AL Wiki guide has a way to list GPUs.
The error you get is self-explanatory: along with 0000:05:00.0 (or whatever your device is), you must also list the other devices in the same IOMMU group; identifying those goes hand in hand with identifying the PCI device(s) you want to pass through. Something like the loop below should show you everything at once.
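Off the top of my head (so double-check against the wiki), a loop like this dumps every IOMMU group along with lspci's description of each device, which lets you spot both GPUs and whatever else shares their groups:

#!/bin/bash
shopt -s nullglob
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        # lspci -nns prints the device name plus [vendor:device] IDs
        echo -e "\t$(lspci -nns "${dev##*/}")"
    done
done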
EDIT: I skimmed through the guide, and apparently it's extremely un-straightforward; I'll try to make a director's cut.
As to identifying which of the two is which GPU, your only safe bet is to determine which monitor is connected to which PCI device somehow - I never figured out a clean way to do that, so I went with trial and error, and hard resets.
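That said, if you feel like poking at sysfs: each connector under /sys/class/drm reports whether a monitor is plugged in, and each card symlinks back to its PCI device, so an untested sketch like this might save you a few hard resets:

# For every connector with a monitor attached, print the PCI address
# of the GPU it belongs to (untested, sysfs layout from memory).
for conn in /sys/class/drm/card*-*; do
    [ "$(cat "$conn/status")" = connected ] || continue
    card=${conn%%-*} # strip the connector suffix: card0-DP-1 -> card0
    echo "$conn -> $(basename "$(readlink -f "$card/device")")"
done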