While attempting to isolate vfio-pci problems between two different guest instances, creating a second guest (with the existing guest shut down) resulted in:
Aug 09 12:43:23 grit libvirtd[6716]: internal error: Device 0000:01:00.3 is already in use
Aug 09 12:43:23 grit libvirtd[6716]: internal error: Device 0000:01:00.3 is already in use
Aug 09 12:43:23 grit libvirtd[6716]: Failed to allocate PCI device list: internal error: Device 0000:01:00.3 is already in use
Compiled against library: libvirt 6.1.0
Using library: libvirt 6.1.0
Using API: QEMU 6.1.0
Running hypervisor: QEMU 4.2.1
(fc32 default install)
The upstream code also seems to test device definitions rather than active uses of the PCI device.
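To illustrate what I mean, here is a simplified, standalone sketch of that address-based lookup (the struct and function names are made up for clarity; this is not the actual libvirt code):

#include <stdbool.h>
#include <stddef.h>

/* Stand-in for a PCI address as tracked per device. */
struct pci_addr {
    unsigned int domain, bus, slot, function;
};

/* As far as I can tell, the upstream check amounts to this: the new
 * device is rejected whenever its address already appears in the
 * list, regardless of whether the guest referencing it is running. */
static bool
addr_already_listed(const struct pci_addr *list, size_t count,
                    const struct pci_addr *dev)
{
    for (size_t i = 0; i < count; i++) {
        if (list[i].domain == dev->domain &&
            list[i].bus == dev->bus &&
            list[i].slot == dev->slot &&
            list[i].function == dev->function)
            return true;   /* reported as "already in use" */
    }
    return false;
}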
My potentially naive patch to correct this (but not the failing test cases) would be:
diff --git a/src/util/virpci.c b/src/util/virpci.c
index 47c671daa0..a00c5e6f44 100644
--- a/src/util/virpci.c
+++ b/src/util/virpci.c
@@ -1597,7 +1597,7 @@ int
 virPCIDeviceListAdd(virPCIDeviceListPtr list,
                     virPCIDevicePtr dev)
 {
-    if (virPCIDeviceListFind(list, dev)) {
+    if (virPCIDeviceBusContainsActiveDevices(dev, list)) {
         virReportError(VIR_ERR_INTERNAL_ERROR,
                        _("Device %s is already in use"), dev->name);
         return -1;
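Conceptually, the behaviour I'm after is something like the following standalone sketch (made-up names again, and not a claim about what virPCIDeviceBusContainsActiveDevices does internally; just the semantics I'd like at the add point): only refuse the add when the matching entry belongs to a guest that is currently running.

#include <stdbool.h>
#include <stddef.h>

/* Same address shape as in the sketch above. */
struct pci_addr {
    unsigned int domain, bus, slot, function;
};

/* Hypothetical record: the address plus whether the guest that
 * references it is actually running right now. */
struct tracked_dev {
    struct pci_addr addr;
    bool guest_is_running;
};

/* Desired behaviour: an address that only appears in a shut-off
 * guest's definition should not block starting the new guest. */
static bool
addr_actively_in_use(const struct tracked_dev *list, size_t count,
                     const struct pci_addr *dev)
{
    for (size_t i = 0; i < count; i++) {
        if (list[i].addr.domain == dev->domain &&
            list[i].addr.bus == dev->bus &&
            list[i].addr.slot == dev->slot &&
            list[i].addr.function == dev->function &&
            list[i].guest_is_running)
            return true;
    }
    return false;
}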
Is this too simplistic, or is it an undesirable feature request/implementation?
I'd be more than grateful if someone could carry this through, as I'm unsure when I'll be able to find the time for it myself.