On Fri, Apr 24, 2020 at 9:30 PM Mauricio Tavares <raubvogel@xxxxxxxxx> wrote:
>
> On Fri, Apr 24, 2020 at 4:35 PM Peter Crowther
> <peter.crowther@xxxxxxxxxxxx> wrote:
> >
> > On Fri, 24 Apr 2020 at 21:10, Mauricio Tavares <raubvogel@xxxxxxxxx> wrote:
> >>
> >> Let's say I have libvirt
> >>
> >> [root@vmhost2 ~]# virsh version
> >> [...]
> >>
> >> Running hypervisor: QEMU 2.12.0
> >> [root@vmhost2 ~]#
> >> [...]
> >>
> >> When I try to start the guest I get the following error message:
> >>
> >> [root@vmhost2 ~]# virsh start testfedora
> >> error: Failed to start domain testfedora
> >> error: internal error: qemu unexpectedly closed the monitor:
> >> 2020-04-24T20:01:35.341020Z qemu-kvm: -device
> >> vfio-pci,host=01:00.0,id=hostdev0,bus=pci.8,addr=0x0: vfio error:
> >> 0000:01:00.0: failed to setup INTx fd: Operation not permitted
> >>
> >> [root@vmhost2 ~]#
> >>
> >> Why is it telling me that it is not permitted?
> >>
> > The guest will be running as qemu on the host. Does qemu have appropriate
> > permissions on the host, and does that include any hardening like SELinux
> > that you're running?
> >
> I tried with selinux in permissive mode to see if it made a
> difference. Not much.
>
> [root@vmhost2 ~]# getenforce
> Permissive
> [root@vmhost2 ~]# virsh start testfedora
> error: Failed to start domain testfedora
> error: internal error: qemu unexpectedly closed the monitor:
> 2020-04-25T00:43:36.621246Z qemu-kvm: -device
> vfio-pci,host=01:00.0,id=hostdev0,bus=pci.8,addr=0x0: vfio error:
> 0000:01:00.0: failed to setup INTx fd: Operation not permitted
>
> [root@vmhost2 ~]#
>
> For the fun of it, I swapped that card with another one (same speed,
> number of ports, different brand), so it is in the very same PCI slot:
>
> [root@vmhost2 ~]# virsh nodedev-dumpxml pci_0000_01_00_0
> <device>
>   <name>pci_0000_01_00_0</name>
>   <path>/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0</path>
>   <parent>pci_0000_00_01_0</parent>
>   <driver>
>     <name>vfio-pci</name>
>   </driver>
>   <capability type='pci'>
>     <domain>0</domain>
>     <bus>1</bus>
>     <slot>0</slot>
>     <function>0</function>
>     <product id='0x4000' />
>     <vendor id='0x19ee'>Netronome Systems, Inc.</vendor>
>     <capability type='virt_functions' maxCount='64'/>
>     <iommuGroup number='1'>
>       <address domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
>       <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
>     </iommuGroup>
>     <pci-express>
>       <link validity='cap' port='0' speed='8' width='8'/>
>       <link validity='sta' speed='2.5' width='8'/>
>     </pci-express>
>   </capability>
> </device>
>
> [root@vmhost2 ~]#
>
> And it starts without an issue:
>
> [root@vmhost2 ~]# virsh start testfedora
> Domain testfedora started
>
> [root@vmhost2 ~]#
>
> Inside the guest:
>
> [root@testfedora ~]# dmesg |grep -i netronome
> [ 12.327316] nfp: NFP PCIe Driver, Copyright (C) 2014-2017 Netronome Systems
> [ 12.335036] nfp 0000:07:00.0: Netronome Flow Processor NFP4000/NFP5000/NFP6000 PCIe Card Probe
> [root@testfedora ~]#
>
> so I do not know what is going on.

My last statement can be translated to "I am doing PCI passthrough, which
to me means I am passing the entire card -- whatever it is -- to the
guest." Just for the sake of argument, I also created a centos8 guest and
had the same outcome.

Who actually does the PCI passthrough: libvirt or qemu? I just want to
know where I should put my efforts.
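For context, this is roughly the kind of host-side sanity check I have in
mind before starting the guest -- just a sketch, assuming the card is still
at 0000:01:00.0 and in IOMMU group 1 as shown in the nodedev-dumpxml above
(adjust the address and group otherwise), and showing only the commands, not
their output:

  # which kernel driver is bound to the card (the dumpxml above shows vfio-pci)
  lspci -nnk -s 01:00.0

  # what else shares IOMMU group 1 with the card
  ls /sys/kernel/iommu_groups/1/devices/

  # any vfio/IOMMU complaints logged by the kernel right after the failed start
  dmesg | grep -iE 'vfio|iommu'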
Also, for the fun of it, I decided to create a docker container on the same
host and pass the card to it. That seemed to work better:

[root@ce3077ee015c /]# ls /sys/bus/pci/devices/0000\:01\:00.0/
aer_dev_correctable       infiniband_srp/     pools
aer_dev_fatal             infiniband_verbs/   power/
aer_dev_nonfatal          iommu/              remove
ari_enabled               iommu_group/        rescan
broken_parity_status      irq                 reset
class                     local_cpulist       resource
config                    local_cpus          resource0
consistent_dma_mask_bits  max_link_speed      resource2
current_link_speed        max_link_width      resource2_wc
current_link_width        mlx4_port1          revision
d3cold_allowed            mlx4_port1_mtu      rom
device                    mlx4_port2          subsystem/
dma_mask_bits             mlx4_port2_mtu      subsystem_device
driver/                   modalias            subsystem_vendor
driver_override           msi_bus             uevent
enable                    msi_irqs/           vendor
infiniband/               net/                vpd
infiniband_mad/           numa_node
[root@ce3077ee015c /]# cat /sys/bus/pci/devices/0000\:01\:00.0/device
0x1003
[root@ce3077ee015c /]#

Of course the real test is to configure the card for use, or to program it,
but so far I have been more successful with the container than with the
guest, which makes me even more confused.

> > Cheers,
> >
> > - Peter
> >
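PS: since I did not paste the docker command above, by "pass the card to it"
I mean starting the container with the host's InfiniBand device nodes handed
in, roughly along these lines. The device node names (uverbs0, rdma_cm) are
from memory and may differ on this host, so treat this as an illustration
rather than the exact command I ran:

  docker run -it --rm \
      --device=/dev/infiniband/uverbs0 \
      --device=/dev/infiniband/rdma_cm \
      --cap-add=IPC_LOCK \
      centos:8 /bin/bash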