On 30 December 2022 19:20:42 GMT, Bjorn Helgaas <helgaas@xxxxxxxxxx> wrote:
>Hi Major,
>
>Thanks for the report!
>
>On Wed, Dec 21, 2022 at 08:38:46PM +0530, Major Saheb wrote:
>> I have an Ubuntu guest running on KVM, and I am passing it 10 QEMU
>> emulated NVMe drives:
>>
>>   <iommu model='intel'>
>>     <driver intremap='on' eim='on'/>
>>   </iommu>
>>   <qemu:arg value='pcie-root-port,id=pcie-root-port%d,slot=%d'/>
>>   <qemu:arg value='nvme,drive=NVME%d,serial=%s_%d,id=NVME%d,bus=pcie-root-port%d'/>
>>
>> Kernel:
>> Linux node-1 5.15.0-56-generic #62-Ubuntu SMP ----- x86_64 x86_64
>> x86_64 GNU/Linux
>>
>> Kernel command line:
>> intel_iommu=on
>>
>> I have attached these drives to vfio-pci.
>>
>> When I try to send I/O commands to these drives via a userspace NVMe
>> driver using VFIO, I get:
>>
>> [ 1474.752590] DMAR: DRHD: handling fault status reg 2
>> [ 1474.754463] DMAR: [DMA Read NO_PASID] Request device [0b:00.0]
>> fault addr 0xffffe000 [fault reason 0x06] PTE Read access is not set
>>
>> Can someone explain to me what's happening here?
>
>I'm not an IOMMU expert, but I think the device (0b:00.0, I assume an
>NVMe device) did a DMA read to 0xffffe000 (which looks suspiciously
>like a null pointer dereference (-8192 off a null pointer)), and the
>IOMMU had no mapping for that address.

We tend to assign I/O virtual addresses from the top of the 4GiB
address space downwards, so that could just be the first or second
page mapped.

>Can you point us to the userspace nvme driver? I'm not a VFIO expert
>either, but I assume it uses something like a VFIO_IOMMU_MAP_DMA ioctl
>to map buffers and get IOVAs to give to the device?
>
>Can you collect a dmesg log and output of "sudo lspci -vv" for your
>guest? Is this something that worked in the past and broke on a newer
>kernel? It looks like you're using a 5.15 kernel; have you tried any
>newer kernels?
>
>Bjorn