Re: [RFC 00/18] vfio: Adopt iommufd

On 2022/5/10 2:51 PM, Eric Auger wrote:
Hi Zhangfei,

On 5/10/22 05:17, Yi Liu wrote:
Hi Zhangfei,

On 2022/5/9 22:24, Zhangfei Gao wrote:
Hi, Alex

On 2022/4/27 12:35 AM, Alex Williamson wrote:
On Tue, 26 Apr 2022 12:43:35 +0000
Shameerali Kolothum Thodi <shameerali.kolothum.thodi@xxxxxxxxxx> wrote:

-----Original Message-----
From: Eric Auger [mailto:eric.auger@xxxxxxxxxx]
Sent: 26 April 2022 12:45
To: Shameerali Kolothum Thodi
<shameerali.kolothum.thodi@xxxxxxxxxx>; Yi
Liu <yi.l.liu@xxxxxxxxx>; alex.williamson@xxxxxxxxxx;
cohuck@xxxxxxxxxx;
qemu-devel@xxxxxxxxxx
Cc: david@xxxxxxxxxxxxxxxxxxxxx; thuth@xxxxxxxxxx;
farman@xxxxxxxxxxxxx;
mjrosato@xxxxxxxxxxxxx; akrowiak@xxxxxxxxxxxxx; pasic@xxxxxxxxxxxxx;
jjherne@xxxxxxxxxxxxx; jasowang@xxxxxxxxxx; kvm@xxxxxxxxxxxxxxx;
jgg@xxxxxxxxxx; nicolinc@xxxxxxxxxx; eric.auger.pro@xxxxxxxxx;
kevin.tian@xxxxxxxxx; chao.p.peng@xxxxxxxxx; yi.y.sun@xxxxxxxxx;
peterx@xxxxxxxxxx; Zhangfei Gao <zhangfei.gao@xxxxxxxxxx>
Subject: Re: [RFC 00/18] vfio: Adopt iommufd
[...]
[1] https://lore.kernel.org/kvm/0-v1-e79cd8d168e8+6-iommufd_jgg@xxxxxxxxxx/
[2] https://github.com/luxis1999/iommufd/tree/iommufd-v5.17-rc6
[3] https://github.com/luxis1999/qemu/tree/qemu-for-5.17-rc6-vm-rfcv1
Hi,

I had a go with the above branches on our ARM64 platform, trying to pass through a VF dev, but Qemu reports an error as below:

[    0.444728] hisi_sec2 0000:00:01.0: enabling device (0000 -> 0002)
qemu-system-aarch64-iommufd: IOMMU_IOAS_MAP failed: Bad address
qemu-system-aarch64-iommufd: vfio_container_dma_map(0xaaaafeb40ce0,
0x8000000000, 0x10000, 0xffffb40ef000) = -14 (Bad address)
I think this happens for the dev BAR addr range. I haven't debugged the kernel yet to see where it actually reports that.
Does it prevent your assigned device from working? I have such errors too, but this is a known issue: P2P DMA is not supported yet.
Yes, the basic tests are all good so far. I am still not very clear on how it works if the map() fails, though. It looks like it fails in:

iommufd_ioas_map()
  iopt_map_user_pages()
    iopt_map_pages()
    ...
      pfn_reader_pin_pages()
So does it mean it just works because the page is resident?
No, it just means that you're not triggering any accesses that require
peer-to-peer DMA support.  Any sort of test where the device is only
performing DMA to guest RAM, which is by far the standard use case,
will work fine.  This also doesn't affect vCPU access to BAR space.
It's only a failure of the mappings of the BAR space into the IOAS,
which is only used when a device tries to directly target another
device's BAR space via DMA.  Thanks,
I also get this issue when trying to add a prereg listener:

+    container->prereg_listener = vfio_memory_prereg_listener;
+    memory_listener_register(&container->prereg_listener,
+                            &address_space_memory);

host kernel log:
iommufd_ioas_map 1 iova=8000000000, iova1=8000000000,
cmd->iova=8000000000, cmd->user_va=9c495000, cmd->length=10000
iopt_alloc_area input area=859a2d00 iova=8000000000
iopt_alloc_area area=859a2d00 iova=8000000000
pin_user_pages_remote rc=-14

qemu log:
vfio_prereg_listener_region_add
iommufd_map iova=0x8000000000
qemu-system-aarch64: IOMMU_IOAS_MAP failed: Bad address
qemu-system-aarch64: vfio_dma_map(0xaaaafb96a930, 0x8000000000,
0x10000, 0xffff9c495000) = -14 (Bad address)
qemu-system-aarch64: (null)
double free or corruption (fasttop)
Aborted (core dumped)

With a hack ignoring address 0x8000000000 in map and unmap, the kernel can boot.
Do you know if the iova 0x8000000000 is guest RAM or MMIO? Currently, the iommufd kernel part doesn't support mapping device BAR MMIO. This is a known gap.
In the qemu arm virt machine this indeed matches the PCI MMIO region.

Thanks Yi and Eric.
Then I will wait for the updated iommufd kernel to support the PCI MMIO region.

Another question: how to get the iommu_domain in the ioctl?

qemu can get container->ioas_id, and the kernel can get the ioas via that ioas_id. But how to get the domain?

Currently I am hacking with ioas->iopt.next_domain_id, which only ever increases:
domain = xa_load(&ioas->iopt.domains, ioas->iopt.next_domain_id - 1);

Any idea?
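For what it's worth, a slightly less fragile variant of the same hack would walk the xarray and take the last populated entry rather than trusting next_domain_id - 1, which goes stale once a domain is removed and the indices become sparse. A kernel-context sketch (not runnable standalone; it assumes the RFC's struct iommufd_ioas/io_pagetable layout, and the lock name is an assumption):

```c
/* Sketch only: walk ioas->iopt.domains instead of indexing by
 * next_domain_id - 1.  xa_for_each() is the stock xarray iterator;
 * domains_rwsem is assumed to be the lock protecting the xarray. */
static struct iommu_domain *ioas_last_domain(struct iommufd_ioas *ioas)
{
    struct iommu_domain *domain, *found = NULL;
    unsigned long index;

    down_read(&ioas->iopt.domains_rwsem);
    xa_for_each(&ioas->iopt.domains, index, domain)
        found = domain;
    up_read(&ioas->iopt.domains_rwsem);

    return found;
}
```

Of course this is still a hack; if the ioctl genuinely needs a specific domain, a proper interface for looking it up would be the real answer.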

Thanks


