RE: [PATCH v3 4/8] iommufd: Add iommufd fault object

> -----Original Message-----
> From: Lu Baolu <baolu.lu@xxxxxxxxxxxxxxx>
> Sent: Monday, January 22, 2024 7:39 AM
> To: Jason Gunthorpe <jgg@xxxxxxxx>; Kevin Tian <kevin.tian@xxxxxxxxx>; Joerg
> Roedel <joro@xxxxxxxxxx>; Will Deacon <will@xxxxxxxxxx>; Robin Murphy
> <robin.murphy@xxxxxxx>; Jean-Philippe Brucker <jean-philippe@xxxxxxxxxx>;
> Nicolin Chen <nicolinc@xxxxxxxxxx>; Yi Liu <yi.l.liu@xxxxxxxxx>; Jacob Pan
> <jacob.jun.pan@xxxxxxxxxxxxxxx>; Joel Granados <j.granados@xxxxxxxxxxx>
> Cc: iommu@xxxxxxxxxxxxxxx; virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx; linux-
> kernel@xxxxxxxxxxxxxxx; Lu Baolu <baolu.lu@xxxxxxxxxxxxxxx>
> Subject: [PATCH v3 4/8] iommufd: Add iommufd fault object
> 
> An iommufd fault object provides an interface for delivering I/O page
> faults to user space. These objects are created and destroyed by user
> space, and they can be associated with or dissociated from hardware page
> table objects during page table allocation or destruction.
> 
> User space interacts with the fault object through a file interface. This
> interface offers a straightforward and efficient way for user space to
> handle page faults. It allows user space to read fault messages
> sequentially and respond to them by writing to the same file. The file
> interface supports reading messages in poll mode, so it's recommended that
> user space applications use io_uring to enhance read and write efficiency.
> 
> A fault object can be associated with any iopf-capable iommufd_hw_pgtable
> during the pgtable's allocation. All I/O page faults triggered by devices
> when accessing the I/O addresses of an iommufd_hw_pgtable are routed
> through the fault object to user space. Similarly, user space's responses
> to these page faults are routed back to the iommu device driver through
> the same fault object.
> 
> Signed-off-by: Lu Baolu <baolu.lu@xxxxxxxxxxxxxxx>

[...]

> +static __poll_t iommufd_fault_fops_poll(struct file *filep,
> +					struct poll_table_struct *wait)
> +{
> +	struct iommufd_fault *fault = filep->private_data;
> +	__poll_t pollflags = 0;
> +
> +	poll_wait(filep, &fault->wait_queue, wait);
> +	mutex_lock(&fault->mutex);
> +	if (!list_empty(&fault->deliver))
> +		pollflags = EPOLLIN | EPOLLRDNORM;
> +	mutex_unlock(&fault->mutex);
> +
> +	return pollflags;
> +}
> +
> +static const struct file_operations iommufd_fault_fops = {
> +	.owner		= THIS_MODULE,
> +	.open		= nonseekable_open,
> +	.read		= iommufd_fault_fops_read,
> +	.write		= iommufd_fault_fops_write,
> +	.poll		= iommufd_fault_fops_poll,
> +	.llseek		= no_llseek,
> +};

Hi

I am trying to enable Qemu vSVA support on ARM with this series.
I am using the io_uring APIs with the fault fd to handle the page faults
in Qemu.

Please find the implementation here[1]. This is still a work in progress 
and is based on Nicolin's latest nested Qemu branch.
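
The read side is set up roughly as below (a simplified sketch rather than the
exact code in [1]; the cqe handler that parses the fault record and forwards
it to the guest is omitted):

  /* Queue an async read for the next fault record from the fault fd */
  struct iommu_hwpt_pgfault *pgfault = g_new0(struct iommu_hwpt_pgfault, 1);

  sqe = io_uring_get_sqe(ring);
  io_uring_prep_read(sqe, hwpt->fault_fd, pgfault, sizeof(*pgfault), 0);
  io_uring_sqe_set_data(sqe, pgfault);
  io_uring_submit(ring);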

And I am running into a problem once the poll interface is added
for the fault fd in the kernel.

What I have noticed is that,
- The read interface works fine and I can receive struct iommu_hwpt_pgfault data.
- But once the Guest handles the page faults and returns the page response,
  the write to the fault fd never reaches the kernel. The sequence is like below,
 
  sqe = io_uring_get_sqe(ring);
  io_uring_prep_write(sqe, hwpt->fault_fd, resp, sizeof(*resp), 0);
  io_uring_sqe_set_data(sqe, resp);
  io_uring_submit(ring);
  ret = io_uring_wait_cqe(ring, &cqe); 
  ....
Please find the function here[2]
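
For reference, once the cqe for the write does arrive, I would expect to just
check cqe->res and mark it seen, e.g.,

  if (cqe->res == sizeof(*resp)) {
      /* kernel accepted the page response */
  }
  io_uring_cqe_seen(ring, cqe);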

However, the cqe wait above never returns, and the hardware times out without
receiving a page response. My understanding of io_uring's default behavior is
that it first tries to issue the sqe as non-blocking. But it looks like the
above write sequence ends up in the kernel's poll_wait() as well. I am not
sure how we can avoid that for the write.
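
Could the problem simply be that the fault fd is never reported as writable?
If io_uring falls back to poll-driven retry for the write, it will wait for
EPOLLOUT, which iommufd_fault_fops_poll() above never sets, and presumably the
wait queue is only woken when a new fault arrives. If that is the case, maybe
something like the below would be enough to let the write complete (untested,
just a guess):

static __poll_t iommufd_fault_fops_poll(struct file *filep,
					struct poll_table_struct *wait)
{
	struct iommufd_fault *fault = filep->private_data;
	/* Page responses can always be written, so always report EPOLLOUT */
	__poll_t pollflags = EPOLLOUT;

	poll_wait(filep, &fault->wait_queue, wait);
	mutex_lock(&fault->mutex);
	if (!list_empty(&fault->deliver))
		pollflags |= EPOLLIN | EPOLLRDNORM;
	mutex_unlock(&fault->mutex);

	return pollflags;
}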

Everything works fine if I comment out the poll for the fault_fd in the kernel.
But then of course Qemu ends up repeatedly reading the ring queue for
any pending page fault.

It might be something I am missing in my understanding of the io_uring APIs.
I just thought of checking with you whether you have any Qemu implementation
using the io_uring APIs to test this.

Any pointers on resolving this would also be appreciated.

Thanks,
Shameer
[1] https://github.com/hisilicon/qemu/tree/iommufd_vsmmu-02292024-vsva-wip
[2] https://github.com/hisilicon/qemu/blob/2b984fb5c692a03e6f5463d005670d2e2a2c7304/hw/arm/smmuv3.c#L1310





