RE: [iommufd 2/2] vfio/ap: validate iova during dma_unmap and trigger irq disable

> From: Liu, Yi L <yi.l.liu@xxxxxxxxx>
> Sent: Wednesday, November 23, 2022 9:49 PM
> 
> From: Matthew Rosato <mjrosato@xxxxxxxxxxxxx>
> 
> vfio_iommufd_bind() creates an access which has an unmap callback, which
> can be called immediately. So the dma_unmap() callback should tolerate
> unmaps that come before the emulated device is opened.
> 
> To achieve the above, vfio_ap_mdev_dma_unmap() needs to validate that the
> unmap request matches one or more of these stashed values before
> attempting unpins.
> 
> Currently, each mapped iova is stashed in its associated vfio_ap_queue;
> each stashed iova represents an IRQ that was enabled for a queue. Therefore,
> if a match is found, trigger IRQ disable for this queue to ensure that the
> underlying firmware will no longer try to use the associated pfn after
> the page is unpinned. IRQ disable will also handle the associated unpin.
> 
> Cc: Tony Krowiak <akrowiak@xxxxxxxxxxxxx>
> Cc: Halil Pasic <pasic@xxxxxxxxxxxxx>
> Cc: Jason Herne <jjherne@xxxxxxxxxxxxx>
> Signed-off-by: Matthew Rosato <mjrosato@xxxxxxxxxxxxx>
> Signed-off-by: Yi Liu <yi.l.liu@xxxxxxxxx>
> ---
>  drivers/s390/crypto/vfio_ap_ops.c | 24 +++++++++++++++++++++++-
>  1 file changed, 23 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/s390/crypto/vfio_ap_ops.c b/drivers/s390/crypto/vfio_ap_ops.c
> index bb7776d20792..62bfca2bbe6d 100644
> --- a/drivers/s390/crypto/vfio_ap_ops.c
> +++ b/drivers/s390/crypto/vfio_ap_ops.c
> @@ -1535,13 +1535,35 @@ static int vfio_ap_mdev_set_kvm(struct ap_matrix_mdev *matrix_mdev,
>  	return 0;
>  }
> 
> +static void unmap_iova(struct ap_matrix_mdev *matrix_mdev, u64 iova, u64 length)
> +{
> +	struct ap_queue_table *qtable = &matrix_mdev->qtable;
> +	u64 iova_pfn_end = (iova + length - 1) >> PAGE_SHIFT;
> +	u64 iova_pfn_start = iova >> PAGE_SHIFT;
> +	struct vfio_ap_queue *q;
> +	int loop_cursor;
> +	u64 pfn;
> +
> +	hash_for_each(qtable->queues, loop_cursor, q, mdev_qnode) {
> +		pfn = q->saved_iova >> PAGE_SHIFT;
> +		if (pfn >= iova_pfn_start && pfn <= iova_pfn_end) {
> +			vfio_ap_irq_disable(q);
> +			break;

Does this need a WARN_ON if the length is more than one page?

> +		}
> +	}
> +}
> +
>  static void vfio_ap_mdev_dma_unmap(struct vfio_device *vdev, u64 iova,
>  				   u64 length)
>  {
>  	struct ap_matrix_mdev *matrix_mdev =
>  		container_of(vdev, struct ap_matrix_mdev, vdev);
> 
> -	vfio_unpin_pages(&matrix_mdev->vdev, iova, 1);
> +	mutex_lock(&matrix_dev->mdevs_lock);
> +
> +	unmap_iova(matrix_mdev, iova, length);
> +
> +	mutex_unlock(&matrix_dev->mdevs_lock);
>  }
> 
>  /**
> --
> 2.34.1
