Re: [RFC v2 1/3] vfio: VFIO_IOMMU_CACHE_INVALIDATE


Hi Yi,

On 11/6/19 2:31 AM, Liu, Yi L wrote:
>> From: Alex Williamson [mailto:alex.williamson@xxxxxxxxxx]
>> Sent: Wednesday, November 6, 2019 6:42 AM
>> To: Liu, Yi L <yi.l.liu@xxxxxxxxx>
>> Subject: Re: [RFC v2 1/3] vfio: VFIO_IOMMU_CACHE_INVALIDATE
>>
>> On Fri, 25 Oct 2019 11:20:40 +0000
>> "Liu, Yi L" <yi.l.liu@xxxxxxxxx> wrote:
>>
>>> Hi Kevin,
>>>
>>>> From: Tian, Kevin
>>>> Sent: Friday, October 25, 2019 5:14 PM
>>>> To: Liu, Yi L <yi.l.liu@xxxxxxxxx>; alex.williamson@xxxxxxxxxx;
>>>> Subject: RE: [RFC v2 1/3] vfio: VFIO_IOMMU_CACHE_INVALIDATE
>>>>
>>>>> From: Liu, Yi L
>>>>> Sent: Thursday, October 24, 2019 8:26 PM
>>>>>
>>>>> From: Liu Yi L <yi.l.liu@xxxxxxxxxxxxxxx>
>>>>>
>>>>> When the guest "owns" the stage 1 translation structures, the
>>>>> host IOMMU driver has no knowledge of caching structure updates
>>>>> unless the guest invalidation requests are trapped and passed down to the host.
>>>>>
>>>>> This patch adds the VFIO_IOMMU_CACHE_INVALIDATE ioctl, which aims at
>>>>> propagating guest stage 1 IOMMU cache invalidations to the host.
>>>>>
>>>>> Cc: Kevin Tian <kevin.tian@xxxxxxxxx>
>>>>> Signed-off-by: Liu Yi L <yi.l.liu@xxxxxxxxxxxxxxx>
>>>>> Signed-off-by: Eric Auger <eric.auger@xxxxxxxxxx>
>>>>> Signed-off-by: Jacob Pan <jacob.jun.pan@xxxxxxxxxxxxxxx>
>>>>> ---
>>>>>  drivers/vfio/vfio_iommu_type1.c | 55 +++++++++++++++++++++++++++++++++++++++++
>>>>>  include/uapi/linux/vfio.h       | 13 ++++++++++
>>>>>  2 files changed, 68 insertions(+)
>>>>>
>>>>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>>>>> index 96fddc1d..cd8d3a5 100644
>>>>> --- a/drivers/vfio/vfio_iommu_type1.c
>>>>> +++ b/drivers/vfio/vfio_iommu_type1.c
>>>>> @@ -124,6 +124,34 @@ struct vfio_regions {
>>>>>  #define IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)	\
>>>>>  					(!list_empty(&iommu->domain_list))
>>>>>
>>>>> +struct domain_capsule {
>>>>> +	struct iommu_domain *domain;
>>>>> +	void *data;
>>>>> +};
>>>>> +
>>>>> +/* iommu->lock must be held */
>>>>> +static int
>>>>> +vfio_iommu_lookup_dev(struct vfio_iommu *iommu,
>>>>> +		      int (*fn)(struct device *dev, void *data),
>>>>> +		      void *data)
>>>>
>>>> 'lookup' usually means finding a device and then returning it. But the real
>>>> purpose here is to loop over all the devices within this container and
>>>> do something with each one. Does it make more sense to name it vfio_iommu_for_each_dev?
>>
>> +1
>>
>>> yep, I can replace it.
>>>
>>>>
>>>>> +{
>>>>> +	struct domain_capsule dc = {.data = data};
>>>>> +	struct vfio_domain *d;
>>> [...]
>>>>> @@ -2315,6 +2352,24 @@ static long vfio_iommu_type1_ioctl(void *iommu_data,
>>>>>
>>>>>  		return copy_to_user((void __user *)arg, &unmap, minsz) ?
>>>>>  			-EFAULT : 0;
>>>>> +	} else if (cmd == VFIO_IOMMU_CACHE_INVALIDATE) {
>>>>> +		struct vfio_iommu_type1_cache_invalidate ustruct;
>>>>
>>>> it's weird to name a variable 'ustruct'.
>>>
>>> Will fix it.
>>>
>>>>> +		int ret;
>>>>> +
>>>>> +		minsz = offsetofend(struct vfio_iommu_type1_cache_invalidate,
>>>>> +				    info);
>>>>> +
>>>>> +		if (copy_from_user(&ustruct, (void __user *)arg, minsz))
>>>>> +			return -EFAULT;
>>>>> +
>>>>> +		if (ustruct.argsz < minsz || ustruct.flags)
>>>>> +			return -EINVAL;
>>>>> +
>>>>> +		mutex_lock(&iommu->lock);
>>>>> +		ret = vfio_iommu_lookup_dev(iommu, vfio_cache_inv_fn,
>>>>> +					    &ustruct);
>>>>> +		mutex_unlock(&iommu->lock);
>>>>> +		return ret;
>>>>>  	}
>>>>>
>>>>>  	return -ENOTTY;
>>>>> diff --git a/include/uapi/linux/vfio.h b/include/uapi/linux/vfio.h
>>>>> index 9e843a1..ccf60a2 100644
>>>>> --- a/include/uapi/linux/vfio.h
>>>>> +++ b/include/uapi/linux/vfio.h
>>>>> @@ -794,6 +794,19 @@ struct vfio_iommu_type1_dma_unmap {
>>>>>  #define VFIO_IOMMU_ENABLE	_IO(VFIO_TYPE, VFIO_BASE + 15)
>>>>>  #define VFIO_IOMMU_DISABLE	_IO(VFIO_TYPE, VFIO_BASE + 16)
>>>>>
>>>>> +/**
>>>>> + * VFIO_IOMMU_CACHE_INVALIDATE - _IOWR(VFIO_TYPE, VFIO_BASE + 24,
>>
>> What's going on with these ioctl numbers?  AFAICT[1] we've used up through
>> VFIO_BASE + 21, this jumps to 24, the next patch skips to 27, then the last patch fills
>> in 28 & 29.  Thanks,
> 
> Hi Alex,
> 
> I rebased my patches on Eric's nested stage translation series, which also
> introduces IOCTLs. I should have handled this better; I'll sync with Eric to
> serialize the IOCTL numbers.
> 
> [PATCH v6 00/22] SMMUv3 Nested Stage Setup by Eric Auger
> https://lkml.org/lkml/2019/3/17/124

Feel free to choose your IOCTL numbers without worrying about my series.
I will adapt to yours if my work gets unblocked at some point.

Thanks

Eric
> 
> Thanks,
> Yi Liu
> 
>> Alex
>>
>> [1] git grep -h VFIO_BASE | grep "VFIO_BASE +" | grep -e ^#define | \
>>     awk '{print $NF}' | tr -d ')' | sort -u -n
> 
