Re: [PATCH v4 09/10] iommu: Make iommu_queue_iopf() more generic

Hi Jean,

On 8/30/2023 6:19 PM, Jean-Philippe Brucker wrote:
> On Wed, Aug 30, 2023 at 04:32:47PM +0530, Vasant Hegde wrote:
>> Tian, Baolu,
>>
>> On 8/30/2023 1:13 PM, Tian, Kevin wrote:
>>>> From: Baolu Lu <baolu.lu@xxxxxxxxxxxxxxx>
>>>> Sent: Saturday, August 26, 2023 4:01 PM
>>>>
>>>> On 8/25/23 4:17 PM, Tian, Kevin wrote:
>>>>>> +
>>>>>>   /**
>>>>>>    * iopf_queue_flush_dev - Ensure that all queued faults have been
>>>>>> processed
>>>>>>    * @dev: the endpoint whose faults need to be flushed.
>>>>> Presumably we also need a flush callback per domain given now
>>>>> the use of workqueue is optional then flush_workqueue() might
>>>>> not be sufficient.
>>>>>
>>>>
>>>> The iopf_queue_flush_dev() function flushes all pending faults from the
>>>> IOMMU queue for a specific device. It has no means to flush fault queues
>>>> outside of the iommu core.
>>>>
>>>> The iopf_queue_flush_dev() function is typically called when a domain is
>>>> detaching from a PASID. Hence it's necessary to flush the pending faults
>>>> from top to bottom. For example, iommufd should flush the pending faults
>>>> in its fault queues after detaching the domain from the PASID.
>>>>
>>>
>>> Is there an ordering problem? The last step of intel_svm_drain_prq()
>>> in the detaching path issues a set of descriptors to drain page requests
>>> and responses in hardware. It cannot complete unless all software queues
>>> are drained, and it's counter-intuitive to drain a software queue after
>>> the hardware draining has already completed.
>>>
>>> btw just flushing requests is probably insufficient in the iommufd case
>>> since the responses are received asynchronously. It requires an interface
>>> to drain both requests and responses (presumably with timeouts in case
>>> of a malicious guest which never responds) in the detach path.
>>>
>>> it's not a problem for SVA as responses are synchronously delivered after
>>> handling the mm fault. So it's fine not to touch it in this series, but
>>> certainly this area needs more work when moving to support iommufd. 😊
>>>
>>> btw why is iopf_queue_flush_dev() called only in the intel-iommu driver?
>>> Isn't it a common requirement for all SVA-capable drivers?
> 
> It's not needed by the SMMUv3 driver because it doesn't implement PRI yet,
> only the Arm-specific stall fault model where DMA transactions are held in
> the SMMU while waiting for the OS to handle IOPFs. Since a device driver
> must complete all DMA transactions before calling unbind(), with the stall
> model there are no pending IOPFs to flush on unbind(). PRI support with
> Stop Markers would add a call to iopf_queue_flush_dev() after flushing the
> SMMU PRI queue [2].
> 

Thanks for the explanation.
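
Just to make sure I follow the ordering you describe: the change in [2]
would roughly amount to something like the sketch below, where the driver
drains its hardware PRI queue first and only then asks the core to flush
the software workqueue. This is a sketch only; example_smmu_flush_priq()
is a made-up name, and I'm assuming iopf_queue_flush_dev() keeps its
current single struct device * argument.

        /* Hypothetical PRI shutdown path on the SMMU side (sketch only). */
        static void example_smmu_stop_pri(struct device *dev)
        {
                /* 1. Wait for Stop Markers and drain the hardware PRI queue. */
                example_smmu_flush_priq(dev);

                /* 2. Then flush the pending faults queued in the iommu core. */
                iopf_queue_flush_dev(dev);
        }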

> Moving the flush to the core shouldn't be a problem, as long as the driver
> gets a chance to flush the hardware queue first.

I am fine with keeping it as is. I can call iopf_queue_flush_dev() from the
AMD driver.
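
For AMD, I'm thinking of something along these lines in the
remove_dev_pasid() path. Again just a sketch: example_detach_pasid() and
example_drain_ppr_log() are hypothetical names for the hardware detach and
PPR-log drain steps, and I'm assuming iopf_queue_flush_dev() still takes
only the struct device *.

        static void example_amd_remove_dev_pasid(struct device *dev,
                                                 ioasid_t pasid)
        {
                /* Stop new faults: detach the domain from the PASID in hardware. */
                example_detach_pasid(dev, pasid);

                /* Drain page requests still sitting in the hardware PPR log. */
                example_drain_ppr_log(dev, pasid);

                /* Then flush the software IOPF workqueue in the iommu core. */
                iopf_queue_flush_dev(dev);

                /*
                 * A consumer above the core (e.g. iommufd) would flush its own
                 * fault queues after this, once such an interface exists.
                 */
        }

That would keep the hardware drain in the driver while the core only owns
the workqueue flush, which seems to match what you suggest above.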

-Vasant


> 
> Thanks,
> Jean
> 
> [2] https://jpbrucker.net/git/linux/commit/?h=sva/2020-12-14&id=bba76fb4ec631bec96f98f14a6cd13b2df81e5ce
> 
>>
>> I had the same question when we did the SVA implementation for the AMD
>> IOMMU [1]. Currently we call queue_flush from the remove_dev_pasid()
>> path. Since PASID can be enabled without ATS/PRI, I thought it was each
>> individual driver's responsibility. But looking at this series, does it
>> make sense to handle queue_flush in the core layer?
>>
>> [1]
>> https://lore.kernel.org/linux-iommu/20230823140415.729050-1-vasant.hegde@xxxxxxx/T/#t
>>
>> -Vasant
>>
>>



