Re: [PATCH v4 5/7] iommu/dma: Allow a single FQ in addition to per-CPU FQs

On Thu, 2023-01-19 at 16:55 +0100, Niklas Schnelle wrote:
> On Wed, 2023-01-04 at 13:05 +0100, Niklas Schnelle wrote:
> > In some virtualized environments, including s390 paged memory guests,
> > IOTLB flushes are used to update IOMMU shadow tables. Due to this, they
> > are much more expensive than in typical bare metal environments or
> > non-paged s390 guests. In addition, they may parallelize less well in
> > virtualized environments. This changes the trade-off for flushing IOVAs
> > such that minimizing the number of IOTLB flushes trumps any benefit of
> > cheaper queuing operations or increased parallelism.
> > 
> > In this scenario per-CPU flush queues pose several problems. Firstly,
> > per-CPU memory is often quite limited, prohibiting larger queues.
> > Secondly, collecting IOVAs per CPU but flushing via a global timeout
> > reduces the number of IOVAs flushed per timeout, especially on s390
> > where PCI interrupts may not be bound to a specific CPU.
> > 
> > Thus, let's introduce a single flush queue mode, IOMMU_DOMAIN_DMA_SQ,
> > that reuses the same queue logic but only allocates a single global
> > queue, allowing larger batches of IOVAs to be freed at once and with
> > larger timeouts. This allows the common IOVA flushing code to more
> > closely resemble the global flush behavior of s390's previous internal
> > DMA API implementation.
> > 
> > As we now support two different variants of flush queues, rename the
> > existing __IOMMU_DOMAIN_DMA_FQ to __IOMMU_DOMAIN_DMA_LAZY to indicate
> > the general case of having a flush queue, and introduce separate
> > __IOMMU_DOMAIN_DMA_PERCPU_Q and __IOMMU_DOMAIN_DMA_SINGLE_Q bits to
> > indicate the two queue variants.
> > 
> > Link: https://lore.kernel.org/linux-iommu/3e402947-61f9-b7e8-1414-fde006257b6f@xxxxxxx/
> > Signed-off-by: Niklas Schnelle <schnelle@xxxxxxxxxxxxx>
> > ---
> > v2 -> v3:
> > - Rename __IOMMU_DOMAIN_DMA_FQ to __IOMMU_DOMAIN_DMA_LAZY to make it more clear
> >   that this bit indicates flush queue use independent of the exact queuing
> >   strategy
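
As a side note for readers of the archive, the domain type layout this
description amounts to ends up roughly as follows (quoting the idea,
not the literal patch; the exact bit positions and the composition of
the pre-existing bits are assumptions here):

	/* Individual domain properties, sketch only */
	#define __IOMMU_DOMAIN_PAGING		(1U << 0) /* iommu_map()/unmap() */
	#define __IOMMU_DOMAIN_DMA_API		(1U << 1) /* used by the DMA-API */
	#define __IOMMU_DOMAIN_DMA_LAZY	(1U << 3) /* some flush queue is used */
	#define __IOMMU_DOMAIN_DMA_PERCPU_Q	(1U << 4) /* per-CPU flush queues */
	#define __IOMMU_DOMAIN_DMA_SINGLE_Q	(1U << 5) /* one global flush queue */

	#define IOMMU_DOMAIN_DMA_FQ	(__IOMMU_DOMAIN_PAGING |	\
					 __IOMMU_DOMAIN_DMA_API |	\
					 __IOMMU_DOMAIN_DMA_LAZY |	\
					 __IOMMU_DOMAIN_DMA_PERCPU_Q)
	#define IOMMU_DOMAIN_DMA_SQ	(__IOMMU_DOMAIN_PAGING |	\
					 __IOMMU_DOMAIN_DMA_API |	\
					 __IOMMU_DOMAIN_DMA_LAZY |	\
					 __IOMMU_DOMAIN_DMA_SINGLE_Q)

With such a layout a check like (type & __IOMMU_DOMAIN_DMA_LAZY)
matches both queue variants, which is what the hunk further down
relies on.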
> 
---8<---
> 
> >  
> > -		for (i = 0; i < IOVA_FQ_SIZE; i++)
> > -			INIT_LIST_HEAD(&fq->entries[i].freelist);
> > +	if (rc) {
> > +		pr_warn("iova flush queue initialization failed\n");
> > +		return rc;
> >  	}
> > 
> ---8<--- 
> > 
> >  	mutex_unlock(&group->mutex);
> > @@ -2896,10 +2900,10 @@ static int iommu_change_dev_def_domain(struct iommu_group *group,
> >  	}
> >  
> >  	/* We can bring up a flush queue without tearing down the domain */
> > -	if (type == IOMMU_DOMAIN_DMA_FQ && prev_dom->type == IOMMU_DOMAIN_DMA) {
> > +	if (!!(type & __IOMMU_DOMAIN_DMA_LAZY) && prev_dom->type == IOMMU_DOMAIN_DMA) {
> >  		ret = iommu_dma_init_fq(prev_dom);
> >  		if (!ret)
> > -			prev_dom->type = IOMMU_DOMAIN_DMA_FQ;
> > +			prev_dom->type = type;
> 
> Here domain->type is set only after calling iommu_dma_init_fq().

Actually, I think even in the current code the above and the similar
code in iommu.c aren't ideal. When going from DMA to DMA-FQ with a
bound driver, the flush queue is in use from the moment that
WRITE_ONCE(cookie->fq_domain, domain) executes in iommu_dma_init_fq(),
so there is a window where the flush queue is already used but
domain->type still reads DMA. By adding a type parameter to
iommu_dma_init_fq() we can set domain->type before the WRITE_ONCE(),
thus closing this window, and it even makes the call sites of
iommu_dma_init_fq() simpler.
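
Concretely, something like the sketch below. Note that
iommu_dma_setup_fq() here is just a placeholder for the existing flush
queue allocation in dma-iommu.c, not an actual function, and the error
handling is illustrative only:

	int iommu_dma_init_fq(struct iommu_domain *domain, int type)
	{
		struct iommu_dma_cookie *cookie = domain->iova_cookie;
		int rc;

		/* Nothing to do if a flush queue is already set up */
		if (cookie->fq_domain)
			return 0;

		/* Placeholder for the per-CPU/single FQ allocation */
		rc = iommu_dma_setup_fq(cookie, type);
		if (rc)
			return rc;

		/*
		 * Set the new type before publishing the flush queue.
		 * Once cookie->fq_domain is observable, the unmap path
		 * may already queue IOVAs, so domain->type must not
		 * still read IOMMU_DOMAIN_DMA at that point.
		 */
		domain->type = type;

		/* Pairs with the read of cookie->fq_domain in queue_iova() */
		smp_wmb();
		WRITE_ONCE(cookie->fq_domain, domain);
		return 0;
	}

The call site quoted above then collapses to
ret = iommu_dma_init_fq(prev_dom, type); with no separate assignment of
prev_dom->type afterwards.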

> 
> >  		goto out;
> >  	}
> >  
> 




