Re: [PATCH v2 7/7] iommu/s390: flush queued IOVAs on RPCIT out of resource indication

On 2022-11-29 12:00, Niklas Schnelle wrote:
On Mon, 2022-11-28 at 14:52 +0000, Robin Murphy wrote:
On 2022-11-16 17:16, Niklas Schnelle wrote:
When RPCIT indicates that the underlying hypervisor has run out of
resources, it often means that its IOVA space is exhausted and IOVAs
need to be freed before new ones can be created. By triggering a flush
of the IOVA queue we can get the queued IOVAs freed and also get the
new mapping established during the global flush.

Shouldn't iommu_dma_alloc_iova() already see that the IOVA space is
exhausted and fail the DMA API call before even getting as far as
iommu_map(), though? Or is there some less obvious limitation like a
maximum total number of distinct IOVA regions regardless of size?

Well, yes and no. Your thinking is of course correct: if the advertised
IOVA space can be fully utilized without exhausting hypervisor
resources, we won't trigger this case. Sadly, however, there are
complications. The most obvious is that in QEMU/KVM the restriction of
the IOVA space to what QEMU can actually have mapped at once was only
added recently[0]. Prior to that we would regularly go through this
"I'm out of resources, free me some IOVAs" dance with our existing DMA
API implementation, where it just triggers an early cycle of freeing
all unused IOVAs followed by a global flush. On z/VM I know of no
situation where this is triggered; that said, the signalling is
architected, so z/VM may have corner cases where it does this. On our
bare metal hypervisor (no paging) this return code is unused and IOTLB
flushes are simply hardware cache flushes, as on bare metal platforms.

[0]
https://lore.kernel.org/qemu-devel/20221028194758.204007-4-mjrosato@xxxxxxxxxxxxx/

That sheds a bit more light, thanks, although I'm still not confident I fully understand the whole setup. AFAICS that patch puts a fixed limit on the size of the usable address space. That in turn implies that "free some IOVAs and try again" might be a red herring that is never going to work; for your current implementation, what that presumably means in reality is "free some IOVAs, resetting the allocator to start allocating lower down in the address space where it will happen to be below that limit, and try again", but the iommu-dma allocator won't do that. If it doesn't know that some arbitrary range below the top of the driver-advertised aperture is unusable, it will just keep allocating IOVAs up there and mappings will always fail.

If the driver can't accurately represent the usable IOVA space via the aperture and/or reserved regions, then this whole approach seems doomed. If on the other hand I've misunderstood and you can actually still use any address, just not all of them at the same time, then it might in fact be considerably easier to skip the flush queue mechanism entirely and implement this internally to the driver - basically make .iotlb_sync a no-op for non-strict DMA domains, put the corresponding RPCIT flush and retry in .sync_map, then allow that to propagate an error back to iommu_map() if the new mapping still hasn't taken.
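
Roughly what I have in mind, purely as an illustration (names are reused from the quoted patch, the int return from .iotlb_sync_map is hypothetical and nothing here is tested):

static int s390_iommu_iotlb_sync_map(struct iommu_domain *domain,
				     unsigned long iova, size_t size)
{
	struct s390_domain *s390_domain = to_s390_domain(domain);
	struct zpci_dev *zdev;
	int rc = 0;

	rcu_read_lock();
	list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) {
		if (!zdev->tlb_refresh)
			continue;
		rc = zpci_refresh_trans((u64)zdev->fh << 32, iova, size);
		if (rc != -ENOMEM)
			continue;
		/*
		 * Hypervisor out of resources: flush the whole aperture so
		 * it can reclaim freed entries, then retry just this range.
		 */
		rc = zpci_refresh_trans((u64)zdev->fh << 32,
					domain->geometry.aperture_start,
					domain->geometry.aperture_end -
					domain->geometry.aperture_start + 1);
		if (!rc)
			rc = zpci_refresh_trans((u64)zdev->fh << 32,
						iova, size);
		if (rc)
			break;
	}
	rcu_read_unlock();
	/* would be propagated back to iommu_map() and on to the DMA API */
	return rc;
}

With .iotlb_sync a no-op for non-strict domains, none of the iommu-dma flush queue internals would need to be exposed for this.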

Thanks,
Robin.

Other than the firmware reserved region helpers, which are necessarily
a bit pick-and-mix, I've been trying to remove all the iommu-dma
details from drivers, so I'd really like to maintain that separation
if at all possible.

Hmm, tough one. Having a flush queue implies that we're holding on to
IOVAs that we could free, and this is kind of directly architected
into our IOTLB flush via this "free some IOVAs and try again" error
return.


Signed-off-by: Niklas Schnelle <schnelle@xxxxxxxxxxxxx>
---
   drivers/iommu/dma-iommu.c  | 14 +++++++++-----
   drivers/iommu/dma-iommu.h  |  1 +
   drivers/iommu/s390-iommu.c |  7 +++++--
   3 files changed, 15 insertions(+), 7 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 3801cdf11aa8..54e7f63fd0d9 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -188,19 +188,23 @@ static void fq_flush_single(struct iommu_dma_cookie *cookie)
   	spin_unlock_irqrestore(&fq->lock, flags);
   }
-static void fq_flush_timeout(struct timer_list *t)
+void iommu_dma_flush_fq(struct iommu_dma_cookie *cookie)
   {
-	struct iommu_dma_cookie *cookie = from_timer(cookie, t, fq_timer);
-
-	atomic_set(&cookie->fq_timer_on, 0);
   	fq_flush_iotlb(cookie);
-
   	if (cookie->fq_domain->type == IOMMU_DOMAIN_DMA_FQ)
   		fq_flush_percpu(cookie);
   	else
   		fq_flush_single(cookie);
   }
+static void fq_flush_timeout(struct timer_list *t)
+{
+	struct iommu_dma_cookie *cookie = from_timer(cookie, t, fq_timer);
+
+	atomic_set(&cookie->fq_timer_on, 0);
+	iommu_dma_flush_fq(cookie);
+}
+
   static void queue_iova(struct iommu_dma_cookie *cookie,
   		unsigned long pfn, unsigned long pages,
   		struct list_head *freelist)
diff --git a/drivers/iommu/dma-iommu.h b/drivers/iommu/dma-iommu.h
index 942790009292..cac06030aa26 100644
--- a/drivers/iommu/dma-iommu.h
+++ b/drivers/iommu/dma-iommu.h
@@ -13,6 +13,7 @@ int iommu_get_dma_cookie(struct iommu_domain *domain);
   void iommu_put_dma_cookie(struct iommu_domain *domain);
int iommu_dma_init_fq(struct iommu_domain *domain);
+void iommu_dma_flush_fq(struct iommu_dma_cookie *cookie);
    void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list);
diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
index 087bb2acff30..9c2782c4043e 100644
--- a/drivers/iommu/s390-iommu.c
+++ b/drivers/iommu/s390-iommu.c
@@ -538,14 +538,17 @@ static void s390_iommu_iotlb_sync_map(struct iommu_domain *domain,
   {
   	struct s390_domain *s390_domain = to_s390_domain(domain);
   	struct zpci_dev *zdev;
+	int rc;
rcu_read_lock();
   	list_for_each_entry_rcu(zdev, &s390_domain->devices, iommu_list) {
   		if (!zdev->tlb_refresh)
   			continue;
   		atomic64_inc(&s390_domain->ctrs.sync_map_rpcits);
-		zpci_refresh_trans((u64)zdev->fh << 32,
-				   iova, size);
+		rc = zpci_refresh_trans((u64)zdev->fh << 32,
+					iova, size);
+		if (rc == -ENOMEM)
+			iommu_dma_flush_fq(domain->iova_cookie);

Could -ENOMEM ever be returned for some reason on an IOMMU_DOMAIN_DMA or
IOMMU_DOMAIN_UNMANAGED domain?

In theory yes, and then iommu_dma_flush_fq() still does the
.flush_iotlb_all to give the hypervisor a chance to look for freed
IOVAs, but without flush queues you're really just running out of IOVA
space and that's futile.

This does highlight an important missed issue though. As we don't
return the resulting error from the subsequent .flush_iotlb_all, we
only find out that it didn't work once the mapping is used, whereas
our current DMA API implementation correctly returns DMA_MAPPING_ERROR
in this case. I guess this means we do need error returns from the
IOTLB helpers, since in a paged guest this is where we finally find
out that our mapping couldn't be synced to the hypervisor's shadow
table, and I don't really see a way around that. There are also other
error conditions implied in this shadowing; for example, the
hypervisor could simply fail to pin guest memory, and while we can't
do anything about that, we should at least fail the mapping operation.
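
Just to sketch what I mean by error returns from the IOTLB helpers
(purely illustrative and untested; __iommu_map()/iommu_unmap() are the
existing core helpers, while the function name and the int return from
.iotlb_sync_map are made up):

/* hypothetical variant of the core mapping path */
static int _iommu_map_checked(struct iommu_domain *domain,
			      unsigned long iova, phys_addr_t paddr,
			      size_t size, int prot, gfp_t gfp)
{
	const struct iommu_domain_ops *ops = domain->ops;
	int ret;

	ret = __iommu_map(domain, iova, paddr, size, prot, gfp);
	if (ret)
		return ret;

	if (ops->iotlb_sync_map) {
		/* assumes .iotlb_sync_map is changed to return int */
		ret = ops->iotlb_sync_map(domain, iova, size);
		if (ret) {
			/*
			 * The hypervisor couldn't shadow the new translation
			 * even after the retry, so undo the mapping and let
			 * the DMA API path fail with DMA_MAPPING_ERROR
			 * instead of handing out an unshadowed mapping.
			 */
			iommu_unmap(domain, iova, size);
		}
	}
	return ret;
}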


However I can't figure out how this is supposed to work anyway -
.sync_map only gets called if .map claimed that the actual mapping(s)
succeeded and it can't fail itself, and even if it does free up some
IOVAs at this point by draining the flush queue, I don't see how the
mapping then gets retried, or what happens if it still fails after
that :/

Thanks,
Robin.

Yeah, this is a bit non-obvious, and you are correct in that the
architecture requires a subsequent IOTLB flush, i.e. a retry, for the
range that returned the error. And your last point is then exactly the
issue above: we miss the case where the retry still failed.

As for the good path: in the mapping operation, but before the
.sync_map, we have updated the IOMMU translation table, so the
translation is recorded but not yet synced to the hypervisor's shadow
table. Now when the .sync_map is called and the IOTLB flush returns
-ENOMEM, iommu_dma_flush_fq() will call .flush_iotlb_all, which causes
the hypervisor to look at the entire guest translation table and
shadow all translations that were not yet shadowed. I.e. the
.flush_iotlb_all "retries" the failed .sync_map.


   	}
   	rcu_read_unlock();
   }




