Re: [PATCH v4 1/2] iommu/s390: Fix race with release_device ops

---8<---
> > > 
> > > I do have a working prototype using the common implementation, but
> > > the big problem I'm still searching for a solution to is its
> > > performance with a virtualized IOMMU where IOTLB flushes (RPCIT on
> > > s390) are used for shadowing and are expensive and serialized. The
> > > optimization we used so far for unmap, only doing one global IOTLB
> > > flush once we run out of IOVA space, is just too much better in that
> > > scenario to ignore. As one data point, on an NVMe I get about
> > > _twice_ the IOPS when using our existing scheme compared to strict
> > > mode. Which makes sense, as IOTLB flushes are known to be the bottleneck
> > > and optimizing unmap like that reduces them by almost half. Queued
> > > flushing is still much worse, likely due to serialization of the
> > > shadowing, though again it works great on LPAR. To make sure it's not
> > > due to some bug in the IOMMU driver I even tried converting our
> > > existing DMA driver to layer on top of the IOMMU driver with the same
> > > result.
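
(For reference, the scheme described above boils down to something like
the sketch below on the allocation side. This is simplified from memory
and not the literal arch/s390/pci/pci_dma.c code; locking, alignment and
error handling are omitted and field names may differ slightly. Unmap
itself only sets a bit in the "lazy" bitmap instead of flushing.)

/*
 * Simplified sketch of the existing s390 DMA API scheme: IOVAs are
 * handed out from a bitmap, unmapped ranges are only marked in a
 * second "lazily unmapped" bitmap, and a single global IOTLB flush
 * is issued once the allocator runs out of space.
 */
static long lazy_alloc_iova_pages(struct zpci_dev *zdev, unsigned int pages)
{
	unsigned long offset;

	offset = bitmap_find_next_zero_area(zdev->iommu_bitmap,
					    zdev->iommu_pages,
					    zdev->next_bit, pages, 0);
	if (offset >= zdev->iommu_pages) {
		/*
		 * Out of IOVA space: one global RPCIT makes all lazily
		 * unmapped addresses safe to reuse.
		 */
		zpci_refresh_trans((u64)zdev->fh << 32, zdev->start_dma,
				   zdev->iommu_pages * PAGE_SIZE);
		bitmap_andnot(zdev->iommu_bitmap, zdev->iommu_bitmap,
			      zdev->lazy_bitmap, zdev->iommu_pages);
		bitmap_zero(zdev->lazy_bitmap, zdev->iommu_pages);

		offset = bitmap_find_next_zero_area(zdev->iommu_bitmap,
						    zdev->iommu_pages,
						    0, pages, 0);
		if (offset >= zdev->iommu_pages)
			return -1;
	}
	bitmap_set(zdev->iommu_bitmap, offset, pages);
	zdev->next_bit = offset + pages;
	return offset;
}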
> > 
> > FWIW, can you approximate the same behaviour by just making IOVA_FQ_SIZE 
> > and IOVA_FQ_TIMEOUT really big, and deferring your zpci_refresh_trans() 
> > hook from .unmap to .flush_iotlb_all when in non-strict mode?
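
(On the s390 side that deferral ends up looking roughly like the sketch
below, i.e. a .flush_iotlb_all that refreshes the whole aperture of each
attached device. Rough sketch only: struct and field names are
abbreviated, and the .iotlb_sync/.iotlb_sync_map variants as well as
error handling are omitted.)

static void s390_iommu_flush_iotlb_all(struct iommu_domain *domain)
{
	struct s390_domain *s390_domain = to_s390_domain(domain);
	struct s390_domain_device *domain_device;
	unsigned long flags;

	spin_lock_irqsave(&s390_domain->list_lock, flags);
	list_for_each_entry(domain_device, &s390_domain->devices, list) {
		struct zpci_dev *zdev = domain_device->zdev;

		/* One RPCIT over the whole aperture instead of per unmap */
		zpci_refresh_trans((u64)zdev->fh << 32, zdev->start_dma,
				   zdev->end_dma - zdev->start_dma + 1);
	}
	spin_unlock_irqrestore(&s390_domain->list_lock, flags);
}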
> > 
> > I'm not against the idea of trying to support this mode of operation 
> > better in the common code, since it seems like it could potentially be 
> > useful for *any* virtualised scenario where trapping to invalidate is 
> > expensive and the user is happy to trade off the additional address 
> > space/memory overhead (and even greater loss of memory protection) 
> > against that.
> > 
> > Robin.
> 
> Ah, thanks for reminding me. I had tried that earlier but quickly ran
> into the size limit of per-CPU allocations. This time I turned the
> "struct iova_fq_entry entries" member into a pointer and allocated
> that with vmalloc(). Also, thankfully, the ops->flush_iotlb_all(),
> iommu_iotlb_sync(), and iommu_iotlb_sync_map() already perfectly
> match our needs.
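
(Concretely, the per-CPU size-limit workaround is roughly the following
against the flush-queue code in drivers/iommu/dma-iommu.c. This is a
rough sketch rather than the exact diff; the matching vfree() on
teardown and the error unwinding are omitted.)

/* Entry array becomes a pointer so it no longer has to fit into the
 * limited per-CPU allocation area once IOVA_FQ_SIZE is cranked up.
 */
struct iova_fq {
	struct iova_fq_entry *entries;	/* was: entries[IOVA_FQ_SIZE] */
	unsigned int head, tail;
	spinlock_t lock;
};

/* In iommu_dma_init_fq(), after alloc_percpu(struct iova_fq): */
for_each_possible_cpu(cpu) {
	struct iova_fq *fq = per_cpu_ptr(queue, cpu);

	fq->entries = vmalloc(array_size(IOVA_FQ_SIZE, sizeof(*fq->entries)));
	if (!fq->entries)
		goto out_free_entries;	/* hypothetical unwind label */

	fq->head = 0;
	fq->tail = 0;
	spin_lock_init(&fq->lock);

	for (i = 0; i < IOVA_FQ_SIZE; i++)
		INIT_LIST_HEAD(&fq->entries[i].freelist);
}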
> 
> Okay, this is _very_ interesting. With the above, cranking IOVA_FQ_SIZE
> all the way up to 32768 and IOVA_FQ_TIMEOUT to 4000 ms, I can get to
> about 91% of the performance of our scheme (layered on the IOMMU API).
> That also seems to be the limit. I guess there is also more overhead
> than with our bitmap IOVA allocation, which doesn't need any
> bookkeeping besides a "lazily unmapped" bit per page. With a more sane
> IOVA_FQ_SIZE of 8192 and a 100 ms timeout I still get about 76% of
> the performance.
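
(That is, on top of the change above, simply bumping the defines in the
flush-queue code; these are the experimental values, the stock defaults
are much smaller.)

#define IOVA_FQ_SIZE	32768	/* flush-queue entries per CPU */
#define IOVA_FQ_TIMEOUT	4000	/* flush timeout in ms */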
> 
> Interestingly, with the above changes but the default values for
> IOVA_FQ_SIZE/IOVA_FQ_TIMEOUT, things are much worse than even strict
> mode (~50%) and I get less than 8% of the IOPS with this NVMe.
> 
> So yeah, it seems you're right and one can largely emulate our scheme
> with this. I do wonder if we could go further and do a "flush on
> running out of IOVAs" domain type with acceptable changes. My rough
> idea would be to collect lazily freed IOVAs in the same data structure
> as the free IOVAs; then, on running out of those, one can simply do a
> global IOTLB flush and the lazily freed IOVAs become the new free
> IOVAs. With that, the global reset would be even cheaper than with our
> bitmaps.

Ok, disregard the last part; that's obviously not how the IOVA
allocation works. I'll have to take an actual look.

> For a generic case one would of course also need to track the
> gather->freelist, which we don't use on s390, but e.g. virtio-iommu
> doesn't seem to use it either. What do you think?
> 
