On Wed, Jun 2, 2021 at 2:49 PM Robin Murphy <robin.murphy@xxxxxxx> wrote:
> >> Thanks for the quick response & patch. I tried it out and indeed it
> >> does solve the issue:
>
> Cool, thanks Jussi. May I infer a Tested-by tag from that?

Of course!

> Given that the race looks to have been pretty theoretical until now, I'm
> not convinced it's worth the bother of digging through the long history
> of default domain and DMA ops movement to figure where it started, much
> less attempt invasive backports. The flush queue change which made it
> apparent only landed in 5.13-rc1, so as long as we can get this in as a
> fix in the current cycle we should be golden - in the meantime, note
> that booting with "iommu.strict=0" should also restore the expected
> behaviour.
>
> FWIW I do still plan to resend the patch "properly" soon (in all honesty
> it wasn't even compile-tested!)

BTW, even with the patch there's quite a bit of spin lock contention
coming from ice_xmit_xdp_ring->dma_map_page_attrs->...->alloc_iova.
CPU load drops from 85% to 20% (~80Mpps, 64b UDP) when iommu is
disabled. Is this type of overhead to be expected?
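
For reference, here's a rough sketch of what I understand each frame to
be doing on the map side (not the actual ice driver code; the function
name and the attrs choice are just illustrative):

    /*
     * Illustrative only: per-frame DMA mapping in an XDP TX path.
     * With the IOMMU enabled, dma_map_page_attrs() ends up in the
     * iommu-dma path and allocates an IOVA per frame, which is where
     * the alloc_iova lock contention shows up in the profile.
     */
    #include <linux/dma-mapping.h>

    static int xdp_tx_map_frame(struct device *dev, struct page *page,
    			    unsigned int len, dma_addr_t *dma)
    {
    	*dma = dma_map_page_attrs(dev, page, 0, len, DMA_TO_DEVICE,
    				  DMA_ATTR_SKIP_CPU_SYNC);
    	if (dma_mapping_error(dev, *dma))
    		return -ENOMEM;
    	return 0;
    }

So at ~80Mpps that's tens of millions of IOVA allocations and frees per
second, if my reading is right.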