On 6/6/22 02:30, John Garry wrote:
> As reported in [0], DMA mappings whose size exceeds the IOMMU IOVA caching limit may see a big performance hit.
>
> This series introduces a new DMA mapping API, dma_opt_mapping_size(), so that drivers may know this limit when performance is a factor in the mapping. Robin didn't like using dma_max_mapping_size() for this [1].
>
> The SCSI core code is modified to use this limit. I also added a patch for libata-scsi, as it does not currently honour the shost max_sectors limit.
>
> Note: Christoph has previously kindly offered to take this series via the dma-mapping tree, so I think that we just need an ack from the IOMMU guys now.
>
> [0] https://lore.kernel.org/linux-iommu/20210129092120.1482-1-thunder.leizhen@xxxxxxxxxx/
> [1] https://lore.kernel.org/linux-iommu/f5b78c9c-312e-70ab-ecbb-f14623a4b6e3@xxxxxxx/
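As far as I can tell, the intended usage is something like the sketch below. This is only my reading of the cover letter, assuming dma_opt_mapping_size() takes the DMA device and returns the optimal limit in bytes; the clamp_host_max_sectors() helper and its placement are illustrative, not the actual SCSI core change in the series:

/*
 * Illustrative sketch only: cap a SCSI host's transfer size at the
 * IOVA-caching-friendly limit reported by the DMA layer.  The helper
 * name and its placement are hypothetical; the series decides where
 * this clamping actually lives in the SCSI core.
 */
#include <linux/blkdev.h>
#include <linux/dma-mapping.h>
#include <linux/minmax.h>
#include <scsi/scsi_host.h>

static void clamp_host_max_sectors(struct Scsi_Host *shost,
				   struct device *dma_dev)
{
	size_t opt = dma_opt_mapping_size(dma_dev);

	if (opt)
		shost->max_sectors = min_t(unsigned int, shost->max_sectors,
					   opt >> SECTOR_SHIFT);
}

The point being that a driver (or the SCSI core on its behalf) can keep requests within the IOVA caching limit without knowing anything about the IOMMU internals.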
Regarding [0], that patch reverts commit 4e89dce72521 ("iommu/iova: Retry from last rb tree node if iova search fails"). Reading the description of that patch, it seems to me that the real problem is the iova allocator itself. Shouldn't the allocator be improved such that this patch series is no longer needed? There are algorithms that handle fragmentation much better than the current iova allocator, e.g. buddy allocation (https://en.wikipedia.org/wiki/Buddy_memory_allocation).
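To make the suggestion concrete, the bookkeeping a buddy-style allocator does is roughly the following. This is a self-contained userspace toy for illustration only, not kernel code and not wired into the existing iova rbtree:

/*
 * Toy buddy allocator over an arena of 1 << MAX_ORDER units.
 * free_map[order][idx] marks whether block idx of that order is free.
 * Illustration of the split/merge idea only.
 */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define MAX_ORDER	10
#define ARENA_SIZE	(1u << MAX_ORDER)

static bool free_map[MAX_ORDER + 1][ARENA_SIZE];

static void buddy_init(void)
{
	memset(free_map, 0, sizeof(free_map));
	free_map[MAX_ORDER][0] = true;	/* one big free block */
}

static int order_for(unsigned int units)
{
	int order = 0;

	while ((1u << order) < units)
		order++;
	return order;
}

/* Returns the offset in units, or -1 if no space. */
static int buddy_alloc(unsigned int units)
{
	int order = order_for(units);
	int o, idx;

	/* find the smallest free block that fits */
	for (o = order; o <= MAX_ORDER; o++) {
		for (idx = 0; idx < (int)(ARENA_SIZE >> o); idx++) {
			if (!free_map[o][idx])
				continue;
			free_map[o][idx] = false;
			/* split down to the requested order */
			while (o > order) {
				o--;
				idx <<= 1;
				free_map[o][idx + 1] = true; /* right buddy stays free */
			}
			return idx << order;
		}
	}
	return -1;
}

static void buddy_free(int offset, unsigned int units)
{
	int order = order_for(units);
	int idx = offset >> order;

	/* merge upwards while the buddy block is also free */
	while (order < MAX_ORDER && free_map[order][idx ^ 1]) {
		free_map[order][idx ^ 1] = false;
		idx >>= 1;
		order++;
	}
	free_map[order][idx] = true;
}

int main(void)
{
	buddy_init();
	int a = buddy_alloc(5);		/* rounded up to an 8-unit block */
	int b = buddy_alloc(16);
	printf("a=%d b=%d\n", a, b);
	buddy_free(a, 5);
	buddy_free(b, 16);		/* arena merges back to one block */
	return 0;
}

Because blocks are always split and merged in power-of-two buddies, fragmentation stays bounded and a failed search never has to walk the whole address space, which is the behaviour the reverted retry heuristic was trying to paper over.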
Thanks, Bart.