On Wed, Feb 05, 2025 at 04:17:16PM -0700, Alex Williamson wrote:
> As GPU BAR sizes increase, the cost of DMA mapping pfnmap ranges has
> become a significant overhead for VMs making use of device assignment.
> Not only does each mapping require upwards of a few seconds, but BARs
> are mapped in and out of the VM address space multiple times during
> guest boot. Also factor in that multi-GPU configurations are
> increasingly commonplace and BAR sizes are continuing to increase.
> Configurations today can already be delayed minutes during guest boot.
>
> We've taken steps to make Linux a better guest by batching PCI BAR
> sizing operations[1], but it only provides an incremental improvement.
>
> This series attempts to fully address the issue by leveraging the huge
> pfnmap support added in v6.12. When we insert pfnmaps using pud and
> pmd mappings, we can later take advantage of the knowledge of the
> mapping level page mask to iterate on the relevant mapping stride. In
> the commonly achieved optimal case, this results in a reduction of pfn
> lookups by a factor of 256k. For a local test system, an overhead of
> ~1s for DMA mapping a 32GB PCI BAR is reduced to sub-millisecond (8M
> page sized operations reduced to 32 pud sized operations).
>
> Please review, test, and provide feedback. I hope that mm folks can
> ack the trivial follow_pfnmap_args update to provide the mapping level
> page mask. Naming is hard, so any preference other than pgmask is
> welcome. Thanks,
>
> Alex
>
> [1] https://lore.kernel.org/all/20250120182202.1878581-1-alex.williamson@xxxxxxxxxx/
>
> Alex Williamson (5):
>   vfio/type1: Catch zero from pin_user_pages_remote()
>   vfio/type1: Convert all vaddr_get_pfns() callers to use vfio_batch
>   vfio/type1: Use vfio_batch for vaddr_get_pfns()
>   mm: Provide page mask in struct follow_pfnmap_args
>   vfio/type1: Use mapping page mask for pfnmaps

FWIW:

Reviewed-by: Peter Xu <peterx@xxxxxxxxxx>

Thanks,

-- 
Peter Xu
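
P.S. For anyone who wants to sanity check the stride arithmetic quoted
above, here is a minimal standalone sketch. This is not code from the
series; the mask values are illustrative stand-ins for the mapping
level page mask that follow_pfnmap_args would report at pte vs. pud
level, and the BAR size matches the cover letter's local test system:

#include <stdio.h>

int main(void)
{
	/* 32GB PCI BAR, as in the cover letter's example. */
	unsigned long long bar = 32ULL << 30;

	/* Illustrative page masks for a 4KB pte mapping and a 1GB
	 * pud mapping; the stride per iteration is the mapping size,
	 * i.e. ~mask + 1. */
	unsigned long long pte_mask = ~((1ULL << 12) - 1);
	unsigned long long pud_mask = ~((1ULL << 30) - 1);

	/* Iterations needed to walk the BAR at each mapping level. */
	printf("pte-sized steps: %llu\n", bar / (~pte_mask + 1));
	printf("pud-sized steps: %llu\n", bar / (~pud_mask + 1));
	return 0;
}

This prints 8388608 (8M) pte-sized steps versus 32 pud-sized steps,
i.e. the factor of 256k (2^18) reduction cited above.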