This is a note to let you know that I've just added the patch titled

    RDMA/umem: Fix ib_umem_find_best_pgsz() for mappings that cross a page boundary

to the 5.9-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     rdma-umem-fix-ib_umem_find_best_pgsz-for-mappings-th.patch
and it can be found in the queue-5.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


commit fdd6770fb60c12dbaac1c13f24588ef4d74457ca
Author: Jason Gunthorpe <jgg@xxxxxxxxxx>
Date:   Fri Sep 4 19:41:42 2020 -0300

    RDMA/umem: Fix ib_umem_find_best_pgsz() for mappings that cross a page boundary

    [ Upstream commit a40c20dabdf9045270767c75918feb67f0727c89 ]

    It is possible for a single SGL to span an aligned boundary, e.g. if the
    SGL is

      61440 -> 90112

    then the length is 28672, which currently limits the block size to 32k.
    With a 32k page size the two covering blocks will be:

      32768 -> 65536 and 65536 -> 98304

    However, the correct answer is a 128K block size, which will span the
    whole 28672 bytes in a single block.

    Instead of limiting based on length, figure out which high IOVA bits
    don't change between the start and end addresses. That is the highest
    useful page size.

    Fixes: 4a35339958f1 ("RDMA/umem: Add API to find best driver supported page size in an MR")
    Link: https://lore.kernel.org/r/1-v2-270386b7e60b+28f4-umem_1_jgg@xxxxxxxxxx
    Reviewed-by: Leon Romanovsky <leonro@xxxxxxxxxx>
    Reviewed-by: Shiraz Saleem <shiraz.saleem@xxxxxxxxx>
    Signed-off-by: Jason Gunthorpe <jgg@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 831bff8d52e54..09539dd764ec0 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -156,8 +156,13 @@ unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
 		return 0;
 
 	va = virt;
-	/* max page size not to exceed MR length */
-	mask = roundup_pow_of_two(umem->length);
+	/* The best result is the smallest page size that results in the minimum
+	 * number of required pages. Compute the largest page size that could
+	 * work based on VA address bits that don't change.
+	 */
+	mask = pgsz_bitmap &
+	       GENMASK(BITS_PER_LONG - 1,
+		       bits_per((umem->length - 1 + virt) ^ virt));
 	/* offset into first SGL */
 	pgoff = umem->address & ~PAGE_MASK;
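
As a rough standalone illustration of the new bound (not the kernel code
itself; bits_per() and genmask() below are simplified userspace stand-ins
for the kernel's bits_per() and GENMASK() helpers), the changelog's example
can be checked like so:

/*
 * Userspace sketch of the patch's new upper-bound computation: find the
 * largest page size for which [virt, virt + length) stays inside one
 * aligned block, using the high IOVA bits that do not change.
 */
#include <stdio.h>

#define BITS_PER_LONG (8 * (int)sizeof(unsigned long))

/* Number of bits needed to represent n (mirrors the kernel's bits_per()). */
static unsigned int bits_per(unsigned long n)
{
	return n < 2 ? 1 : (unsigned int)(BITS_PER_LONG - __builtin_clzl(n));
}

/* Bitmask with bits h..l set (mirrors the kernel's GENMASK()). */
static unsigned long genmask(unsigned int h, unsigned int l)
{
	return (~0UL << l) & (~0UL >> (BITS_PER_LONG - 1 - h));
}

int main(void)
{
	unsigned long virt = 61440, length = 28672; /* example from the changelog */
	unsigned long pgsz_bitmap = ~0UL;           /* assume HW supports every size */
	unsigned long mask;

	/* Bits at or above bits_per(start ^ end) are identical for every
	 * address in the range, so any page size with its bit in that span
	 * covers the whole range with the minimum number of blocks.
	 */
	mask = pgsz_bitmap &
	       genmask(BITS_PER_LONG - 1, bits_per((length - 1 + virt) ^ virt));

	/* Lowest surviving bit = smallest such page size. */
	printf("best-case block size: %lu bytes\n", mask & -mask);
	return 0;
}

Built with gcc, this prints 131072, i.e. the 128K block the changelog
describes, whereas the old length-based cap of roundup_pow_of_two(28672)
= 32768 topped out at 32k.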