On Sun, Nov 15, 2020 at 01:43:04PM +0200, Leon Romanovsky wrote:
> From: Leon Romanovsky <leonro@xxxxxxxxxx>
>
> Changelog:
> v1:
>  * Added patch for raw QP
>  * Fixed commit messages
> v0: https://lore.kernel.org/lkml/20201026132635.1337663-1-leon@xxxxxxxxxx
>
> -------------------------
> From Jason:
>
> Move the remaining cases working with umems to use versions of
> ib_umem_find_best_pgsz() tailored to the calculations the device
> requires.
>
> Unlike an MR, there is no IOVA; instead a page offset from the starting
> page is possible, with various restrictions.
>
> Compute the best page size to meet the page_offset restrictions.
>
> Thanks
>
> Jason Gunthorpe (7):
>   RDMA/mlx5: Use ib_umem_find_best_pgoff() for SRQ
>   RDMA/mlx5: Use mlx5_umem_find_best_quantized_pgoff() for WQ
>   RDMA/mlx5: Directly compute the PAS list for raw QP RQ's
>   RDMA/mlx5: Use mlx5_umem_find_best_quantized_pgoff() for QP
>   RDMA/mlx5: mlx5_umem_find_best_quantized_pgoff() for CQ
>   RDMA/mlx5: Use ib_umem_find_best_pgsz() for devx
>   RDMA/mlx5: Lower setting the umem's PAS for SRQ

Applied to for-next

Jason
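
For readers following along, below is a minimal, illustrative sketch of how a
queue driver might use ib_umem_find_best_pgoff() when the hardware exposes a
page_offset field instead of an IOVA. The pgsz_bitmap and offset-mask values,
and the example_best_pgsz() helper itself, are hypothetical placeholders and
not mlx5's actual limits or code:

    #include <linux/bits.h>
    #include <rdma/ib_umem.h>

    /*
     * Illustrative sketch only: choose the largest supported page size for
     * a umem that has no IOVA, where the hardware instead allows the buffer
     * to start at an offset within the first page but can only represent a
     * limited range of offset bits.
     */
    static unsigned long example_best_pgsz(struct ib_umem *umem)
    {
    	/* Hypothetical HW: every page size from 4K up to 2M is supported. */
    	unsigned long pgsz_bitmap = GENMASK(21, 12);

    	/*
    	 * Hypothetical HW: the page_offset field counts in 64-byte units
    	 * and is 6 bits wide, so only address bits 6..11 can be absorbed
    	 * into the offset.
    	 */
    	u64 pgoff_bitmask = GENMASK_ULL(11, 6);

    	/*
    	 * ib_umem_find_best_pgoff() wraps ib_umem_find_best_pgsz(),
    	 * feeding it the umem's starting offset masked to the bits the
    	 * hardware can express; it returns 0 if no supported page size
    	 * satisfies the restriction.
    	 */
    	return ib_umem_find_best_pgoff(umem, pgsz_bitmap, pgoff_bitmask);
    }

As I understand it, the mlx5_umem_find_best_quantized_pgoff() variant named
in the WQ/QP/CQ patches additionally quantizes the resulting offset into the
units and field width the device interface expects; the sketch above only
covers the plain page-offset case.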