From: Jason Gunthorpe <jgg@xxxxxxxxxx>

commit 2e4c02fdecf2f6f55cefe48cb82d93fa4f8e2204 upstream.

mmkey.cacheable and mmkey.rb_key are used together by mlx5_revoke_mr()
to put the MR/mkey back into the cache. In all cases they should be set
correctly.

alloc_cacheable_mr() was setting cacheable but not filling rb_key,
resulting in cache_ent_find_and_store() bucketing them all into a
0-length entry.

implicit_get_child_mr()/mlx5_ib_alloc_implicit_mr() failed to set
cacheable or rb_key at all, so the cache was not working at all for
implicit ODP.

Cc: stable@xxxxxxxxxxxxxxx
Fixes: 8c1185fef68c ("RDMA/mlx5: Change check for cacheable mkeys")
Fixes: dd1b913fb0d0 ("RDMA/mlx5: Cache all user cacheable mkeys on dereg MR flow")
Signed-off-by: Jason Gunthorpe <jgg@xxxxxxxxxx>
Link: https://lore.kernel.org/r/7778c02dfa0999a30d6746c79a23dd7140a9c729.1716900410.git.leon@xxxxxxxxxx
Signed-off-by: Leon Romanovsky <leon@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 drivers/infiniband/hw/mlx5/mr.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -718,6 +718,8 @@ static struct mlx5_ib_mr *_mlx5_mr_cache
 	}
 	mr->mmkey.cache_ent = ent;
 	mr->mmkey.type = MLX5_MKEY_MR;
+	mr->mmkey.rb_key = ent->rb_key;
+	mr->mmkey.cacheable = true;
 	init_waitqueue_head(&mr->mmkey.wait);
 	return mr;
 }
@@ -1168,7 +1170,6 @@ static struct mlx5_ib_mr *alloc_cacheabl
 	mr->ibmr.pd = pd;
 	mr->umem = umem;
 	mr->page_shift = order_base_2(page_size);
-	mr->mmkey.cacheable = true;
 
 	set_mr_fields(dev, mr, umem->length, access_flags, iova);
 	return mr;

Patches currently in stable-queue which might be from jgg@xxxxxxxxxx are

queue-6.9/rdma-mlx5-ensure-created-mkeys-always-have-a-populated-rb_key.patch
queue-6.9/vfio-pci-collect-hot-reset-devices-to-local-buffer.patch
queue-6.9/rdma-mlx5-follow-rb_key.ats-when-creating-new-mkeys.patch
queue-6.9/rdma-rxe-fix-data-copy-for-ib_send_inline.patch
queue-6.9/rdma-mlx5-remove-extra-unlock-on-error-path.patch
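
For readers outside the mlx5 tree, the mechanism the patch repairs works
roughly as follows: on MR deregistration, mlx5_revoke_mr() consults
mmkey.cacheable and, when set, uses mmkey.rb_key to find the cache bucket
the mkey should be parked in (cache_ent_find_and_store() in mr.c, which
walks an rbtree keyed by rb_key). Below is a minimal, compilable userspace
C sketch of that bucketing idea only; the structures and the
find_or_create_bucket()/try_cache_on_dereg() helpers are hypothetical
simplifications for illustration, not the kernel code:

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-ins for the real mlx5 structures (hypothetical). */
struct rb_key {
	unsigned int ndescs;		/* number of translation entries */
	unsigned int access_mode;	/* access mode bits */
};

struct mkey {
	bool cacheable;			/* may this mkey be cached on dereg? */
	struct rb_key rb_key;		/* which cache bucket it belongs to */
};

#define MAX_BUCKETS 8

struct bucket {
	struct rb_key key;
	int nfree;			/* mkeys parked in this bucket */
};

static struct bucket buckets[MAX_BUCKETS];
static int nbuckets;

/* Linear scan stands in for the kernel's rbtree lookup. */
static struct bucket *find_or_create_bucket(struct rb_key key)
{
	for (int i = 0; i < nbuckets; i++)
		if (!memcmp(&buckets[i].key, &key, sizeof(key)))
			return &buckets[i];
	if (nbuckets == MAX_BUCKETS)
		return NULL;
	buckets[nbuckets].key = key;
	return &buckets[nbuckets++];
}

/* The dereg-time decision the patch makes reliable: an mkey is only
 * returned to the cache if cacheable is set, and it is bucketed by
 * rb_key. If rb_key was never filled in (the bug), every such mkey
 * compares equal and collapses into one zero-key bucket.
 */
static bool try_cache_on_dereg(struct mkey *mkey)
{
	struct bucket *b;

	if (!mkey->cacheable)
		return false;		/* caller destroys the mkey instead */
	b = find_or_create_bucket(mkey->rb_key);
	if (!b)
		return false;
	b->nfree++;
	return true;
}

int main(void)
{
	/* Properly filled keys land in distinct buckets. */
	struct mkey a = { .cacheable = true,
			  .rb_key = { .ndescs = 256, .access_mode = 1 } };
	struct mkey b = { .cacheable = true,
			  .rb_key = { .ndescs = 512, .access_mode = 1 } };
	/* Pre-patch behaviour: rb_key left zeroed. */
	struct mkey c = { .cacheable = true };
	struct mkey d = { .cacheable = true };

	try_cache_on_dereg(&a);
	try_cache_on_dereg(&b);
	try_cache_on_dereg(&c);
	try_cache_on_dereg(&d);
	printf("buckets used: %d\n", nbuckets);	/* 3: c and d collide */
	return 0;
}

In this sketch, c and d model the pre-patch mkeys whose rb_key was never
populated: they all hash to the same zero-key bucket, which is the
mis-bucketing the commit message describes for alloc_cacheable_mr(), while
the implicit-ODP paths never set cacheable and so bypassed the cache
entirely.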