The buffer that holds the page DMA addresses is sized off umem->nmap.
This can lead to out-of-bounds accesses on the PBL array when iterating
the umem DMA-mapped SGL, because when umem pages are combined,
umem->nmap can be much lower than the number of system pages in the
umem. Use umem->npages to size this buffer instead.

Cc: Moni Shoua <monis@xxxxxxxxxxxx>
Signed-off-by: Shiraz Saleem <shiraz.saleem@xxxxxxxxx>
---
 drivers/infiniband/sw/rxe/rxe_mr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index 42f0f25..9606ec5 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -179,7 +179,7 @@ int rxe_mem_init_user(struct rxe_pd *pd, u64 start,
 	}
 
 	mem->umem = umem;
-	num_buf = umem->nmap;
+	num_buf = umem->npages;
 
 	rxe_mem_init(access, mem);
-- 
1.8.3.1
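
For reference (not part of the patch): a minimal sketch of the failure mode
described above. The function and variable names below (fill_page_dma_sketch,
page_buf) are hypothetical, and the loop shape is only an assumption about how
a driver walks the umem SGL page by page; it illustrates why a buffer sized by
umem->nmap (DMA-mapped SG entries) can overflow once SG entries are combined,
while sizing by umem->npages cannot.

/*
 * Illustrative sketch only, not part of the patch.  fill_page_dma_sketch()
 * and page_buf are hypothetical names; the loop shape is an assumption.
 */
#include <linux/scatterlist.h>
#include <linux/mm.h>
#include <rdma/ib_umem.h>

static unsigned int fill_page_dma_sketch(struct ib_umem *umem, u64 *page_buf)
{
	struct scatterlist *sg;
	unsigned int i, n = 0;
	u64 off;

	/*
	 * umem->nmap counts DMA-mapped SG entries.  When physically
	 * contiguous pages are combined, a single entry can span many
	 * pages, so umem->nmap <= umem->npages.
	 */
	for_each_sg(umem->sg_head.sgl, sg, umem->nmap, i) {
		/* One slot is consumed per PAGE_SIZE chunk of the entry. */
		for (off = 0; off < sg_dma_len(sg); off += PAGE_SIZE) {
			/*
			 * If page_buf was allocated with only umem->nmap
			 * slots, this write runs past the end as soon as
			 * any SG entry covers more than one page.
			 * Allocating umem->npages slots (as the patch does
			 * for rxe's num_buf) keeps the index in bounds.
			 */
			page_buf[n++] = sg_dma_address(sg) + off;
		}
	}

	/* For page-aligned umems this equals umem->npages, not umem->nmap. */
	return n;
}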