On 16/05/2024 17:50, Honggang LI wrote:
> For RDMA Send and Write with IB_SEND_INLINE, the memory buffers
> specified in the sge list are placed inline in the Send Request.
>
> The data should be copied by the CPU from the virtual addresses
> corresponding to the sge list's DMA addresses.
>
> Fixes: 8d7c7c0eeb74 ("RDMA: Add ib_virt_dma_to_page()")
> Signed-off-by: Honggang LI <honggangli@xxxxxxx>

Good catch.

Reviewed-by: Li Zhijian <lizhijian@xxxxxxxxxxx>

(BTW, does this mean the current pyverbs tests in rdma-core don't cover
IB_SEND_INLINE?)

> ---
>  drivers/infiniband/sw/rxe/rxe_verbs.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
> index 614581989b38..b94d05e9167a 100644
> --- a/drivers/infiniband/sw/rxe/rxe_verbs.c
> +++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
> @@ -812,7 +812,7 @@ static void copy_inline_data_to_wqe(struct rxe_send_wqe *wqe,
>  	int i;
>
>  	for (i = 0; i < ibwr->num_sge; i++, sge++) {
> -		memcpy(p, ib_virt_dma_to_page(sge->addr), sge->length);
> +		memcpy(p, ib_virt_dma_to_ptr(sge->addr), sge->length);
>  		p += sge->length;
>  	}
>  }
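
For reference, the reason the old line was broken: my understanding of
the two helpers (from include/rdma/ib_verbs.h as of the Fixes commit;
a sketch from memory, please check the tree for the exact code) is
roughly:

/*
 * Soft RDMA drivers hand out kernel virtual addresses as "DMA"
 * addresses, so converting back is just a cast.
 */
static inline void *ib_virt_dma_to_ptr(u64 dma_addr)
{
	/* A pointer to the payload itself -- a valid memcpy() source. */
	return (void *)(uintptr_t)dma_addr;
}

static inline struct page *ib_virt_dma_to_page(u64 dma_addr)
{
	/*
	 * A pointer to the struct page *metadata* for that address,
	 * not to the data -- memcpy()ing from this copies the wrong
	 * bytes.
	 */
	return virt_to_page(ib_virt_dma_to_ptr(dma_addr));
}

So the old copy_inline_data_to_wqe() was copying from the struct page
rather than from the sender's buffer, which would explain corrupted
inline payloads.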