On 12/1/22 09:39, Jason Gunthorpe wrote:
> On Thu, Dec 01, 2022 at 09:38:10AM -0600, Bob Pearson wrote:
>
>> Further, looking at ipoib as an example, it builds sge lists using the lkey from get_dma_mr()
>> and sets the sge->addr to a kernel virtual memory address after previously calling
>> ib_dma_map_single(), so the addresses are mapped for dma access and visible before use.
>> They are unmapped after the read/write operation completes. What is the point of kmapping
>> the addresses after dma mapping them?
>
> Because not everything is ipoib, and things like block will map sgls
> with struct pages, not kva.
>
> Jason

OK, it's working now, but there is a bug in your rxe_mr_fill_pages_from_sgt() routine. You have

	if (xas_xa_index && WARN_ON(sg_iter.sg_pgoffset % PAGE_SIZE)) {...}

which seems to assume that sg_pgoffset contains the byte offset within the current page. But looking at __sg_page_iter_next(), it appears to be the offset in pages into the current sg entry, which results in a splat when I run ib_send_bw.

Bob