On 03/11/2023 23:04, Bart Van Assche wrote:
>
> On 11/3/23 02:55, Li Zhijian wrote:
>> -	return ib_sg_to_pages(ibmr, sgl, sg_nents, sg_offset, rxe_set_page);
>> +	for_each_sg(sgl, sg, sg_nents, i) {
>> +		u64 dma_addr = sg_dma_address(sg) + sg_offset;
>> +		unsigned int dma_len = sg_dma_len(sg) - sg_offset;
>> +		u64 end_dma_addr = dma_addr + dma_len;
>> +		u64 page_addr = dma_addr & PAGE_MASK;
>> +
>> +		if (sg_dma_len(sg) == 0) {
>> +			rxe_dbg_mr(mr, "empty SGE\n");
>> +			return -EINVAL;
>> +		}
>> +		do {
>> +			int ret = rxe_store_page(mr, page_addr);
>> +			if (ret)
>> +				return ret;
>> +
>> +			page_addr += PAGE_SIZE;
>> +		} while (page_addr < end_dma_addr);
>> +		sg_offset = 0;
>> +	}
>> +
>> +	return ib_sg_to_pages(ibmr, sgl, sg_nents, sg_offset_p, rxe_set_page);
>>   }
>
> Is this change necessary? There is already a loop in ib_sg_to_pages()
> that splits SG entries that are larger than mr->page_size into entries
> with size mr->page_size.

I see. My thought was that we can only safely access the PAGE_SIZE range
[page_va, page_va + PAGE_SIZE) returned by kmap_local_page(page). However,
when mr->page_size is larger than PAGE_SIZE, we may end up accessing the
following pages without mapping them. (A rough sketch of the access pattern
I have in mind is below.)

Thanks
Zhijian
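
For illustration only, a minimal sketch of what I mean by staying inside
the single PAGE_SIZE window mapped by kmap_local_page(). copy_from_pages()
and its parameters are made-up names for this sketch, not the actual rxe
code:

#include <linux/highmem.h>
#include <linux/minmax.h>
#include <linux/mm.h>
#include <linux/string.h>

/*
 * Copy 'len' bytes starting at byte 'offset' of an array of pages,
 * never touching more than one PAGE_SIZE window per kmap_local_page()
 * mapping.  'pages' is assumed to cover offset + len bytes.
 */
static int copy_from_pages(struct page **pages, void *dst,
			   size_t len, size_t offset)
{
	size_t copied = 0;

	while (copied < len) {
		size_t pos = offset + copied;
		size_t page_off = pos & ~PAGE_MASK;
		/* Clamp the chunk so it stays inside the page mapped below. */
		size_t chunk = min_t(size_t, len - copied,
				     PAGE_SIZE - page_off);
		void *va = kmap_local_page(pages[pos >> PAGE_SHIFT]);

		memcpy(dst + copied, va + page_off, chunk);
		kunmap_local(va);
		copied += chunk;
	}

	return 0;
}

With mr->page_size larger than PAGE_SIZE, a single stored "page" of the MR
would span several such PAGE_SIZE windows, so each window would need its
own kmap_local_page()/kunmap_local() pair as above.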