On Wed, Jan 05, 2022 at 03:02:34PM +0000, Trond Myklebust wrote:
> On Wed, 2022-01-05 at 10:37 -0400, Jason Gunthorpe wrote:
> > On Wed, Jan 05, 2022 at 09:18:41AM -0500, trondmy@xxxxxxxxxx wrote:
> > > From: Trond Myklebust <trond.myklebust@xxxxxxxxxxxxxxx>
> > >
> > > When doing RPC/RDMA, we're seeing a kernel panic when
> > > __ib_umem_release() iterates over the scatter-gather list and
> > > hits NULL pages.
> > >
> > > It turns out that commit 79fbd3e1241c ended up changing the
> > > iteration from being over only the mapped entries to being over
> > > the original list size.
> >
> > You mean this?
> >
> > -	for_each_sg(umem->sg_head.sgl, sg, umem->sg_nents, i)
> > +	for_each_sgtable_sg(&umem->sgt_append.sgt, sg, i)
> >
> > I don't see what changed there? The invariant should be that
> >
> >   umem->sg_nents == sgt->orig_nents
> >
> > > @@ -55,7 +55,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
> > >  		ib_dma_unmap_sgtable_attrs(dev, &umem->sgt_append.sgt,
> > >  					   DMA_BIDIRECTIONAL, 0);
> > >
> > > -	for_each_sgtable_sg(&umem->sgt_append.sgt, sg, i)
> > > +	for_each_sgtable_dma_sg(&umem->sgt_append.sgt, sg, i)
> > >  		unpin_user_page_range_dirty_lock(sg_page(sg),
> >
> > Calling sg_page() from under a dma_sg iterator is unconditionally
> > wrong..
> >
> > More likely your case is something has gone wrong when the sgtable
> > was created and it has the wrong value in orig_nents..
>
> Can you define "wrong value" in this case? Chuck's RPC/RDMA code
> appears to call ib_alloc_mr() with an 'expected maximum number of
> entries' (depth) in net/sunrpc/xprtrdma/frwr_ops.c:frwr_mr_init().
>
> It then fills that table with a set of n <= depth pages in
> net/sunrpc/xprtrdma/frwr_ops.c:frwr_map() and calls ib_dma_map_sg()
> to map them, and then adjusts the sgtable with a call to
> ib_map_mr_sg().

I'm confused; RPC/RDMA should never touch a umem at all.

Is this really the other bug where user and kernel MR are getting
confused?

Jason
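
For readers following the iterator distinction above, here is a minimal
illustrative sketch (walk_sgtable() is a hypothetical function, not code
from the kernel tree) of which accessors each sg_table iterator makes
valid. Once a table has been DMA-mapped, sgt->orig_nents counts the
CPU-side entries while sgt->nents counts the DMA-side entries, which can
be fewer because the IOMMU may coalesce segments:

/*
 * Illustrative only: which accessors are valid under each sg_table
 * iterator. Assumes the table was DMA-mapped (e.g. via
 * dma_map_sgtable()), so sgt->nents <= sgt->orig_nents.
 */
#include <linux/printk.h>
#include <linux/scatterlist.h>

static void walk_sgtable(struct sg_table *sgt)
{
	struct scatterlist *sg;
	int i;

	/* CPU-side walk: all orig_nents entries; sg_page() is valid here */
	for_each_sgtable_sg(sgt, sg, i)
		pr_info("page %p len %u\n", sg_page(sg), sg->length);

	/*
	 * DMA-side walk: only the nents mapped entries. Here
	 * sg_dma_address()/sg_dma_len() are valid but sg_page() is not --
	 * after coalescing, a DMA entry need not correspond to any
	 * particular page.
	 */
	for_each_sgtable_dma_sg(sgt, sg, i)
		pr_info("dma %pad len %u\n", &sg_dma_address(sg),
			sg_dma_len(sg));
}

This is the substance of the objection: for_each_sgtable_dma_sg() walks
DMA descriptors, where sg_page() has no defined meaning, so an unpin
loop built on it reads the wrong field even when it doesn't crash.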
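
Similarly, a hedged condensation of the kernel-MR registration flow
Trond points at (sketch_frwr_setup() is a made-up name; the real logic,
with xprtrdma's own types and error handling, lives in
net/sunrpc/xprtrdma/frwr_ops.c). The point of Jason's reply stands out
here: nothing in this path creates an ib_umem -- that object exists only
for userspace registrations:

#include <rdma/ib_verbs.h>

/* Hypothetical condensation of frwr_mr_init() + frwr_map() */
static int sketch_frwr_setup(struct ib_pd *pd, struct scatterlist *sgl,
			     int n, int depth)
{
	struct ib_device *dev = pd->device;
	struct ib_mr *mr;
	int dma_nents, mapped;

	/* frwr_mr_init(): allocate an FRWR MR sized for up to 'depth' pages */
	mr = ib_alloc_mr(pd, IB_MR_TYPE_MEM_REG, depth);
	if (IS_ERR(mr))
		return PTR_ERR(mr);

	/* frwr_map(): DMA-map the n <= depth entries the caller filled in */
	dma_nents = ib_dma_map_sg(dev, sgl, n, DMA_BIDIRECTIONAL);
	if (!dma_nents) {
		ib_dereg_mr(mr);
		return -EIO;
	}

	/* ... then load the MR's page list from the mapped segments */
	mapped = ib_map_mr_sg(mr, sgl, dma_nents, NULL, PAGE_SIZE);
	if (mapped != dma_nents) {
		ib_dma_unmap_sg(dev, sgl, n, DMA_BIDIRECTIONAL);
		ib_dereg_mr(mr);
		return mapped < 0 ? mapped : -EIO;
	}
	return 0;
}

DMA_BIDIRECTIONAL and the error returns are simplifications for the
sketch; in the real code the DMA direction depends on whether the chunk
is being read or written.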