On Wed, Jan 05, 2022 at 09:18:41AM -0500, trondmy@xxxxxxxxxx wrote:
> From: Trond Myklebust <trond.myklebust@xxxxxxxxxxxxxxx>
>
> When doing RPC/RDMA, we're seeing a kernel panic when __ib_umem_release()
> iterates over the scatter gather list and hits NULL pages.
>
> It turns out that commit 79fbd3e1241c ended up changing the iteration
> from being over only the mapped entries to being over the original list
> size.

You mean this?

-	for_each_sg(umem->sg_head.sgl, sg, umem->sg_nents, i)
+	for_each_sgtable_sg(&umem->sgt_append.sgt, sg, i)

I don't see what changed there? The invariant should be that

  umem->sg_nents == sgt->orig_nents

> @@ -55,7 +55,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
> 	ib_dma_unmap_sgtable_attrs(dev, &umem->sgt_append.sgt,
> 				   DMA_BIDIRECTIONAL, 0);
>
> -	for_each_sgtable_sg(&umem->sgt_append.sgt, sg, i)
> +	for_each_sgtable_dma_sg(&umem->sgt_append.sgt, sg, i)
> 		unpin_user_page_range_dirty_lock(sg_page(sg),

Calling sg_page() from under a dma_sg iterator is unconditionally
wrong. More likely, in your case something has gone wrong when the
sgtable was created and it has the wrong value in orig_nents.

Jason
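
For context, a minimal sketch of the two iterator contracts being
contrasted above. The function sgt_iteration_sketch() is hypothetical
and exists only for illustration; the iterator macros and accessors
are the real ones from include/linux/scatterlist.h.

#include <linux/printk.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>

/*
 * for_each_sgtable_sg() walks all sgt->orig_nents CPU-side entries;
 * each carries a struct page, so sg_page() is valid here. This is
 * the list that page unpinning must walk.
 *
 * for_each_sgtable_dma_sg() walks only the sgt->nents entries
 * produced by dma_map_sgtable(); an IOMMU may coalesce segments so
 * that nents <= orig_nents, and only the sg_dma_*() accessors are
 * meaningful on this side.
 */
static void sgt_iteration_sketch(struct sg_table *sgt)
{
	struct scatterlist *sg;
	int i;

	/* CPU side: one entry per original segment, pages are valid. */
	for_each_sgtable_sg(sgt, sg, i)
		pr_info("page %p len %u\n", sg_page(sg), sg->length);

	/*
	 * DMA side: valid only after dma_map_sgtable(); do not call
	 * sg_page() here.
	 */
	for_each_sgtable_dma_sg(sgt, sg, i) {
		dma_addr_t addr = sg_dma_address(sg);

		pr_info("dma %pad len %u\n", &addr, sg_dma_len(sg));
	}
}

This split is why the proposed switch to for_each_sgtable_dma_sg()
while still calling sg_page() cannot be the fix: unpinning needs the
CPU-side list, so the question becomes why that list contains NULL
pages in the first place.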