From: chenqiwu <chenqiwu@xxxxxxxxxx>

When walking all the vmas that map an anon page via its anon_vma, there
is a risk of a panic because one of the mapping vmas or its vm_mm may
already have been released. For example, the following crash happened
while kswapd was reclaiming pages:

Unable to handle kernel NULL pointer dereference at virtual address 00000048
PC is at page_vma_mapped_walk+0x54/0x16c
LR is at page_referenced_one+0x44/0x140
[......]
CPU: 1 PID: 161 Comm: kswapd0
Call trace:
[<ffffff9a080832d0>] el1_da+0x24/0x3c
[<ffffff9a0823ea18>] page_vma_mapped_walk+0x54/0x16c
[<ffffff9a0823fd8c>] page_referenced_one+0x44/0x140
[<ffffff9a08240ef0>] rmap_walk_anon+0x124/0x168
[<ffffff9a0823fcfc>] page_referenced+0x144/0x190
[<ffffff9a08213e6c>] shrink_active_list+0x25c/0x478
[<ffffff9a082126c8>] kswapd+0x7b0/0x9c8
[<ffffff9a080daffc>] kthread+0x154/0x18c
[<ffffff9a0808563c>] ret_from_fork+0x10/0x18

The PC points at the following line:

bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
{
	struct mm_struct *mm = pvmw->vma->vm_mm;
	......
	pgd = pgd_offset(mm, pvmw->address);	//PC
	......
}

The crash happens because pvmw->vma->vm_mm is a NULL pointer, so
pgd_offset() dereferences NULL when it looks up the pgd entry.

This patch fixes the problem by checking that both the mapping vma and
its vm_mm are valid. If either is not, just continue traversing
anon_vma->rb_root to avoid dereferencing a junk pointer.

Signed-off-by: chenqiwu <chenqiwu@xxxxxxxxxx>
---
 mm/rmap.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/rmap.c b/mm/rmap.c
index b3e3819..fc42ca2 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1863,6 +1863,9 @@ static void rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc,
 		if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg))
 			continue;
 
+		if (!vma || !vma->vm_mm)
+			continue;
+
 		if (!rwc->rmap_one(page, vma, address, rwc->arg))
 			break;
 		if (rwc->done && rwc->done(page))
-- 
1.9.1
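
For context only, not part of the submitted patch: below is a minimal
user-space sketch of why the oops reports a fault at a small address
like 00000048 when vm_mm is NULL. pgd_offset(mm, addr) begins by
reading mm->pgd, so a NULL mm turns that first load into an access at
the offset of the pgd field within struct mm_struct. The toy_mm struct
and its 0x48 bytes of padding below are assumptions made up for this
illustration, not the real arm64 mm_struct layout.

/*
 * Illustration only: user-space model of why a NULL vm_mm makes
 * pgd_offset() fault at a small address such as 0x48.  The toy_mm
 * layout is an assumption for this sketch, not real kernel layout.
 */
#include <stddef.h>
#include <stdio.h>

struct toy_mm {
	char before_pgd[0x48];	/* assumed fields preceding pgd */
	unsigned long *pgd;	/* pgd_offset() reads this pointer first */
};

int main(void)
{
	/*
	 * With a NULL struct toy_mm pointer, reading ->pgd would touch
	 * the address equal to the field offset (0x48 here), mirroring
	 * the "virtual address 00000048" in the oops above.
	 */
	printf("fault address would be 0x%zx\n", offsetof(struct toy_mm, pgd));
	return 0;
}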