On Fri, Mar 25, 2022 at 04:14:28PM -0400, Rik van Riel wrote:
> In some cases it appears the invalidation of a hwpoisoned page
> fails because the page is still mapped in another process. This
> can cause a program to be continuously restarted and die when
> it page faults on the page that was not invalidated. Avoid that
> problem by unmapping the hwpoisoned page when we find it.
>
> Another issue is that sometimes we end up oopsing in finish_fault,
> if the code tries to do something with the now-NULL vmf->page.
> I did not hit this error when submitting the previous patch because
> there are several opportunities for alloc_set_pte to bail out before
> accessing vmf->page, and that apparently happened on those systems,
> and most of the time on other systems, too.
>
> However, across several million systems that error does occur a
> handful of times a day. It can be avoided by returning VM_FAULT_NOPAGE,
> which will cause do_read_fault to return before calling finish_fault.

I artificially created clean/dirty page cache pages with the
PageHWPoison flag set (with SystemTap), then reproduced the NULL
pointer dereference by page fault on the current mainline branch
(with e53ac7374e64). And I confirmed that the bug was fixed with
this patch, so the fix seems to work. (Maybe I should have done
this kind of testing before merging e53ac7374e64, sorry...)
Anyway, thank you very much.

Tested-by: Naoya Horiguchi <naoya.horiguchi@xxxxxxx>

>
> Fixes: e53ac7374e64 ("mm: invalidate hwpoison page cache page in fault path")
> Cc: Oscar Salvador <osalvador@xxxxxxx>
> Cc: Miaohe Lin <linmiaohe@xxxxxxxxxx>
> Cc: Naoya Horiguchi <naoya.horiguchi@xxxxxxx>
> Cc: Mel Gorman <mgorman@xxxxxxx>
> Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx
> ---
>  mm/memory.c | 12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index be44d0b36b18..76e3af9639d9 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3918,14 +3918,18 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
>                  return ret;
>
>          if (unlikely(PageHWPoison(vmf->page))) {
> +                struct page *page = vmf->page;
>                  vm_fault_t poisonret = VM_FAULT_HWPOISON;
>                  if (ret & VM_FAULT_LOCKED) {
> +                        if (page_mapped(page))
> +                                unmap_mapping_pages(page_mapping(page),
> +                                                    page->index, 1, false);
>                          /* Retry if a clean page was removed from the cache. */
> -                        if (invalidate_inode_page(vmf->page))
> -                                poisonret = 0;
> -                        unlock_page(vmf->page);
> +                        if (invalidate_inode_page(page))
> +                                poisonret = VM_FAULT_NOPAGE;
> +                        unlock_page(page);
>                  }
> -                put_page(vmf->page);
> +                put_page(page);
>                  vmf->page = NULL;
>                  return poisonret;
>          }
> --
> 2.35.1
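
For reference, the control flow that makes VM_FAULT_NOPAGE the right
return value here: do_read_fault checks the bits returned by __do_fault
before it ever calls finish_fault. A simplified sketch of do_read_fault
in mm/memory.c (fault-around handling and other details elided; not the
verbatim mainline code):

        /*
         * Simplified sketch of do_read_fault(); only the error/bail-out
         * path relevant to this patch is shown.
         */
        static vm_fault_t do_read_fault(struct vm_fault *vmf)
        {
                vm_fault_t ret;

                ret = __do_fault(vmf);  /* may now return VM_FAULT_NOPAGE */
                if (unlikely(ret & (VM_FAULT_ERROR | VM_FAULT_NOPAGE |
                                    VM_FAULT_RETRY)))
                        return ret;     /* bails out before touching vmf->page */

                ret |= finish_fault(vmf);  /* would oops on vmf->page == NULL */
                unlock_page(vmf->page);
                return ret;
        }

Returning 0 on a successful invalidation, as the previous patch did,
falls through to finish_fault with vmf->page already set to NULL;
VM_FAULT_NOPAGE takes the early return instead.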