On Wed, Jan 13, 2021 at 04:43:32PM -0800, Dan Williams wrote:
> The conversion to move pfn_to_online_page() internal to
> soft_offline_page() missed that the get_user_pages() reference taken by
> the madvise() path needs to be dropped when pfn_to_online_page() fails.
> Note the direct sysfs-path to soft_offline_page() does not perform a
> get_user_pages() lookup.
>
> When soft_offline_page() is handed a pfn_valid() &&
> !pfn_to_online_page() pfn the kernel hangs at dax-device shutdown due to
> a leaked reference.
>
> Fixes: feec24a6139d ("mm, soft-offline: convert parameter to pfn")
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Naoya Horiguchi <nao.horiguchi@xxxxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxxxxx>
> Reviewed-by: David Hildenbrand <david@xxxxxxxxxx>
> Reviewed-by: Oscar Salvador <osalvador@xxxxxxx>
> Cc: <stable@xxxxxxxxxxxxxxx>
> Signed-off-by: Dan Williams <dan.j.williams@xxxxxxxxx>

I'm OK with this if we don't have any better approach, but the proposed
changes make the code a little messy, and I feel that get_user_pages()
might be the right place to fix this.

Is get_user_pages() expected to return a struct page with a reference
held even for pfns that are pfn_valid() but not online?  I thought that
such pages are only used by drivers for dax-devices, but that might be
wrong.  Can I ask for a little more explanation from this perspective?
(For reference, a rough sketch of the madvise() caller path where the
reference is taken is appended below the quoted patch.)

Thanks,
Naoya Horiguchi

> ---
>  mm/memory-failure.c |   20 ++++++++++++++++----
>  1 file changed, 16 insertions(+), 4 deletions(-)
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 5a38e9eade94..78b173c7190c 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1885,6 +1885,12 @@ static int soft_offline_free_page(struct page *page)
>       return rc;
>  }
>
> +static void put_ref_page(struct page *page)
> +{
> +     if (page)
> +             put_page(page);
> +}
> +
>  /**
>   * soft_offline_page - Soft offline a page.
>   * @pfn: pfn to soft-offline
> @@ -1910,20 +1916,26 @@ static int soft_offline_free_page(struct page *page)
>  int soft_offline_page(unsigned long pfn, int flags)
>  {
>       int ret;
> -     struct page *page;
>       bool try_again = true;
> +     struct page *page, *ref_page = NULL;
> +
> +     WARN_ON_ONCE(!pfn_valid(pfn) && (flags & MF_COUNT_INCREASED));
>
>       if (!pfn_valid(pfn))
>               return -ENXIO;
> +     if (flags & MF_COUNT_INCREASED)
> +             ref_page = pfn_to_page(pfn);
> +
>       /* Only online pages can be soft-offlined (esp., not ZONE_DEVICE). */
>       page = pfn_to_online_page(pfn);
> -     if (!page)
> +     if (!page) {
> +             put_ref_page(ref_page);
>               return -EIO;
> +     }
>
>       if (PageHWPoison(page)) {
>               pr_info("%s: %#lx page already poisoned\n", __func__, pfn);
> -             if (flags & MF_COUNT_INCREASED)
> -                     put_page(page);
> +             put_ref_page(ref_page);
>               return 0;
>       }
>
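
For reference, here is a rough, paraphrased sketch of the
madvise(MADV_SOFT_OFFLINE) caller path (written from memory, so not the
exact mm/madvise.c code, and the helper name below is made up) just to
show where the reference this patch now drops comes from:

/*
 * Rough sketch only: a paraphrase of the madvise(MADV_SOFT_OFFLINE)
 * path, assumed to sit in mm/madvise.c context (the usual mm headers
 * are already included there).  Not the exact upstream code.
 */
static int soft_offline_one_page_sketch(unsigned long start)
{
        struct page *page;
        unsigned long pfn;
        int ret;

        /* Pins the page backing 'start', i.e. takes a page reference. */
        ret = get_user_pages_fast(start, 1, 0, &page);
        if (ret != 1)
                return ret;
        pfn = page_to_pfn(page);

        /*
         * MF_COUNT_INCREASED tells soft_offline_page() that the caller
         * already holds a reference and expects it to be consumed.  If
         * soft_offline_page() returns early (e.g. pfn_to_online_page()
         * fails for a pfn_valid() dax pfn) without dropping it, that
         * reference is leaked -- which is what the put_ref_page() calls
         * in the patch above are for.
         */
        return soft_offline_page(pfn, MF_COUNT_INCREASED);
}

So my question above is essentially whether get_user_pages_fast() should
hand out that reference for such pfn_valid() but offline pages at all,
or whether soft_offline_page() should keep cleaning up after it as this
patch does.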