On Wed, Mar 2, 2022 at 5:43 PM Hugh Dickins <hughd@xxxxxxxxxx> wrote:
>
> Migration entries do not contribute to a page's reference count: move
> __split_huge_pmd_locked()'s page_ref_add() into pmd_migration's else
> block (along with the page_count() check - a page is quite likely to
> have reference count frozen to 0 when a migration entry is found).
>
> This will fix a very rare anonymous memory leak, after a split_huge_pmd()
> raced with an anon split_huge_page() or an anon THP migrate_pages(): since
> the wrongly raised refcount stopped the page (perhaps small, perhaps huge,
> depending on when the race hit) from ever being freed. At first I thought
> there were worse risks, from prematurely unfreezing a frozen page: but now
> think that would only affect page cache pages, which do not come this way
> (except for anonymous pages in swap cache, perhaps).

Thanks for catching this. I agree there may be an anon memory leak due to
the bumped refcount. But I don't think it could affect page cache pages,
since that code (bumping the refcount) is never called for page cache
pages IIUC.

The patch looks good to me.

Reviewed-by: Yang Shi <shy828301@xxxxxxxxx>

>
> Fixes: ec0abae6dcdf ("mm/thp: fix __split_huge_pmd_locked() for migration PMD")
> Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
> ---
> That's an unfair "Fixes": it did not introduce the problem, but it
> missed this aspect of the problem; and will be a good guide to where this
> refix should go if stable backports are asked for.
>
>  mm/huge_memory.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2039,9 +2039,9 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
>  		young = pmd_young(old_pmd);
>  		soft_dirty = pmd_soft_dirty(old_pmd);
>  		uffd_wp = pmd_uffd_wp(old_pmd);
> +		VM_BUG_ON_PAGE(!page_count(page), page);
> +		page_ref_add(page, HPAGE_PMD_NR - 1);
>  	}
> -	VM_BUG_ON_PAGE(!page_count(page), page);
> -	page_ref_add(page, HPAGE_PMD_NR - 1);
>
>  	/*
>  	 * Withdraw the table only after we mark the pmd entry invalid.
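
[Editorial note: the toy program below is not kernel code; the simplified
refcount rules and the standalone use of HPAGE_PMD_NR are assumptions for
illustration only. It sketches why the misplaced page_ref_add() leaks the
page: when a present huge PMD is split, the extra references taken for the
new PTEs are later dropped as those PTEs are unmapped, but a PMD migration
entry holds no reference, so the same page_ref_add() is never balanced and
the page can never reach a zero refcount.]

	/* Toy model of the refcount accounting, compilable standalone. */
	#include <stdio.h>

	#define HPAGE_PMD_NR 512	/* assumed value for illustration */

	int main(void)
	{
		int refcount;

		/*
		 * Present huge PMD: the huge mapping holds one reference on
		 * the compound page (other references ignored in this model).
		 */
		refcount = 1;
		refcount += HPAGE_PMD_NR - 1;	/* page_ref_add() at split time */
		refcount -= HPAGE_PMD_NR;	/* later: all 512 PTEs unmapped */
		printf("present PMD split:   refcount=%d (page can be freed)\n",
		       refcount);

		/*
		 * PMD migration entry: migration entries hold no reference.
		 * If the split still does page_ref_add(), the PTE-level
		 * migration entries it writes also hold no reference, so the
		 * extra count is never dropped.
		 */
		refcount = 0;
		refcount += HPAGE_PMD_NR - 1;	/* the misplaced page_ref_add() */
		printf("migration PMD split: refcount=%d (never reaches 0: leaked)\n",
		       refcount);

		return 0;
	}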