On Wed, 26 May 2021, Yang Shi wrote:

> When debugging the bug reported by Wang Yugui [1], try_to_unmap() may
> fail, but the first VM_BUG_ON_PAGE() just checks page_mapcount(); however,
> it may miss the failure when the head page is unmapped but another subpage
> is mapped.  Then the second DEBUG_VM BUG(), which checks total mapcount,
> would catch it.  This may cause some confusion.  And since this is not a
> fatal issue, consolidate the two DEBUG_VM checks into one
> VM_WARN_ON_ONCE_PAGE().
>
> [1] https://lore.kernel.org/linux-mm/20210412180659.B9E3.409509F4@xxxxxxxxxxxx/
>
> Reviewed-by: Zi Yan <ziy@xxxxxxxxxx>
> Signed-off-by: Yang Shi <shy828301@xxxxxxxxx>

Acked-by: Hugh Dickins <hughd@xxxxxxxxxx>

Thanks: and 2/2 already has my Ack, correct.

> ---
> v4: Updated the subject and commit log per Hugh.
>     Reordered the patches per Hugh.
> v3: Incorporated the comments from Hugh.  Kept Zi Yan's reviewed-by tag
>     since there is no fundamental change against v2.
> v2: Removed dead code and updated the comment of try_to_unmap() per Zi
>     Yan.
>
>  mm/huge_memory.c | 24 +++++++-----------------
>  1 file changed, 7 insertions(+), 17 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 19195fca1aee..8827f82c5302 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2336,15 +2336,15 @@ static void unmap_page(struct page *page)
>  {
>  	enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK |
>  		TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD;
> -	bool unmap_success;
>  
>  	VM_BUG_ON_PAGE(!PageHead(page), page);
>  
>  	if (PageAnon(page))
>  		ttu_flags |= TTU_SPLIT_FREEZE;
>  
> -	unmap_success = try_to_unmap(page, ttu_flags);
> -	VM_BUG_ON_PAGE(!unmap_success, page);
> +	try_to_unmap(page, ttu_flags);
> +
> +	VM_WARN_ON_ONCE_PAGE(page_mapped(page), page);
>  }
>  
>  static void remap_page(struct page *page, unsigned int nr)
> @@ -2655,7 +2655,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
>  	struct deferred_split *ds_queue = get_deferred_split_queue(head);
>  	struct anon_vma *anon_vma = NULL;
>  	struct address_space *mapping = NULL;
> -	int count, mapcount, extra_pins, ret;
> +	int extra_pins, ret;
>  	pgoff_t end;
>  
>  	VM_BUG_ON_PAGE(is_huge_zero_page(head), head);
> @@ -2714,7 +2714,6 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
>  	}
>  
>  	unmap_page(head);
> -	VM_BUG_ON_PAGE(compound_mapcount(head), head);
>  
>  	/* block interrupt reentry in xa_lock and spinlock */
>  	local_irq_disable();
> @@ -2732,9 +2731,7 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
>  
>  	/* Prevent deferred_split_scan() touching ->_refcount */
>  	spin_lock(&ds_queue->split_queue_lock);
> -	count = page_count(head);
> -	mapcount = total_mapcount(head);
> -	if (!mapcount && page_ref_freeze(head, 1 + extra_pins)) {
> +	if (page_ref_freeze(head, 1 + extra_pins)) {
>  		if (!list_empty(page_deferred_list(head))) {
>  			ds_queue->split_queue_len--;
>  			list_del(page_deferred_list(head));
> @@ -2754,16 +2751,9 @@ int split_huge_page_to_list(struct page *page, struct list_head *list)
>  		__split_huge_page(page, list, end);
>  		ret = 0;
>  	} else {
> -		if (IS_ENABLED(CONFIG_DEBUG_VM) && mapcount) {
> -			pr_alert("total_mapcount: %u, page_count(): %u\n",
> -					mapcount, count);
> -			if (PageTail(page))
> -				dump_page(head, NULL);
> -			dump_page(page, "total_mapcount(head) > 0");
> -			BUG();
> -		}
>  		spin_unlock(&ds_queue->split_queue_lock);
> -fail:		if (mapping)
> +fail:
> +		if (mapping)
>  			xa_unlock(&mapping->i_pages);
>  		local_irq_enable();
>  		remap_page(head, thp_nr_pages(head));
> --
> 2.26.2
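
For reference, this is how unmap_page() reads with the patch applied; it is
reconstructed only from the first hunk above (not quoted from the full
mm/huge_memory.c), with explanatory comments added here:

static void unmap_page(struct page *page)
{
	enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK |
		TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD;

	VM_BUG_ON_PAGE(!PageHead(page), page);

	if (PageAnon(page))
		ttu_flags |= TTU_SPLIT_FREEZE;

	/* try_to_unmap() may fail; its return value is no longer checked. */
	try_to_unmap(page, ttu_flags);

	/* One warn-once check replaces the two DEBUG_VM checks. */
	VM_WARN_ON_ONCE_PAGE(page_mapped(page), page);
}

A still-mapped subpage is now reported once with a page dump instead of
BUGging the machine; and with the !mapcount test removed from
split_huge_page_to_list(), any remaining mappings simply make the
page_ref_freeze() check fail (mapped subpages hold page references), so the
split backs out through the fail path.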