On Wed, Jul 7, 2021 at 1:06 PM Hugh Dickins <hughd@xxxxxxxxxx> wrote:
>
> Parallel developments in mm/rmap.c have left behind some out-of-date
> comments: try_to_migrate_one() also accepts TTU_SYNC (already commented
> in try_to_migrate() itself), and try_to_migrate() returns nothing at all.
>
> TTU_SPLIT_FREEZE has just been deleted, so reword the comment about it in
> mm/huge_memory.c; and TTU_IGNORE_ACCESS was removed in 5.11, so delete

I just realized this: currently unmap_page() just unmaps file pages when
splitting THP, but it seems this may cause some trouble for the page cache
speculative get in the case below, IIUC. Am I missing something?

CPU A                                 CPU B
unmap_page()
  ...
  freeze refcount
                                      find_get_page()
                                        -> __page_cache_add_speculative()
                                           -> VM_BUG_ON_PAGE(page_count(page) == 0, page);
                                              // when CONFIG_TINY_RCU is enabled

The race is acceptable, I think; we could replace the VM_BUG_ON_PAGE() with
page_ref_add_unless(), just like the !CONFIG_TINY_RCU case (see the untested
sketch at the bottom of this mail, below the quote).

> the "recently referenced" comment from try_to_unmap_one() (once upon a
> time the comment was near the removed codeblock, but they drifted apart).
>
> Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
> ---
>  mm/huge_memory.c | 2 +-
>  mm/rmap.c        | 7 +------
>  2 files changed, 2 insertions(+), 7 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 8b731d53e9f4..afff3ac87067 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2331,7 +2331,7 @@ static void remap_page(struct page *page, unsigned int nr)
>  {
>         int i;
>
> -       /* If TTU_SPLIT_FREEZE is ever extended to file, remove this check */
> +       /* If unmap_page() uses try_to_migrate() on file, remove this check */
>         if (!PageAnon(page))
>                 return;
>         if (PageTransHuge(page)) {
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 37c24672125c..746013e282c3 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1439,8 +1439,6 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>         while (page_vma_mapped_walk(&pvmw)) {
>                 /*
>                  * If the page is mlock()d, we cannot swap it out.
> -                * If it's recently referenced (perhaps page_referenced
> -                * skipped over this mm) then we should reactivate it.
>                  */
>                 if (!(flags & TTU_IGNORE_MLOCK)) {
>                         if (vma->vm_flags & VM_LOCKED) {
> @@ -1687,8 +1685,7 @@ void try_to_unmap(struct page *page, enum ttu_flags flags)
>   * @arg: enum ttu_flags will be passed to this argument.
>   *
>   * If TTU_SPLIT_HUGE_PMD is specified any PMD mappings will be split into PTEs
> - * containing migration entries. This and TTU_RMAP_LOCKED are the only supported
> - * flags.
> + * containing migration entries.
>   */
>  static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
>                 unsigned long address, void *arg)
> @@ -1928,8 +1925,6 @@ static bool try_to_migrate_one(struct page *page, struct vm_area_struct *vma,
>   *
>   * Tries to remove all the page table entries which are mapping this page and
>   * replace them with special swap entries. Caller must hold the page lock.
> - *
> - * If is successful, return true. Otherwise, false.
>   */
>  void try_to_migrate(struct page *page, enum ttu_flags flags)
>  {
> --
> 2.26.2
>
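
For what it's worth, below is only a rough, untested sketch of the idea,
not a proper patch (based on how __page_cache_add_speculative() in
include/linux/pagemap.h looks to me; the real context and comments there
differ): drop the CONFIG_TINY_RCU-only VM_BUG_ON_PAGE() and instead fail
the speculative get when the refcount is already frozen, as the SMP path
does, so the caller retries.

static inline int __page_cache_add_speculative(struct page *page, int count)
{
#ifdef CONFIG_TINY_RCU
# ifdef CONFIG_PREEMPT_COUNT
        VM_BUG_ON(!in_atomic() && !irqs_disabled());
# endif
#endif
        /*
         * Instead of VM_BUG_ON_PAGE(page_count(page) == 0, page), bail
         * out when the refcount has been frozen (e.g. by unmap_page()
         * splitting a THP) and let the caller retry, for TINY_RCU and
         * SMP alike.
         */
        if (unlikely(!page_ref_add_unless(page, count, 0)))
                return 0;

        VM_BUG_ON_PAGE(PageTail(page), page);
        return 1;
}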