On Wed, Apr 1, 2020 at 11:58 PM Michal Hocko <mhocko@xxxxxxxxxx> wrote:
>
> On Wed 01-04-20 22:17:58, Pingfan Liu wrote:
> > This patch is a pure code refinement without any functional changes.
> >
> > try_to_unmap_one() is shared by try_to_unmap() and try_to_munlock(). As for
> > unmap, if try_to_unmap_one() returns true, it means the pte has been torn
> > down and the mapcount decremented.
>
> I haven't really checked the full history of the rmap walk, but this is
> certainly not the currently implemented semantic of this callback.
> Returning true only tells the caller that it should continue with other
> VMAs which map the given page. It doesn't really mean that the pte has
> been torn down. The munlock case is a nice example of how that is used

I did not paste the whole story in the commit log, but noted it in the
code. For munlock, we only care that the page is put back on the correct
LRU. But as the commit message says ("As for unmap"), unmap should tear
down the pte; otherwise the page may still be accessed through an old
mapping. Another assumption is left implicit here: for !private device
pages, e.g. fs-dax, there is no need to mlock them.

> properly, while the migration path for device pages shows how it is used
> incorrectly: it doesn't make any sense to walk other VMAs, because
> is_device_private_page is a property of the page, not the VMA. And that
> is the only reason to drop that.
>
> > Apparently the current code
> >
> >         if (IS_ENABLED(CONFIG_MIGRATION) && (flags & TTU_MIGRATION) &&
> >             is_zone_device_page(page) && !is_device_private_page(page))
> >                 return true;
> >
> > conflicts with this logic.
> [...]
> >  /*
> >   * @arg: enum ttu_flags will be passed to this argument
> > + *
> > + * For munlock, return true if @page is not mlocked by @vma without killing pte

Here is the note for the munlock case.

Thanks,
Pingfan

> > + * For unmap, return true after tearing down pte.
> > + * For both cases, return false if rmap_walk should be stopped.
> >   */
> >  static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> >                      unsigned long address, void *arg)
> > @@ -1380,7 +1384,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> >
> >         if (IS_ENABLED(CONFIG_MIGRATION) && (flags & TTU_MIGRATION) &&
> >             is_zone_device_page(page) && !is_device_private_page(page))
> > -               return true;
> > +               return false;
> >
> >         if (flags & TTU_SPLIT_HUGE_PMD) {
> >                 split_huge_pmd_address(vma, address,
> > @@ -1487,7 +1491,7 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> >
> >         if (IS_ENABLED(CONFIG_MIGRATION) &&
> >             (flags & TTU_MIGRATION) &&
> > -           is_zone_device_page(page)) {
> > +           is_device_private_page(page)) {
> >                 swp_entry_t entry;
> >                 pte_t swp_pte;
> >
> > --
> > 2.7.5
>
> --
> Michal Hocko
> SUSE Labs
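
For reference, the callback semantics Michal describes above come from the
walk loop itself. A simplified sketch of rmap_walk_anon() as it looked
around this time (mm/rmap.c; anon_vma lookup, locking and cleanup elided,
so this is a paraphrase rather than the verbatim source):

    static void rmap_walk_anon(struct page *page, struct rmap_walk_control *rwc,
                               bool locked)
    {
            struct anon_vma_chain *avc;

            /* ... anon_vma lookup and locking elided ... */

            anon_vma_interval_tree_foreach(avc, &anon_vma->rb_root,
                                           pgoff_start, pgoff_end) {
                    struct vm_area_struct *vma = avc->vma;
                    unsigned long address = vma_address(page, vma);

                    cond_resched();

                    if (rwc->invalid_vma && rwc->invalid_vma(vma, rwc->arg))
                            continue;

                    /* false from the callback aborts the whole walk ... */
                    if (!rwc->rmap_one(page, vma, address, rwc->arg))
                            break;
                    /* ... while true only means "go on with the next VMA". */
                    if (rwc->done && rwc->done(page))
                            break;
            }

            /* ... unlock elided ... */
    }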
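
The munlock note maps onto this hunk of try_to_unmap_one(); roughly, in
the mainline code of this period (inside the page_vma_mapped_walk() loop,
simplified):

    while (page_vma_mapped_walk(&pvmw)) {
            if (!(flags & TTU_IGNORE_MLOCK)) {
                    if (vma->vm_flags & VM_LOCKED) {
                            /* PTE-mapped THP are never mlocked */
                            if (!PageTransCompound(page))
                                    /* move the page to the unevictable LRU */
                                    mlock_vma_page(page);
                            /* an mlocking vma was found: stop the walk */
                            ret = false;
                            page_vma_mapped_walk_done(&pvmw);
                            break;
                    }
                    /* munlock only probes for VM_LOCKED vmas; no teardown */
                    if (flags & TTU_MUNLOCK)
                            continue;
            }
            /* ... the actual pte teardown for the unmap case follows ... */
    }

So for TTU_MUNLOCK, true really does mean "this vma does not mlock @page,
keep walking", and no pte is touched, which is what the proposed comment
spells out.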
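
Likewise, whether the unmap as a whole succeeded is judged by the
remaining mapcount, not by the callback's return value. A sketch of
try_to_unmap() (TTU_RMAP_LOCKED and the migration special-casing elided):

    bool try_to_unmap(struct page *page, enum ttu_flags flags)
    {
            struct rmap_walk_control rwc = {
                    .rmap_one = try_to_unmap_one,
                    .arg = (void *)flags,
                    .done = page_mapcount_is_zero,
                    .anon_lock = page_lock_anon_vma_read,
            };

            rmap_walk(page, &rwc);

            /* success is "no mappings left", independent of rmap_one's result */
            return !page_mapcount(page);
    }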