Re: [PATCH v3] mm/gup: Allow real explicit breaking of COW

On Fri, Aug 21, 2020 at 10:00:59AM -0700, Linus Torvalds wrote:
> On Fri, Aug 21, 2020 at 8:48 AM Jan Kara <jack@xxxxxxx> wrote:
> >
> > I was more concerned about the case where you decide to writeably map (i.e.
> > wp_page_reuse() path) a PageKsm() page.
> 
> Yeah, so I think what I do is stricter than what we used to do - any
> KSM page will never be re-used, simply because the KSM part will have
> incremented the page count.

IIUC, Jan wanted to point out the fact that KSM doesn't increase the page count
for stable pages (the reasons are documented above get_ksm_page()).

> 
> So as far as I can tell, with that patch we will never ever share
> except for the "I really am the _only_ user of the page, there are no
> KSM or swap cache pages" case.
> 
> That's the whole point of the patch. Get rid of all the games. If
> there is *any* possible other use - be it KSM or swap cache or
> *anything*, we don't try to re-use it.
> 
> > And also here I was more concerned that page_mapcount != 1 || page_count !=
> > 1 check could be actually a weaker check than what reuse_swap_page() does.
> 
> If that is the case, then yes, that would be a problem.
> 
> But really, if page_count() == 1, then we're the only possible thing
> that holds that page. Nothing else can have a reference to it - by
> definition.

Do we still at least need to check the swap count if PageSwapCache(page)?

Besides a complete cleanup, I'm now wondering whether we should use a smaller
change like the one below.  IMHO we can still simplify things by dropping the
KSM special case before taking the page lock.  Since KSM pages should be rare
in general, it seems not worth it to make every single CoW fault check for them:

diff --git a/mm/memory.c b/mm/memory.c
index 602f4283122f..b852d393bcc7 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2928,9 +2928,6 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
         */
        if (PageAnon(vmf->page)) {
                int total_map_swapcount;
-               if (PageKsm(vmf->page) && (PageSwapCache(vmf->page) ||
-                                          page_count(vmf->page) != 1))
-                       goto copy;
                if (!trylock_page(vmf->page)) {
                        get_page(vmf->page);
                        pte_unmap_unlock(vmf->pte, vmf->ptl);
@@ -2946,6 +2943,10 @@ static vm_fault_t do_wp_page(struct vm_fault *vmf)
                        }
                        put_page(vmf->page);
                }
+               if (page_count(vmf->page) != 1) {
+                       unlock_page(vmf->page);
+                       goto copy;
+               }
                if (PageKsm(vmf->page)) {
                        bool reused = reuse_ksm_page(vmf->page, vmf->vma,
                                                     vmf->address);

So we check page_count() (which covers both KSM and normal pages) after we've
taken the page lock, while keeping all the rest.  It's also safe to drop the
PageKsm() && PageSwapCache() condition because reuse_ksm_page() will check
PageSwapCache() again later on.

Thanks,

-- 
Peter Xu




