Re: [PATCH] mm/rmap: do not add fully unmapped large folio to deferred split list

On 12.04.24 22:35, Yang Shi wrote:
On Fri, Apr 12, 2024 at 12:33 PM David Hildenbrand <david@xxxxxxxxxx> wrote:

On 12.04.24 16:35, Zi Yan wrote:
On 11 Apr 2024, at 11:46, David Hildenbrand wrote:

On 11.04.24 17:32, Zi Yan wrote:
From: Zi Yan <ziy@xxxxxxxxxx>

In __folio_remove_rmap(), a large folio is added to the deferred split list
if any page in the folio loses its final mapping. But it is possible that
the folio is fully unmapped, in which case adding it to the deferred split
list is unnecessary. Fix it by checking the folio's mapcount before adding
it to the deferred split list.

Signed-off-by: Zi Yan <ziy@xxxxxxxxxx>
---
 mm/rmap.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 2608c40dffad..d599a772e282 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1494,7 +1494,7 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 		enum rmap_level level)
 {
 	atomic_t *mapped = &folio->_nr_pages_mapped;
-	int last, nr = 0, nr_pmdmapped = 0;
+	int last, nr = 0, nr_pmdmapped = 0, mapcount = 0;
 	enum node_stat_item idx;
 
 	__folio_rmap_sanity_checks(folio, page, nr_pages, level);
@@ -1506,7 +1506,8 @@ static __always_inline void __folio_remove_rmap(struct folio *folio,
 			break;
 		}
 
-		atomic_sub(nr_pages, &folio->_large_mapcount);
+		mapcount = atomic_sub_return(nr_pages,
+					     &folio->_large_mapcount) + 1;

That becomes a new memory barrier on some archs. Rather just re-read it below. Re-reading should be fine here.
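Roughly like this, as an untested sketch (the queueing site below is
paraphrased, not quoted from the tree):

	/* Keep the cheap, non-returning RMW when unmapping ... */
	atomic_sub(nr_pages, &folio->_large_mapcount);

	/*
	 * ... and re-read at the point that decides about the deferred
	 * split queue. atomic_read() is a plain annotated load, not an
	 * RMW, so it implies no barrier on any architecture.
	 */
	if (folio_large_mapcount(folio))
		/* Still (partially) mapped, deferred split may be needed. */
		deferred_split_folio(folio);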

Would atomic_sub_return_relaxed() work? Originally I was using atomic_read(mapped)
below, but to save an atomic op, I chose to read mapcount here.

Some points:

(1) I suggest reading up on atomic get/set vs. atomic RMW vs. atomic RMW
operations that return a value -- and how they interact with memory barriers.
Further, note that the relaxed variants are only actually optimized on some
architectures; elsewhere they fall back to the fully ordered versions.

atomic_read() is usually READ_ONCE(), which is just an "ordinary" memory
access that should not be refetched. Usually cheaper than most other stuff
that involves atomics.
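
For instance, the generic fallback is just a single annotated load (modulo
the instrumentation wrappers):

	/* include/asm-generic/atomic.h, roughly */
	#define arch_atomic_read(v)	READ_ONCE((v)->counter)

whereas atomic_sub_return() is a value-returning RMW that implies full
ordering on many architectures.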

(2) We can either use folio_large_mapcount() == 0 or !atomic_read(mapped)
to figure out if the folio is now completely unmapped.
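
Sketched against the code in the patch above (untested; the exact queueing
condition is paraphrased):

	/*
	 * "mapped" is &folio->_nr_pages_mapped from the function above;
	 * it reads 0 once the folio is completely unmapped, so only
	 * queue the folio for deferred split while something is still
	 * mapped.
	 */
	if (nr && atomic_read(mapped) &&
	    folio_test_large(folio) && folio_test_anon(folio))
		deferred_split_folio(folio);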

(3) There is one fundamental issue: if we are not batch-unmapping the whole
thing, we will still add the folios to the deferred split queue. Migration
would still do that, and so would unmapping a folio that is covered by
multiple VMAs.

Maybe we can have the rmap removal code not touch the deferred split queue
in the migration path at all, because we know all the PTEs will be converted
to migration entries, and all the migration entries will be converted back
to PTEs regardless of whether try_to_migrate() succeeds or not.
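
Something like the below, just to illustrate the shape of it;
RMAP_SKIP_DEFERRED_SPLIT is a made-up name, and the plumbing to pass such a
hint from the migration path down into __folio_remove_rmap() does not exist
today:

	/* Hypothetical: the caller tells us this unmap is for migration. */
	if (folio_test_large(folio) && folio_test_anon(folio) &&
	    !(rmap_flags & RMAP_SKIP_DEFERRED_SPLIT))
		deferred_split_folio(folio);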

It would be just another band-aid I think :/ Maybe a worthwhile one, not sure.

--
Cheers,

David / dhildenb




