On 11.06.24 23:18, Hugh Dickins wrote:
On Tue, 11 Jun 2024, Andrew Morton wrote:
On Tue, 11 Jun 2024 19:22:03 +0100 Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
On Tue, Jun 11, 2024 at 11:06:22AM -0700, Andrew Morton wrote:
On Tue, 11 Jun 2024 17:33:17 +0200 David Hildenbrand <david@xxxxxxxxxx> wrote:
On 11.06.24 17:32, Andrew Bresticker wrote:
The requirement that the head page be passed to do_set_pmd() was added
in commit ef37b2ea08ac ("mm/memory: page_add_file_rmap() ->
folio_add_file_rmap_[pte|pmd]()") and prevents pmd-mapping in the
finish_fault() and filemap_map_pages() paths if the page to be inserted
is anything but the head page, even when the vma and the pmd-sized page
would otherwise be suitable.
Fixes: ef37b2ea08ac ("mm/memory: page_add_file_rmap() -> folio_add_file_rmap_[pte|pmd]()")
Signed-off-by: Andrew Bresticker <abrestic@xxxxxxxxxxxx>
---
mm/memory.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index 0f47a533014e..a1fce5ddacb3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4614,8 +4614,9 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
if (!thp_vma_suitable_order(vma, haddr, PMD_ORDER))
return ret;
- if (page != &folio->page || folio_order(folio) != HPAGE_PMD_ORDER)
+ if (folio_order(folio) != HPAGE_PMD_ORDER)
return ret;
+ page = &folio->page;
/*
* Just backoff if any subpage of a THP is corrupted otherwise
Acked-by: David Hildenbrand <david@xxxxxxxxxx>
Acked-by: Hugh Dickins <hughd@xxxxxxxxxx>
You know what I'm going to ask ;) I'm assuming that the runtime effects
are "small performance optimization" and that "should we backport the
fix" is "no".
We're going to stop using PMDs to map large folios unless the fault is
within the first 4KiB of the PMD. No idea how many workloads that
affects, but it only needs to be backported as far as v6.8, so we
may as well backport it.
OK, thanks, I pasted the above text and added the cc:stable.
Yes please. My interest in this being that yesterday I discovered
the large drop in ShmemPmdMapped between v6.7 and v6.8, bisected,
and was testing overnight with a patch very much like this one of
Andrew's. I'd been hoping to send mine today, but now no need.
I didn't move it into the hotfixes queue - it's a non-trivial
behavioral change and extra test time seems prudent(?).
It is certainly worth some test soak time, and the bug might have
been masking other issues which may now emerge; but the fix is
just reverting to the old pre-v6.8 behaviour.
Right, I don't expect surprises, really. I'm rather surprised that
nobody noticed and that the usual 0-day benchmarks don't trigger that case.
--
Cheers,
David / dhildenb