On 09.08.24 19:13, Vincent Donnefort wrote:
Hi,
Sorry for reviving this thread, but I have run into something weird:
On Wed, Dec 20, 2023 at 11:44:32PM +0100, David Hildenbrand wrote:
Let's convert insert_page_into_pte_locked() and do_set_pmd(). While at it,
perform some folio conversion.
Reviewed-by: Yin Fengwei <fengwei.yin@xxxxxxxxx>
Reviewed-by: Ryan Roberts <ryan.roberts@xxxxxxx>
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
---
mm/memory.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 7f957e5a84311..c77d3952d261f 100644
--- a/mm/memory.c
+++ b/mm/memory.c
[...]
vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
{
+ struct folio *folio = page_folio(page);
struct vm_area_struct *vma = vmf->vma;
bool write = vmf->flags & FAULT_FLAG_WRITE;
unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
@@ -4418,8 +4421,7 @@ vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
if (!thp_vma_suitable_order(vma, haddr, PMD_ORDER))
return ret;
- page = compound_head(page);
- if (compound_order(page) != HPAGE_PMD_ORDER)
+ if (page != &folio->page || folio_order(folio) != HPAGE_PMD_ORDER)
return ret;
Is this `page != &folio->page` check expected? I believe it wasn't there
before, since we had `page = compound_head(page)`.
It breaks the installation of a PMD-level mapping for shmem when the fault
address falls in the middle of that PMD-sized block. In its fault path, shmem sets
    vmf->page = folio_file_page(folio, vmf->pgoff)
which, for such a fault address, is a tail page and therefore fails the check above.
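To make the failure mode concrete, here is a small userspace model (not kernel
code) of that check, assuming a PMD-sized folio of 512 4K pages; struct page,
struct folio and folio_file_page() below are simplified stand-ins for the real
kernel structures and helper:

	/* Userspace model of the head-page check in do_set_pmd(). */
	#include <stdio.h>
	#include <stdbool.h>

	#define HPAGE_PMD_NR 512	/* 2M folio / 4K pages */

	struct page { int dummy; };
	struct folio { struct page pages[HPAGE_PMD_NR]; };

	/* Model of folio_file_page(): the page at that offset within the folio. */
	static struct page *folio_file_page(struct folio *folio, unsigned long pgoff)
	{
		return &folio->pages[pgoff % HPAGE_PMD_NR];
	}

	/* Model of the new check: reject anything but the head page. */
	static bool rejected_by_new_check(struct folio *folio, struct page *page)
	{
		return page != &folio->pages[0];	/* page != &folio->page */
	}

	int main(void)
	{
		struct folio folio;
		/* Fault in the middle of the 2M block: shmem hands over a tail page. */
		struct page *tail = folio_file_page(&folio, 37);
		struct page *head = &folio.pages[0];

		printf("tail page rejected: %d\n", rejected_by_new_check(&folio, tail));
		printf("head page rejected: %d\n", rejected_by_new_check(&folio, head));
		return 0;
	}

With the old `page = compound_head(page)` the tail page would have been
normalized to the head before the order check, so the PMD mapping still got
installed.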
Already fixed? :)
commit ab1ffc86cb5bec1c92387b9811d9036512f8f4eb (tag: mm-hotfixes-stable-2024-06-26-17-28)
Author: Andrew Bresticker <abrestic@xxxxxxxxxxxx>
Date:   Tue Jun 11 08:32:16 2024 -0700

    mm/memory: don't require head page for do_set_pmd()
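For the archives, the subject says it all: the fix drops the head-page
requirement. A rough sketch of the idea (not necessarily the exact diff of
that commit) is to keep only the order check and normalize to the head page
explicitly:

	/* Sketch only; see commit ab1ffc86cb5b for the actual change. */
	if (folio_order(folio) != HPAGE_PMD_ORDER)
		return ret;
	/* Map the whole folio, regardless of which page of it faulted. */
	page = &folio->page;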
--
Cheers,
David / dhildenb