On 26.08.24 22:43, Peter Xu wrote:
Teach folio_walk_start() to recognize special pmd/pud mappings, and fail
on them properly, as there is no folio backing them.
Cc: David Hildenbrand <david@xxxxxxxxxx>
Signed-off-by: Peter Xu <peterx@xxxxxxxxxx>
---
mm/pagewalk.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index cd79fb3b89e5..12be5222d70e 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -753,7 +753,7 @@ struct folio *folio_walk_start(struct folio_walk *fw,
fw->pudp = pudp;
fw->pud = pud;
- if (!pud_present(pud) || pud_devmap(pud)) {
+ if (!pud_present(pud) || pud_devmap(pud) || pud_special(pud)) {
spin_unlock(ptl);
goto not_found;
} else if (!pud_leaf(pud)) {
@@ -783,7 +783,7 @@ struct folio *folio_walk_start(struct folio_walk *fw,
fw->pmdp = pmdp;
fw->pmd = pmd;
- if (pmd_none(pmd)) {
+ if (pmd_none(pmd) || pmd_special(pmd)) {
spin_unlock(ptl);
goto not_found;
} else if (!pmd_leaf(pmd)) {
As raised, this is not the right way to do it. You should follow what
CONFIG_ARCH_HAS_PTE_SPECIAL and vm_normal_page() do.
It's even spelled out in the comment in vm_normal_page_pmd(): at the time
it was introduced there was no pmd_special(), so special pmds could not be
detected the same way.
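For reference, the CONFIG_ARCH_HAS_PTE_SPECIAL branch in vm_normal_page()
reads roughly like this (simplified):

	if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL)) {
		if (likely(!pte_special(pte)))
			goto check_pfn;
		if (vma->vm_ops && vma->vm_ops->find_special_page)
			return vma->vm_ops->find_special_page(vma, addr);
		if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
			return NULL;
		if (is_zero_pfn(pfn))
			return NULL;
		if (pte_devmap(pte))
			/* See the comment in pte_devmap(). */
			return NULL;

		print_bad_pte(vma, addr, pte, NULL);
		return NULL;
	}

Applying the same pattern at the pmd level would give us something like: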
diff --git a/mm/memory.c b/mm/memory.c
index f0cf5d02b4740..272445e9db147 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -672,15 +672,29 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
{
unsigned long pfn = pmd_pfn(pmd);
- /*
- * There is no pmd_special() but there may be special pmds, e.g.
- * in a direct-access (dax) mapping, so let's just replicate the
- * !CONFIG_ARCH_HAS_PTE_SPECIAL case from vm_normal_page() here.
- */
+ if (IS_ENABLED(CONFIG_ARCH_HAS_PMD_SPECIAL)) {
+ if (likely(!pmd_special(pmd)))
+ goto check_pfn;
+ if (vma->vm_ops && vma->vm_ops->find_special_page)
+ return vma->vm_ops->find_special_page(vma, addr);
+ if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
+ return NULL;
+ if (is_huge_zero_pmd(pmd))
+ return NULL;
+ if (pmd_devmap(pmd))
+ /* See vm_normal_page() */
+ return NULL;
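+ /* any other special pmd has no struct page behind it */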
+ return NULL;
+ }
+
+ /* !CONFIG_ARCH_HAS_PMD_SPECIAL case follows: */
+
if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
if (vma->vm_flags & VM_MIXEDMAP) {
if (!pfn_valid(pfn))
return NULL;
+ if (is_huge_zero_pmd(pmd))
+ return NULL;
goto out;
} else {
unsigned long off;
@@ -692,6 +706,11 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
}
}
+ /*
+ * For historical reasons, these might not have pmd_special() set,
+ * so we'll check them manually, in contrast to vm_normal_page().
+ */
+check_pfn:
if (pmd_devmap(pmd))
return NULL;
if (is_huge_zero_pmd(pmd))
We should then look into mapping huge zeropages with pmd_special() set as
well. pmd_devmap() we'll leave alone until it gets removed; but that's
independent of your series.
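That is, set_huge_zero_folio() in mm/huge_memory.c would mark the mapping
special at install time, something like (untested sketch, assuming the
pmd_mkspecial() helper from your series):

	entry = mk_pmd(&zero_folio->page, vma->vm_page_prot);
	entry = pmd_mkhuge(entry);
	entry = pmd_mkspecial(entry);	/* huge zeropage: no "normal" page */
	set_pmd_at(mm, haddr, pmd, entry);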
I wonder if CONFIG_ARCH_HAS_PTE_SPECIAL is sufficient here, so that we
don't need an additional CONFIG_ARCH_HAS_PMD_SPECIAL.
As I said, if you need someone to add vm_normal_page_pud(), I can handle that.
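It would essentially be vm_normal_page_pmd() with s/pmd/pud/, roughly
(untested sketch):

struct page *vm_normal_page_pud(struct vm_area_struct *vma,
		unsigned long addr, pud_t pud)
{
	unsigned long pfn = pud_pfn(pud);

	if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
		if (vma->vm_flags & VM_MIXEDMAP) {
			if (!pfn_valid(pfn))
				return NULL;
			goto out;
		} else {
			unsigned long off;

			off = (addr - vma->vm_start) >> PAGE_SHIFT;
			if (pfn == vma->vm_pgoff + off)
				return NULL;
			if (!is_cow_mapping(vma->vm_flags))
				return NULL;
		}
	}

	if (pud_devmap(pud))
		return NULL;
	if (unlikely(pfn > highest_memmap_pfn))
		return NULL;
out:
	return pfn_to_page(pfn);
}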
--
Cheers,
David / dhildenb