On 04.07.24 06:30, Oscar Salvador wrote:
HugeTLB pages can be cont-pmd mapped, so teach walk_pmd_range() to
handle those.
This will save us some cycles, as we process the whole cont-pmd range
in one shot instead of iterating over each pmd individually.
Signed-off-by: Oscar Salvador <osalvador@xxxxxxx>
---
 include/linux/pgtable.h | 12 ++++++++++++
 mm/pagewalk.c           | 12 +++++++++---
 2 files changed, 21 insertions(+), 3 deletions(-)
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 2a6a3cccfc36..3a7b8751747e 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -1914,6 +1914,18 @@ typedef unsigned int pgtbl_mod_mask;
 #define __pte_leaf_size(x,y) pte_leaf_size(y)
 #endif
 
+#ifndef pmd_cont
+#define pmd_cont(x) false
+#endif
+
+#ifndef CONT_PMD_SIZE
+#define CONT_PMD_SIZE 0
+#endif
+
+#ifndef CONT_PMDS
+#define CONT_PMDS 0
+#endif
+
 /*
  * We always define pmd_pfn for all archs as it's used in lots of generic
  * code. Now it happens too for pud_pfn (and can happen for larger
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index d93e77411482..a9c36f9e9820 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -81,11 +81,18 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
 	const struct mm_walk_ops *ops = walk->ops;
 	int err = 0;
 	int depth = real_depth(3);
+	int cont_pmds;
 
 	pmd = pmd_offset(pud, addr);
 	do {
 again:
-		next = pmd_addr_end(addr, end);
+		if (pmd_cont(*pmd)) {
+			cont_pmds = CONT_PMDS;
+			next = pmd_cont_addr_end(addr, end);
+		} else {
+			cont_pmds = 1;
+			next = pmd_addr_end(addr, end);
+		}
 		if (pmd_none(*pmd)) {
 			if (ops->pte_hole)
 				err = ops->pte_hole(addr, next, depth, walk);
@@ -126,8 +133,7 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
 		if (walk->action == ACTION_AGAIN)
 			goto again;
-
-	} while (pmd++, addr = next, addr != end);
+	} while (pmd += cont_pmds, addr = next, addr != end);
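
(For context: pmd_cont() and CONT_PMDS above are the arm64 cont-pmd
helpers. pmd_cont_addr_end() is not part of this patch and presumably
comes from earlier in the series; a rough, untested sketch of what it
would look like, mirroring the generic pmd_addr_end():

/*
 * Sketch only, not taken from the series. Assumes CONT_PMD_MASK is
 * ~(CONT_PMD_SIZE - 1), as on arm64.
 */
#define pmd_cont_addr_end(addr, end)					\
({	unsigned long __boundary = ((addr) + CONT_PMD_SIZE) & CONT_PMD_MASK; \
	(__boundary - 1 < (end) - 1) ? __boundary : (end);		\
})
)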
Similar to my other comment regarding PTE batching, this is very
specific to architectures that support cont-pmds.
Yes, right now we only have that on architectures that support
cont-pmd-sized hugetlb, but Willy is interested in us supporting+mapping
folios > PMD_SIZE, in which case we'd want to batch even without
arch-specific cont-pmd bits.
Similar to the other (pte) case, having a way to generically batch
folios would be more beneficial. Note that cont-pmd/cont-pte is only
relevant for present entries (-> mapping folios).
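
Completely untested sketch of what I mean, just to make the idea
concrete -- pmd_folio_addr_end() is a made-up name, and it assumes
pmd_folio() plus naturally aligned mappings (which hugetlb guarantees):

/*
 * Untested sketch: derive the batch size from the folio mapped by a
 * present leaf entry rather than from arch-specific cont-pmd bits.
 */
static unsigned long pmd_folio_addr_end(pmd_t *pmd, unsigned long addr,
					unsigned long end)
{
	struct folio *folio;
	unsigned long boundary;

	/* Batching is only meaningful for present leaf entries. */
	if (!pmd_present(*pmd) || !pmd_leaf(*pmd))
		return pmd_addr_end(addr, end);

	folio = pmd_folio(*pmd);
	if (folio_size(folio) <= PMD_SIZE)
		return pmd_addr_end(addr, end);

	/* End of the naturally aligned folio mapping, clamped to the range. */
	boundary = ALIGN_DOWN(addr, folio_size(folio)) + folio_size(folio);
	return min(boundary, end);
}

The loop would then advance by DIV_ROUND_UP(next - addr, PMD_SIZE)
entries instead of relying on CONT_PMDS, and would work for any folio
size > PMD_SIZE.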
--
Cheers,
David / dhildenb