On 30.07.24 23:00, David Hildenbrand wrote:
> On 30.07.24 22:43, James Houghton wrote:
>> On Tue, Jul 30, 2024 at 1:03 PM David Hildenbrand <david@xxxxxxxxxx> wrote:
>>> diff --git a/include/linux/mm.h b/include/linux/mm.h
>>> index b100df8cb5857..1b1f40ff00b7d 100644
>>> --- a/include/linux/mm.h
>>> +++ b/include/linux/mm.h
>>> @@ -2926,6 +2926,12 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
>>>          return ptlock_ptr(page_ptdesc(pmd_page(*pmd)));
>>>  }
>>> +static inline spinlock_t *ptep_lockptr(struct mm_struct *mm, pte_t *pte)
>>> +{
>>> +        BUILD_BUG_ON(IS_ENABLED(CONFIG_HIGHPTE));
>>> +        return ptlock_ptr(virt_to_ptdesc(pte));
>>
>> Hi David,
>
> Hi!
>
>> Small question: ptep_lockptr() does not handle the case where the size
>> of the PTE table is larger than PAGE_SIZE, but pmd_lockptr() does.
>
> I thought I convinced myself that leaf page tables are always single
> pages and had a comment in v1.
>
> But now I have to double-check again, and staring at
> pagetable_pte_ctor() callers I am left confused.
>
> It certainly sounds more future proof to just align the pointer down to
> the start of the PTE table like pmd_lockptr() would.
>
>> IIUC, for pte_lockptr() and ptep_lockptr() to return the same result
>> in this case, ptep_lockptr() should be doing the masking that
>> pmd_lockptr() is doing. Are you sure that you don't need to be doing
>> it? (Or maybe I am misunderstanding something.)
It's a valid concern even if it would not be required. But I'm afraid I
won't dig into the details and will simply do the alignment in a v3.
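
That mirrors what the PMD side already does. For reference, pmd_lockptr()
and friends look roughly like the following (paraphrased from memory of
include/linux/mm.h, so the exact code there may differ in details):

static inline struct page *pmd_pgtable_page(pmd_t *pmd)
{
        /* Align the pmd pointer down to the start of the PMD table. */
        unsigned long mask = ~(PTRS_PER_PMD * sizeof(pmd_t) - 1);

        return virt_to_page((void *)((unsigned long)pmd & mask));
}

static inline struct ptdesc *pmd_ptdesc(pmd_t *pmd)
{
        return page_ptdesc(pmd_pgtable_page(pmd));
}

static inline spinlock_t *pmd_lockptr(struct mm_struct *mm, pmd_t *pmd)
{
        return ptlock_ptr(pmd_ptdesc(pmd));
}
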
To be precise, the following on top:
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 1b1f40ff00b7d..f6c7fe8f5746f 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2926,10 +2926,22 @@ static inline spinlock_t *pte_lockptr(struct mm_struct *mm, pmd_t *pmd)
         return ptlock_ptr(page_ptdesc(pmd_page(*pmd)));
 }
 
-static inline spinlock_t *ptep_lockptr(struct mm_struct *mm, pte_t *pte)
+static inline struct page *ptep_pgtable_page(pte_t *pte)
 {
+        unsigned long mask = ~(PTRS_PER_PTE * sizeof(pte_t) - 1);
+
         BUILD_BUG_ON(IS_ENABLED(CONFIG_HIGHPTE));
-        return ptlock_ptr(virt_to_ptdesc(pte));
+        return virt_to_page((void *)((unsigned long)pte & mask));
+}
+
+static inline struct ptdesc *ptep_ptdesc(pte_t *pte)
+{
+        return page_ptdesc(ptep_pgtable_page(pte));
+}
+
+static inline spinlock_t *ptep_lockptr(struct mm_struct *mm, pte_t *pte)
+{
+        return ptlock_ptr(ptep_ptdesc(pte));
 }

virt_to_ptdesc() really is of limited use in core-mm code, it seems ...
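
Just to spell out what the masking buys us over a plain virt_to_ptdesc()
(example numbers only, assuming 4 KiB pages, PTRS_PER_PTE == 512 and
sizeof(pte_t) == 8; other configs will differ):

/*
 * table size = PTRS_PER_PTE * sizeof(pte_t) = 512 * 8 = 4096
 * mask       = ~(4096 - 1)                  = ~0xfffUL
 *
 * pte        = 0xffff888001234568   (some entry within the table)
 * pte & mask = 0xffff888001234000   (start of the PTE table)
 *
 * With a hypothetical two-page (8 KiB) PTE table starting at
 * 0xffff888001234000, the mask becomes ~0x1fffUL, so a pointer into the
 * second page, e.g. 0xffff888001235568, still gets aligned down to
 * 0xffff888001234000 -- the head page whose ptdesc holds the ptlock --
 * whereas virt_to_ptdesc(pte) would have picked the second page.
 */
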
--
Cheers,
David / dhildenb