On 2023/12/20 14:32, Muchun Song wrote:
On Dec 20, 2023, at 13:18, Nanyong Sun <sunnanyong@xxxxxxxxxx> wrote:
Implement vmemmap_update_pmd and vmemmap_update_pte on arm64 to do the
BBM (break-before-make) logic when changing page table entries for
vmemmap addresses; both run under init_mm.page_table_lock.
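For illustration only, a minimal sketch of what such a BBM update under
init_mm.page_table_lock could look like (this is not the actual
arch/arm64/mm/mmu.c hunk from the patch; the clear/flush/set sequence
below is only an assumption about the shape of the helper):

#include <linux/mm.h>		/* init_mm, pte_clear(), set_pte_at() */
#include <asm/tlbflush.h>	/* flush_tlb_kernel_range() */

/*
 * Sketch only: break-before-make for a single vmemmap PTE.  The caller
 * is assumed to hold init_mm.page_table_lock for the whole sequence so
 * that the vmemmap fault handler can wait on it (see below).
 */
void vmemmap_update_pte(unsigned long addr, pte_t *ptep, pte_t pte)
{
	/* Break: invalidate the old entry first. */
	pte_clear(&init_mm, addr, ptep);

	/* Flush the stale TLB entry before installing the new one. */
	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

	/* Make: install the new entry. */
	set_pte_at(&init_mm, addr, ptep, pte);
}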
If a translation fault on a vmemmap address happens concurrently after
the pte/pmd has been cleared, the vmemmap page fault handler acquires
init_mm.page_table_lock to wait for the vmemmap update to complete; by
then the virtual address is valid again, so the fault handler can return
and the access can continue.
In all other cases, do the traditional kernel fault handling.
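A rough sketch of how the fault side could serialise against that BBM
window (again, not the actual arch/arm64/mm/fault.c hunk from the patch;
the function name and the exact checks are assumptions):

#include <linux/mm.h>		/* init_mm, spin_lock()/spin_unlock() */
#include <asm/memory.h>		/* VMEMMAP_START, VMEMMAP_END */

/*
 * Sketch only: called from the kernel fault path on a translation
 * fault (the new ESR_ELx_FSC_FAULT_L0..L3 values in the esr.h hunk
 * below let the real handler check the fault level from the ESR).
 */
static bool vmemmap_wait_for_update(unsigned long addr)
{
	if (addr < VMEMMAP_START || addr >= VMEMMAP_END)
		return false;	/* not vmemmap, do the normal kernel fault */

	/*
	 * The updater clears and repopulates the entry while holding
	 * init_mm.page_table_lock, so acquiring it here waits out the
	 * BBM window; once we own the lock the entry is valid again
	 * and the faulting access can simply be retried.
	 */
	spin_lock(&init_mm.page_table_lock);
	spin_unlock(&init_mm.page_table_lock);

	return true;
}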
Implement vmemmap_flush_tlb_all/range on arm64 as no-ops, because the
TLB is already flushed in every single BBM operation.
Signed-off-by: Nanyong Sun <sunnanyong@xxxxxxxxxx>
---
arch/arm64/include/asm/esr.h | 4 ++
arch/arm64/include/asm/mmu.h | 20 +++++++++
arch/arm64/mm/fault.c | 78 ++++++++++++++++++++++++++++++++++--
arch/arm64/mm/mmu.c | 28 +++++++++++++
4 files changed, 127 insertions(+), 3 deletions(-)
diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index ae35939f395b..1c63256efd25 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -116,6 +116,10 @@
#define ESR_ELx_FSC_SERROR (0x11)
#define ESR_ELx_FSC_ACCESS (0x08)
#define ESR_ELx_FSC_FAULT (0x04)
+#define ESR_ELx_FSC_FAULT_L0 (0x04)
+#define ESR_ELx_FSC_FAULT_L1 (0x05)
+#define ESR_ELx_FSC_FAULT_L2 (0x06)
+#define ESR_ELx_FSC_FAULT_L3 (0x07)
#define ESR_ELx_FSC_PERM (0x0C)
#define ESR_ELx_FSC_SEA_TTW0 (0x14)
#define ESR_ELx_FSC_SEA_TTW1 (0x15)
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 2fcf51231d6e..b553bc37c925 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -76,5 +76,25 @@ extern bool kaslr_requires_kpti(void);
#define INIT_MM_CONTEXT(name) \
.pgd = init_pg_dir,
+#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
+void vmemmap_update_pmd(unsigned long addr, pmd_t *pmdp, pte_t *ptep);
+#define vmemmap_update_pmd vmemmap_update_pmd
+void vmemmap_update_pte(unsigned long addr, pte_t *ptep, pte_t pte);
+#define vmemmap_update_pte vmemmap_update_pte
+
+static inline void vmemmap_flush_tlb_all(void)
+{
+ /* do nothing, already flushed tlb in every single BBM */
+}
+#define vmemmap_flush_tlb_all vmemmap_flush_tlb_all
+
+static inline void vmemmap_flush_tlb_range(unsigned long start,
+ unsigned long end)
+{
+ /* do nothing, already flushed tlb in every single BBM */
+}
+#define vmemmap_flush_tlb_range vmemmap_flush_tlb_range
+#endif
I think those declarations related to TLB flushing should be moved to
arch/arm64/include/asm/tlbflush.h, since we do not include <asm/mmu.h>
explicitly in hugetlb_vmemmap.c and their functionality is to flush the
TLB. Luckily, <asm/tlbflush.h> is already included by hugetlb_vmemmap.c.
Additionally, the vmemmap_update_pmd/pte helpers should be moved to
arch/arm64/include/asm/pgtable.h, since they are really pgtable-related
operations.
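E.g., roughly this placement (illustrative only, just the declarations
from the hunk above moved to the suggested headers):

/* arch/arm64/include/asm/pgtable.h */
#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
void vmemmap_update_pmd(unsigned long addr, pmd_t *pmdp, pte_t *ptep);
#define vmemmap_update_pmd vmemmap_update_pmd
void vmemmap_update_pte(unsigned long addr, pte_t *ptep, pte_t pte);
#define vmemmap_update_pte vmemmap_update_pte
#endif

/* arch/arm64/include/asm/tlbflush.h */
#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
static inline void vmemmap_flush_tlb_all(void)
{
	/* nothing to do, the TLB is already flushed in every single BBM */
}
#define vmemmap_flush_tlb_all vmemmap_flush_tlb_all
#endif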
Thanks.
Yes, I will move them in the next version.
Thanks for your time.