Re: [PATCH] mm,kfence: decouple kfence from page granularity mapping judgement

Thanks Marco!

On 2023/3/9 19:09, Marco Elver wrote:
On Thu, 9 Mar 2023 at 12:04, Zhenhua Huang <quic_zhenhuah@xxxxxxxxxxx> wrote:

Thanks Marco.

On 2023/3/9 18:33, Marco Elver wrote:
On Thu, 9 Mar 2023 at 09:05, Zhenhua Huang <quic_zhenhuah@xxxxxxxxxxx> wrote:

KFENCE only needs its pool to be mapped at page granularity; the previous
check was overly protective, forcing the whole linear map to page
granularity. Decouple KFENCE from that check and do page granularity
mapping for the KFENCE pool only [1].

To implement this, also relocate the KFENCE pool allocation to before the
linear mapping is set up: kfence_alloc_pool() allocates the physical
address, and __kfence_pool is set after the linear mapping has been
established.

LINK: [1] https://lore.kernel.org/linux-arm-kernel/1675750519-1064-1-git-send-email-quic_zhenhuah@xxxxxxxxxxx/T/
Suggested-by: Mark Rutland <mark.rutland@xxxxxxx>
Signed-off-by: Zhenhua Huang <quic_zhenhuah@xxxxxxxxxxx>
---
   arch/arm64/mm/mmu.c      | 24 ++++++++++++++++++++++++
   arch/arm64/mm/pageattr.c |  5 ++---
   include/linux/kfence.h   | 10 ++++++++--
   init/main.c              |  1 -
   mm/kfence/core.c         | 18 ++++++++++++++----
   5 files changed, 48 insertions(+), 10 deletions(-)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 6f9d889..bd79691 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -24,6 +24,7 @@
   #include <linux/mm.h>
   #include <linux/vmalloc.h>
   #include <linux/set_memory.h>
+#include <linux/kfence.h>

   #include <asm/barrier.h>
   #include <asm/cputype.h>
@@ -532,6 +533,9 @@ static void __init map_mem(pgd_t *pgdp)
          phys_addr_t kernel_end = __pa_symbol(__init_begin);
          phys_addr_t start, end;
          int flags = NO_EXEC_MAPPINGS;
+#ifdef CONFIG_KFENCE
+       phys_addr_t kfence_pool = 0;
+#endif
          u64 i;

          /*
@@ -564,6 +568,12 @@ static void __init map_mem(pgd_t *pgdp)
          }
   #endif

+#ifdef CONFIG_KFENCE
+       kfence_pool = kfence_alloc_pool();
+       if (kfence_pool)
+               memblock_mark_nomap(kfence_pool, KFENCE_POOL_SIZE);
+#endif
+
          /* map all the memory banks */
          for_each_mem_range(i, &start, &end) {
                  if (start >= end)
@@ -608,6 +618,20 @@ static void __init map_mem(pgd_t *pgdp)
                  }
          }
   #endif
+
+       /* Kfence pool needs page-level mapping */
+#ifdef CONFIG_KFENCE
+       if (kfence_pool) {
+               __map_memblock(pgdp, kfence_pool,
+                       kfence_pool + KFENCE_POOL_SIZE,
+                       pgprot_tagged(PAGE_KERNEL),
+                       NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
+               memblock_clear_nomap(kfence_pool, KFENCE_POOL_SIZE);
+               /* kfence_pool really mapped now */
+               kfence_set_pool(kfence_pool);
+       }
+#endif
+
   }

   void mark_rodata_ro(void)
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 79dd201..61156d0 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -22,12 +22,11 @@ bool rodata_full __ro_after_init = IS_ENABLED(CONFIG_RODATA_FULL_DEFAULT_ENABLED
   bool can_set_direct_map(void)
   {
          /*
-        * rodata_full, DEBUG_PAGEALLOC and KFENCE require linear map to be
+        * rodata_full and DEBUG_PAGEALLOC require linear map to be
           * mapped at page granularity, so that it is possible to
           * protect/unprotect single pages.
           */
-       return (rodata_enabled && rodata_full) || debug_pagealloc_enabled() ||
-               IS_ENABLED(CONFIG_KFENCE);
+       return (rodata_enabled && rodata_full) || debug_pagealloc_enabled();
   }

   static int change_page_range(pte_t *ptep, unsigned long addr, void *data)
diff --git a/include/linux/kfence.h b/include/linux/kfence.h
index 726857a..0252e74 100644
--- a/include/linux/kfence.h
+++ b/include/linux/kfence.h
@@ -61,7 +61,12 @@ static __always_inline bool is_kfence_address(const void *addr)
   /**
    * kfence_alloc_pool() - allocate the KFENCE pool via memblock
    */
-void __init kfence_alloc_pool(void);
+phys_addr_t __init kfence_alloc_pool(void);
+
+/**
+ * kfence_set_pool() - KFENCE pool mapped and can be used
+ */
+void __init kfence_set_pool(phys_addr_t addr);

   /**
    * kfence_init() - perform KFENCE initialization at boot time
@@ -223,7 +228,8 @@ bool __kfence_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *sla
   #else /* CONFIG_KFENCE */

   static inline bool is_kfence_address(const void *addr) { return false; }
-static inline void kfence_alloc_pool(void) { }
+static inline phys_addr_t kfence_alloc_pool(void) { return (phys_addr_t)NULL; }
+static inline void kfence_set_pool(phys_addr_t addr) { }
   static inline void kfence_init(void) { }
   static inline void kfence_shutdown_cache(struct kmem_cache *s) { }
   static inline void *kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags) { return NULL; }
diff --git a/init/main.c b/init/main.c
index 4425d17..9aaf217 100644
--- a/init/main.c
+++ b/init/main.c
@@ -839,7 +839,6 @@ static void __init mm_init(void)
           */
          page_ext_init_flatmem();
          init_mem_debugging_and_hardening();
-       kfence_alloc_pool();

This breaks other architectures.

Nice catch. Thanks!


          report_meminit();
          kmsan_init_shadow();
          stack_depot_early_init();
diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 5349c37..dd5cdd5 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -809,15 +809,25 @@ static void toggle_allocation_gate(struct work_struct *work)

   /* === Public interface ===================================================== */

-void __init kfence_alloc_pool(void)
+phys_addr_t __init kfence_alloc_pool(void)
   {

You could just return here:

    if (__kfence_pool)
      return; /* Initialized earlier by arch init code. */

Yeah.


... and see my comments below.

+       phys_addr_t kfence_pool;
          if (!kfence_sample_interval)
-               return;
+               return 0;

-       __kfence_pool = memblock_alloc(KFENCE_POOL_SIZE, PAGE_SIZE);
+       kfence_pool = memblock_phys_alloc(KFENCE_POOL_SIZE, PAGE_SIZE);

-       if (!__kfence_pool)
+       if (!kfence_pool) {
                  pr_err("failed to allocate pool\n");
+               return 0;
+       }
+
+       return kfence_pool;
+}
+
+void __init kfence_set_pool(phys_addr_t addr)
+{
+       __kfence_pool = phys_to_virt(addr);
   }

I would suggest leaving kfence_alloc_pool() to return nothing (with
the addition above), and just setting __kfence_pool as before.
__kfence_pool itself is exported by include/linux/kfence.h, so if you
call kfence_alloc_pool() in arm64 earlier, you can access
__kfence_pool to get the allocated pool.
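
Concretely, kfence_alloc_pool() in mm/kfence/core.c would then look
something like this (untested sketch, keeping the original void signature):

void __init kfence_alloc_pool(void)
{
	if (!kfence_sample_interval)
		return;

	if (__kfence_pool)
		return;	/* Initialized earlier by arch init code. */

	__kfence_pool = memblock_alloc(KFENCE_POOL_SIZE, PAGE_SIZE);
	if (!__kfence_pool)
		pr_err("failed to allocate pool\n");
}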

Shall we add a new function like arm64_kfence_alloc_pool()? The reason is
that the linear mapping is not set up at that point, so we must allocate a
physical address via memblock. We can't use the common kfence_alloc_pool().

Ah right - well, you can initialize __kfence_pool however you like
within arm64 init code. Just teaching kfence_alloc_pool() to do
nothing if it's already initialized should be enough. Within
arch/arm64/mm/mmu.c it might be nice to factor out some bits into a
helper like arm64_kfence_alloc_pool(), but I would just stick to
whatever is simplest.
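
Something along these lines in arch/arm64/mm/mmu.c, for example (untested
sketch; this assumes kfence_sample_interval and KFENCE_POOL_SIZE are
visible there via <linux/kfence.h>):

#ifdef CONFIG_KFENCE
/* Allocate the pool early and keep it out of the block-mapped linear map. */
static phys_addr_t __init arm64_kfence_alloc_pool(void)
{
	phys_addr_t kfence_pool;

	if (!kfence_sample_interval)
		return 0;

	kfence_pool = memblock_phys_alloc(KFENCE_POOL_SIZE, PAGE_SIZE);
	if (!kfence_pool) {
		pr_err("failed to allocate KFENCE pool\n");
		return 0;
	}

	memblock_mark_nomap(kfence_pool, KFENCE_POOL_SIZE);
	return kfence_pool;
}
#endif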

Many thanks, Marco. Let me conclude as follows:
1. Put arm64_kfence_alloc_pool() within arch/arm64/mm/mmu.c, as it is
   arch-specific code.
2. Leave kfence_set_pool() to set __kfence_pool within the kfence driver,
   as it may become a common part.

The reason we still need #2 is that __kfence_pool can only be used after
the mapping is set up, so setting it must come later than the pool
allocation; the rough ordering in map_mem() is sketched below. Do you have
any further suggestions?
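
Roughly (illustration only, using the same calls as in the patch above):

#ifdef CONFIG_KFENCE
	kfence_pool = arm64_kfence_alloc_pool();
#endif

	/* ... map all the memory banks ... */

#ifdef CONFIG_KFENCE
	if (kfence_pool) {
		/* Linear map is up; map the pool at page granularity. */
		__map_memblock(pgdp, kfence_pool,
			       kfence_pool + KFENCE_POOL_SIZE,
			       pgprot_tagged(PAGE_KERNEL),
			       NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
		memblock_clear_nomap(kfence_pool, KFENCE_POOL_SIZE);
		/* Only now is __kfence_pool usable. */
		kfence_set_pool(kfence_pool);
	}
#endif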


Thanks,
-- Marco



