On 11/17/22 05:58, Marco Elver wrote:
> [ 0.663761] WARNING: CPU: 0 PID: 0 at arch/x86/include/asm/kfence.h:46 kfence_protect+0x7b/0x120
> [ 0.664033] WARNING: CPU: 0 PID: 0 at mm/kfence/core.c:234 kfence_protect+0x7d/0x120
> [ 0.664465] kfence: kfence_init failed
Any chance you could add some debugging and figure out what actually
made kfence fall over? Was it the pte or the level?
	if (WARN_ON(!pte || level != PG_LEVEL_4K))
		return false;
I can see how the thing you bisected to might lead to a page table not
being split, which could mess with the 'level' check.
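For instance (untested, just a sketch to narrow it down), something
like this in kfence_protect_page() would at least show which half of
the check tripped:

	unsigned int level;
	pte_t *pte = lookup_address(addr, &level);

	if (WARN_ON(!pte || level != PG_LEVEL_4K)) {
		/* Untested: dump what lookup_address() actually returned. */
		pr_err("kfence: addr=%#lx pte=%px level=%d\n",
		       addr, pte, level);
		return false;
	}

If 'level' comes back as PG_LEVEL_2M or PG_LEVEL_1G, that would point
at an unsplit page table rather than a missing mapping.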
Also, is there a reason this code is mucking with the page tables
directly? It seems, uh, rather wonky. This, for instance:
> if (protect)
> set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
> else
> set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));
>
> /*
> * Flush this CPU's TLB, assuming whoever did the allocation/free is
> * likely to continue running on this CPU.
> */
> preempt_disable();
> flush_tlb_one_kernel(addr);
> preempt_enable();
Seems rather broken. I assume the preempt_disable() is there to get rid
of some warnings. But, there is nothing I can see to *keep* the CPU
that did the free on the same CPU where the TLB flush is performed; the
task can be migrated at any point before the preempt_disable(). That
makes the flush_tlb_one_kernel() mostly useless.
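To spell out the interleaving I'm worried about (hypothetical, assuming
the freeing task gets migrated before the flush):

	CPU 0 (does the free)              CPU 1
	---------------------              -----
	kfence_guarded_free()
	  set_pte(... & ~_PAGE_PRESENT)
	<task migrates to CPU 1>
	                                   preempt_disable();
	                                   flush_tlb_one_kernel(addr);
	                                   /* flushes CPU 1's TLB only */
	                                   preempt_enable();
	/* CPU 0 can keep a stale, present TLB entry for addr */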
Is there a reason this code isn't using the existing page table
manipulation functions and instead rolls its own? What prevents it from
using something like the attached patch?
diff --git a/arch/x86/include/asm/kfence.h b/arch/x86/include/asm/kfence.h
index ff5c7134a37a..5cdb3a1f3995 100644
--- a/arch/x86/include/asm/kfence.h
+++ b/arch/x86/include/asm/kfence.h
@@ -37,34 +37,13 @@ static inline bool arch_kfence_init_pool(void)
 	return true;
 }
 
-/* Protect the given page and flush TLB. */
 static inline bool kfence_protect_page(unsigned long addr, bool protect)
 {
-	unsigned int level;
-	pte_t *pte = lookup_address(addr, &level);
-
-	if (WARN_ON(!pte || level != PG_LEVEL_4K))
-		return false;
-
-	/*
-	 * We need to avoid IPIs, as we may get KFENCE allocations or faults
-	 * with interrupts disabled. Therefore, the below is best-effort, and
-	 * does not flush TLBs on all CPUs. We can tolerate some inaccuracy;
-	 * lazy fault handling takes care of faults after the page is PRESENT.
-	 */
-
 	if (protect)
-		set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
+		set_memory_np(addr, 1);
 	else
-		set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));
+		set_memory_p(addr, 1);
 
-	/*
-	 * Flush this CPU's TLB, assuming whoever did the allocation/free is
-	 * likely to continue running on this CPU.
-	 */
-	preempt_disable();
-	flush_tlb_one_kernel(addr);
-	preempt_enable();
 	return true;
 }