Hi Marco and Dmitry, any comments about the following reply?
Thanks.
On 2021/7/6 12:07, Kefeng Wang wrote:
Hi Marco and Dmitry,
On 2021/7/5 23:04, Marco Elver wrote:
On Mon, Jul 05, 2021 at 07:14PM +0800, Kefeng Wang wrote:
[...]
+#ifdef CONFIG_KASAN_VMALLOC
+void __init __weak kasan_populate_early_vm_area_shadow(void *start,
+ unsigned long size)
This should probably not be __weak, otherwise you now have 2 __weak
functions.
Indeed, I forgot to drop it.
+{
+ unsigned long shadow_start, shadow_end;
+
+ if (!is_vmalloc_or_module_addr(start))
+ return;
+
+ shadow_start = (unsigned long)kasan_mem_to_shadow(start);
+ shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE);
+ shadow_end = (unsigned long)kasan_mem_to_shadow(start + size);
+ shadow_end = ALIGN(shadow_end, PAGE_SIZE);
+ kasan_map_populate(shadow_start, shadow_end,
+ early_pfn_to_nid(virt_to_pfn(start)));
+}
+#endif
This function looks quite generic -- would any of this also apply to
other architectures? I see that ppc and sparc at least also define
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK.
I can't test ppc/sparc, and of those only ppc supports KASAN_VMALLOC.
I checked x86: it also supports
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK,
so it looks like this issue exists on x86 and ppc as well.
void __init kasan_init(void)
{
kasan_init_shadow();
diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index 5310e217bd74..79d3895b0240 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -49,6 +49,8 @@ extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
int kasan_populate_early_shadow(const void *shadow_start,
const void *shadow_end);
+void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
+
static inline void *kasan_mem_to_shadow(const void *addr)
{
return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
diff --git a/mm/kasan/init.c b/mm/kasan/init.c
index cc64ed6858c6..d39577d088a1 100644
--- a/mm/kasan/init.c
+++ b/mm/kasan/init.c
@@ -279,6 +279,11 @@ int __ref kasan_populate_early_shadow(const void *shadow_start,
return 0;
}
+void __init __weak kasan_populate_early_vm_area_shadow(void *start,
+ unsigned long size)
+{
+}
I'm just wondering if this could be a generic function, perhaps with an
appropriate IS_ENABLED() check of a generic Kconfig option
(CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK ?) to short-circuit it, if it's
not only an arm64 problem.
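For illustration, a rough sketch of what such a generic helper could look like; the IS_ENABLED() guard is an assumption here (it is not part of the posted patch), and the final kasan_map_populate() call is still the arm64-specific step, which, as the reply below notes, is what stands in the way of a fully generic version:

void __init kasan_populate_early_vm_area_shadow(void *start,
						unsigned long size)
{
	unsigned long shadow_start, shadow_end;

	/* Only needed when the per-cpu first chunk is page-mapped. */
	if (!IS_ENABLED(CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK))
		return;

	if (!is_vmalloc_or_module_addr(start))
		return;

	shadow_start = (unsigned long)kasan_mem_to_shadow(start);
	shadow_start = ALIGN_DOWN(shadow_start, PAGE_SIZE);
	shadow_end = (unsigned long)kasan_mem_to_shadow(start + size);
	shadow_end = ALIGN(shadow_end, PAGE_SIZE);

	/*
	 * Arch-specific: arm64's kasan_map_populate(); other
	 * architectures would need their own equivalent here.
	 */
	kasan_map_populate(shadow_start, shadow_end,
			   early_pfn_to_nid(virt_to_pfn(start)));
}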
kasan_map_populate() is an arm64-specific function; x86 has kasan_shallow_populate_pgds()
and ppc has kasan_init_shadow_page_tables(), so those architectures should each do the
same thing in their own way, like ARM64. We can't use kasan_populate_early_shadow() here:
that function maps everything in the early shadow to a single page of zeroes
(kasan_early_shadow_page) and sets it pte_wrprotect, see
zero_pte_populate(), right?
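For reference, this is (abridged, quoted from the kernel tree of that era) how zero_pte_populate() in mm/kasan/init.c builds those mappings; note the pte_wrprotect():

static void __init zero_pte_populate(pmd_t *pmd, unsigned long addr,
				unsigned long end)
{
	pte_t *pte = pte_offset_kernel(pmd, addr);
	pte_t zero_pte;

	/* Every PTE points at the same zero page, read-only. */
	zero_pte = pfn_pte(PFN_DOWN(__pa(kasan_early_shadow_page)),
				PAGE_KERNEL);
	zero_pte = pte_wrprotect(zero_pte);

	while (addr + PAGE_SIZE <= end) {
		set_pte_at(&init_mm, addr, pte, zero_pte);
		addr += PAGE_SIZE;
		pte = pte_offset_kernel(pmd, addr);
	}
}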
Also, I tried this: it crashes on ARM64 when kasan_map_populate() is changed to kasan_populate_early_shadow(), since kasan_unpoison() then memset()s shadow memory that is the write-protected zero page:
Unable to handle kernel write to read-only memory at virtual address ffff700002938000
...
Call trace:
__memset+0x16c/0x1c0
kasan_unpoison+0x34/0x6c
kasan_unpoison_vmalloc+0x2c/0x3c
__get_vm_area_node.constprop.0+0x13c/0x240
__vmalloc_node_range+0xf4/0x4f0
__vmalloc_node+0x80/0x9c
init_IRQ+0xe8/0x130
start_kernel+0x188/0x360
__primary_switched+0xc0/0xc8
But I haven't looked much further, so I would appeal to you to either
confirm or reject this idea.
Thanks,
-- Marco