> On Thu, Feb 04, 2021 at 10:46:12PM +0800, Lecopzer Chen wrote:
> > > On Sat, Jan 09, 2021 at 06:32:49PM +0800, Lecopzer Chen wrote:
> > > > Linux has supported KASAN for the vmalloc area since commit
> > > > 3c5c3cfb9ef4da9 ("kasan: support backing vmalloc space with
> > > > real shadow memory").
> > > >
> > > > As with MODULES_VADDR, simply avoid early-populating the shadow
> > > > for the region between VMALLOC_START and VMALLOC_END. Similarly,
> > > > the kernel code mapping now lives in the VMALLOC area, and its
> > > > shadow should stay populated.
> > > >
> > > > Signed-off-by: Lecopzer Chen <lecopzer.chen@xxxxxxxxxxxx>
> > > > ---
> > > >  arch/arm64/mm/kasan_init.c | 23 ++++++++++++++++++-----
> > > >  1 file changed, 18 insertions(+), 5 deletions(-)
> > > >
> > > > diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> > > > index d8e66c78440e..39b218a64279 100644
> > > > --- a/arch/arm64/mm/kasan_init.c
> > > > +++ b/arch/arm64/mm/kasan_init.c
> > > > @@ -214,6 +214,7 @@ static void __init kasan_init_shadow(void)
> > > >  {
> > > >  	u64 kimg_shadow_start, kimg_shadow_end;
> > > >  	u64 mod_shadow_start, mod_shadow_end;
> > > > +	u64 vmalloc_shadow_start, vmalloc_shadow_end;
> > > >  	phys_addr_t pa_start, pa_end;
> > > >  	u64 i;
> > > >
> > > > @@ -223,6 +224,9 @@ static void __init kasan_init_shadow(void)
> > > >  	mod_shadow_start = (u64)kasan_mem_to_shadow((void *)MODULES_VADDR);
> > > >  	mod_shadow_end = (u64)kasan_mem_to_shadow((void *)MODULES_END);
> > > >
> > > > +	vmalloc_shadow_start = (u64)kasan_mem_to_shadow((void *)VMALLOC_START);
> > > > +	vmalloc_shadow_end = (u64)kasan_mem_to_shadow((void *)VMALLOC_END);
> > > > +
> > > >  	/*
> > > >  	 * We are going to perform proper setup of shadow memory.
> > > >  	 * At first we should unmap early shadow (clear_pgds() call below).
> > > > @@ -241,12 +245,21 @@ static void __init kasan_init_shadow(void)
> > > >
> > > >  	kasan_populate_early_shadow(kasan_mem_to_shadow((void *)PAGE_END),
> > > >  				   (void *)mod_shadow_start);
> > > > -	kasan_populate_early_shadow((void *)kimg_shadow_end,
> > > > -				   (void *)KASAN_SHADOW_END);
> > > > +	if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
> > >
> > > Do we really need yet another CONFIG option for KASAN? What's the
> > > use-case for *not* enabling this if you're already enabling one of
> > > the KASAN backends?
> >
> > As far as I know, KASAN_VMALLOC currently only supports KASAN_GENERIC,
> > and KASAN_VMALLOC also uses more memory to map real shadow memory
> > (1/8 of the vmalloc VA).
>
> The shadow is allocated dynamically though, isn't it?

Yes, but it's still a cost.
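For reference, the 1/8 ratio comes from generic KASAN mapping every 8
bytes of address space to one shadow byte. A sketch of the mapping as I
read it in include/linux/kasan.h (KASAN_SHADOW_SCALE_SHIFT is 3 for the
generic mode; KASAN_SHADOW_OFFSET is arch-specific):

	/* One shadow byte covers 2^KASAN_SHADOW_SCALE_SHIFT == 8 bytes. */
	static inline void *kasan_mem_to_shadow(const void *addr)
	{
		return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
			+ KASAN_SHADOW_OFFSET;
	}

so backing the whole vmalloc range with real shadow can cost up to
(VMALLOC_END - VMALLOC_START) / 8, although the shadow pages are only
allocated as vmalloc mappings are actually created.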
> > There should be someone who can enable KASAN_GENERIC but can't use
> > KASAN_VMALLOC due to memory constraints.
>
> That doesn't sound particularly realistic to me. The reason I'm pushing
> here is because I would _really_ like to move to VMAP stack
> unconditionally, and that would effectively force KASAN_VMALLOC to be
> set if KASAN is in use.
>
> So unless there's a really good reason not to do that, please can we
> make this unconditional for arm64? Pretty please?

I think that's fine, since we have a good reason. Also, if someone has
memory pressure with KASAN_VMALLOC, they can use SW_TAGS instead, right?

However, the SW_TAGS/HW_TAGS modes don't support VMALLOC yet.
So the code would look like:

	if (IS_ENABLED(CONFIG_KASAN_GENERIC)) {
		/*
		 * Explain the relationship between KASAN_GENERIC and
		 * KASAN_VMALLOC in arm64 here.
		 * XXX: because we want VMAP stack....
		 */
		kasan_populate_early_shadow((void *)vmalloc_shadow_end,
					    (void *)KASAN_SHADOW_END);
	} else {
		kasan_populate_early_shadow((void *)kimg_shadow_end,
					    (void *)KASAN_SHADOW_END);
		if (kimg_shadow_start > mod_shadow_end)
			kasan_populate_early_shadow((void *)mod_shadow_end,
						    (void *)kimg_shadow_start);
	}

and arch/arm64/Kconfig would gain:

	select KASAN_VMALLOC if KASAN_GENERIC

Is this the same as what you had in mind?

BRs,
Lecopzer
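P.S. To be explicit about the Kconfig side, a sketch assuming the select
sits in the main ARM64 entry (exact placement among the existing selects
is just a guess):

	config ARM64
		def_bool y
		...
		select KASAN_VMALLOC if KASAN_GENERIC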