On Mon, 25 Nov 2024 16:52:06 -0800 Andrii Nakryiko <andrii@xxxxxxxxxx> wrote:

> When vrealloc() reuses already allocated vmap_area, we need to
> re-annotate poisoned and unpoisoned portions of underlying memory
> according to the new size.

What are the consequences of this oversight?  When fixing a flaw, please
always remember to describe the visible effects of that flaw.

> Note, hard-coding KASAN_VMALLOC_PROT_NORMAL might not be exactly
> correct, but KASAN flag logic is pretty involved and spread out
> throughout __vmalloc_node_range_noprof(), so I'm using the bare minimum
> flag here and leaving the rest to mm people to refactor this logic and
> reuse it here.
>
> Fixes: 3ddc2fefe6f3 ("mm: vmalloc: implement vrealloc()")

Because a cc:stable might be appropriate here.  But without knowing the
effects, it's hard to determine this.

> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -4093,7 +4093,8 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
> 		/* Zero out spare memory. */
> 		if (want_init_on_alloc(flags))
> 			memset((void *)p + size, 0, old_size - size);
> -
> +		kasan_poison_vmalloc(p + size, old_size - size);
> +		kasan_unpoison_vmalloc(p, size, KASAN_VMALLOC_PROT_NORMAL);
> 		return (void *)p;
> 	}