On Wed, Nov 27, 2024 at 4:58 PM Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx> wrote:
>
> On Mon, 25 Nov 2024 16:52:06 -0800 Andrii Nakryiko <andrii@xxxxxxxxxx> wrote:
>
> > When vrealloc() reuses an already allocated vmap_area, we need to
> > re-annotate poisoned and unpoisoned portions of underlying memory
> > according to the new size.
>
> What are the consequences of this oversight?
>
> When fixing a flaw, please always remember to describe the visible
> effects of that flaw.
>

See [0] for the false KASAN splat. I should have left a link to it,
sorry.

  [0] https://lore.kernel.org/bpf/67450f9b.050a0220.21d33d.0004.GAE@xxxxxxxxxx/

> > Note, hard-coding KASAN_VMALLOC_PROT_NORMAL might not be exactly
> > correct, but the KASAN flag logic is pretty involved and spread out
> > throughout __vmalloc_node_range_noprof(), so I'm using the bare
> > minimum flag here and leaving the rest to mm people to refactor this
> > logic and reuse it here.
> >
> > Fixes: 3ddc2fefe6f3 ("mm: vmalloc: implement vrealloc()")
>
> Because a cc:stable might be appropriate here. But without knowing the
> effects, it's hard to determine this.

This is KASAN-related, so the effect is KASAN mis-reporting an issue
where there is none (a false positive splat).

> > --- a/mm/vmalloc.c
> > +++ b/mm/vmalloc.c
> > @@ -4093,7 +4093,8 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
> >  		/* Zero out spare memory. */
> >  		if (want_init_on_alloc(flags))
> >  			memset((void *)p + size, 0, old_size - size);
> > -
> > +		kasan_poison_vmalloc(p + size, old_size - size);
> > +		kasan_unpoison_vmalloc(p, size, KASAN_VMALLOC_PROT_NORMAL);
> >  		return (void *)p;
> >  	}
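
To make the intent of those two lines explicit, here is roughly how the
shrink-in-place branch of vrealloc_noprof() reads with the fix applied.
This is a paraphrased sketch of the hunk above with explanatory comments
added, not the exact upstream source:

	if (size <= old_size) {
		/* Zero out spare memory. */
		if (want_init_on_alloc(flags))
			memset((void *)p + size, 0, old_size - size);
		/*
		 * The existing vmap_area is reused, so the KASAN shadow
		 * must be re-annotated for the new size: the now-unused
		 * tail [p + size, p + old_size) becomes poisoned so that
		 * stray accesses produce a report, while the live region
		 * [p, p + size) is marked accessible again. As noted in
		 * the commit message, KASAN_VMALLOC_PROT_NORMAL is the
		 * bare-minimum flag; deriving the precise flags is left
		 * to the existing logic in __vmalloc_node_range_noprof().
		 */
		kasan_poison_vmalloc(p + size, old_size - size);
		kasan_unpoison_vmalloc(p, size, KASAN_VMALLOC_PROT_NORMAL);
		return (void *)p;
	}

Without the re-annotation, the shadow state left over from the original
allocation no longer matches the new size, which is what leads to the
false KASAN splat in [0].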