On Wed, Feb 2, 2022 at 9:14 PM Hou Tao <hotforest@xxxxxxxxx> wrote:
>
> Hi,
>
> > On 2/2/22 7:01 AM, Hou Tao wrote:
> > > After commit 2fd3fb0be1d1 ("kasan, vmalloc: unpoison VM_ALLOC pages
> > > after mapping"), non-VM_ALLOC mappings will be marked as accessible
> > > in __get_vm_area_node() when KASAN is enabled. But the flag for the
> > > ringbuf area is currently VM_ALLOC, so KASAN will complain about
> > > out-of-bounds accesses after vmap() returns. Because the ringbuf
> > > area is created by mapping already-allocated pages, use VM_MAP
> > > instead.
> > >
> > > After the change, the info in /proc/vmallocinfo also changes from
> > >   [start]-[end] 24576 ringbuf_map_alloc+0x171/0x290 vmalloc user
> > > to
> > >   [start]-[end] 24576 ringbuf_map_alloc+0x171/0x290 vmap user
> > >
> > > Reported-by: syzbot+5ad567a418794b9b5983@xxxxxxxxxxxxxxxxxxxxxxxxx
> > > Signed-off-by: Hou Tao <houtao1@xxxxxxxxxx>
> > > ---
> > > v2:
> > >   * explain why VM_ALLOC will lead to vmalloc-oob access
> >
> > Do you know which tree commit 2fd3fb0be1d1 is in? It looks like it's
> > neither in the bpf nor in the bpf-next tree at the moment.
>
> It is in the linux-next tree:
>
> $ git name-rev 2fd3fb0be1d1
> 2fd3fb0be1d1 tags/next-20220201~2^2~96
>
> > Either way, I presume this fix should be routed via the bpf tree
> > rather than bpf-next? (I can add a Fixes tag while applying.)
> >
> Makes sense, and thanks for that.

Added Fixes: 457f44363a88 ("bpf: Implement BPF ring buffer and verifier
support for it") and pushed to the bpf tree, thanks.

> Regards,
> Tao
>
> > >   * add Reported-by tag
> > > v1: https://lore.kernel.org/bpf/CANUnq3a+sT_qtO1wNQ3GnLGN7FLvSSgvit2UVgqQKRpUvs85VQ@xxxxxxxxxxxxxx/T/#t
> > > ---
> > >  kernel/bpf/ringbuf.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
> > > index 638d7fd7b375..710ba9de12ce 100644
> > > --- a/kernel/bpf/ringbuf.c
> > > +++ b/kernel/bpf/ringbuf.c
> > > @@ -104,7 +104,7 @@ static struct bpf_ringbuf *bpf_ringbuf_area_alloc(size_t data_sz, int numa_node)
> > >         }
> > >
> > >         rb = vmap(pages, nr_meta_pages + 2 * nr_data_pages,
> > > -                 VM_ALLOC | VM_USERMAP, PAGE_KERNEL);
> > > +                 VM_MAP | VM_USERMAP, PAGE_KERNEL);
> > >         if (rb) {
> > >                 kmemleak_not_leak(pages);
> > >                 rb->pages = pages;
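
For readers following along: VM_ALLOC marks areas whose backing pages are
allocated by the vmalloc core itself (e.g. via vmalloc()), and after commit
2fd3fb0be1d1 it is the core that unpoisons such areas for KASAN once they
are mapped. VM_MAP is the flag for areas built from pages the caller
already holds, which is exactly the ringbuf case; a VM_MAP area is marked
accessible in __get_vm_area_node() itself. A minimal sketch of the VM_MAP
pattern (map_own_pages() is a made-up name for illustration; vmap(),
VM_MAP, and PAGE_KERNEL are real kernel APIs):

    #include <linux/mm.h>
    #include <linux/vmalloc.h>

    /*
     * Illustrative only: map @nr caller-allocated pages contiguously.
     * Because the pages come from the caller, the area is flagged VM_MAP;
     * VM_ALLOC would claim the vmalloc core owns the pages, and the KASAN
     * unpoisoning that the core performs for VM_ALLOC areas never happens
     * on the vmap() path, so accesses would trip KASAN.
     */
    static void *map_own_pages(struct page **pages, unsigned int nr)
    {
            return vmap(pages, nr, VM_MAP, PAGE_KERNEL);
    }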
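
A note on the hunk itself: vmap() is passed nr_meta_pages + 2 * nr_data_pages
because the ring buffer maps each data page twice, back to back, so a record
that wraps past the end of the data area can still be accessed as one
contiguous range; VM_USERMAP additionally allows the area to be mapped to
user space later. Below is a simplified sketch modeled on
bpf_ringbuf_area_alloc() in kernel/bpf/ringbuf.c, not the exact kernel code
(error handling and bookkeeping trimmed; the real code also stashes the
pages array in struct bpf_ringbuf so it can be freed on teardown):

    #include <linux/gfp.h>
    #include <linux/mm.h>
    #include <linux/slab.h>
    #include <linux/vmalloc.h>

    static void *ringbuf_area_sketch(int nr_meta_pages, int nr_data_pages,
                                     int numa_node)
    {
            int nr_pages = nr_meta_pages + nr_data_pages;
            struct page **pages;
            void *area;
            int i;

            /* Room for each data page to appear twice in the mapping. */
            pages = kcalloc(nr_meta_pages + 2 * nr_data_pages,
                            sizeof(*pages), GFP_KERNEL);
            if (!pages)
                    return NULL;

            for (i = 0; i < nr_pages; i++) {
                    pages[i] = alloc_pages_node(numa_node,
                                                GFP_KERNEL | __GFP_ZERO, 0);
                    if (!pages[i])
                            goto err_free;
                    /*
                     * Install a second reference to each data page right
                     * after the end of the data area, so reads and writes
                     * that wrap around stay virtually contiguous.
                     */
                    if (i >= nr_meta_pages)
                            pages[nr_data_pages + i] = pages[i];
            }

            /* Caller-allocated pages, hence VM_MAP rather than VM_ALLOC. */
            area = vmap(pages, nr_meta_pages + 2 * nr_data_pages,
                        VM_MAP | VM_USERMAP, PAGE_KERNEL);
            if (area)
                    return area; /* real code keeps @pages for later freeing */

    err_free:
            while (i-- > 0)
                    __free_page(pages[i]);
            kfree(pages);
            return NULL;
    }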