On Mon, Jan 30, 2023 at 9:49 PM <andrey.konovalov@xxxxxxxxx> wrote:
>
> From: Andrey Konovalov <andreyknvl@xxxxxxxxxx>
>
> In commit 305e519ce48e ("lib/stackdepot.c: fix global out-of-bounds in
> stack_slabs"), init_stack_slab was changed to only use preallocated
> memory for the next slab if the slab number limit is not reached.
> However, setting next_slab_inited was not moved together with updating
> stack_slabs.
>
> Set next_slab_inited only if the preallocated memory was used for the
> next slab.
>
> Fixes: 305e519ce48e ("lib/stackdepot.c: fix global out-of-bounds in stack_slabs")
> Signed-off-by: Andrey Konovalov <andreyknvl@xxxxxxxxxx>

Wait, I think there's a problem here.

> diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> index 79e894cf8406..0eed9bbcf23e 100644
> --- a/lib/stackdepot.c
> +++ b/lib/stackdepot.c
> @@ -105,12 +105,13 @@ static bool init_stack_slab(void **prealloc)
>         if (depot_index + 1 < STACK_ALLOC_MAX_SLABS) {

If we get to this branch, but the condition is false, this means that:

 - next_slab_inited == 0
 - depot_index == STACK_ALLOC_MAX_SLABS - 1 (depot_index never exceeds
   the last valid slab index, so this is the only value for which the
   condition can be false)
 - stack_slabs[depot_index] != NULL

So stack_slabs[] is at full capacity, but upon leaving init_stack_slab()
we'll always keep next_slab_inited == 0.

Now every time __stack_depot_save() is called for a known stack trace, it
will preallocate 1 << STACK_ALLOC_ORDER pages (because next_slab_inited
== 0), then find the stack trace id in the hash, then pass the
preallocated pages to init_stack_slab(), which will not change the value
of next_slab_inited. Then the preallocated pages will be freed, and the
next time __stack_depot_save() is called they will be allocated again.
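
To make the resulting churn easy to see, below is a minimal userspace
model of the patched logic. This is my sketch, not the kernel code: the
smp_load_acquire()/smp_store_release() pairs are dropped, page
allocation is replaced with malloc()/free(), save_known_stack() is a
made-up stand-in for the known-trace path of __stack_depot_save(), and
STACK_ALLOC_MAX_SLABS is shrunk to 4 so the full-capacity state is easy
to reach:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define STACK_ALLOC_MAX_SLABS 4
#define SLAB_SIZE 4096	/* stands in for (1 << STACK_ALLOC_ORDER) pages */

static void *stack_slabs[STACK_ALLOC_MAX_SLABS];
static int depot_index;
static int next_slab_inited;

static bool init_stack_slab(void **prealloc)
{
	if (!*prealloc)
		return false;
	if (next_slab_inited)
		return true;
	if (stack_slabs[depot_index] == NULL) {
		stack_slabs[depot_index] = *prealloc;
		*prealloc = NULL;
	} else if (depot_index + 1 < STACK_ALLOC_MAX_SLABS) {
		stack_slabs[depot_index + 1] = *prealloc;
		*prealloc = NULL;
		/*
		 * With the patch applied, the flag is only set when the
		 * preallocation is actually consumed for the next slab.
		 */
		next_slab_inited = 1;
	}
	/*
	 * At full capacity (depot_index == STACK_ALLOC_MAX_SLABS - 1) we
	 * return without consuming *prealloc and without setting
	 * next_slab_inited.
	 */
	return true;
}

/*
 * Models the known-stack-trace path of __stack_depot_save(): the hash
 * lookup succeeds, so the preallocation is handed to init_stack_slab()
 * and freed if it was not consumed.
 */
static void save_known_stack(void)
{
	void *prealloc = NULL;

	if (!next_slab_inited) {
		prealloc = malloc(SLAB_SIZE);
		printf("preallocated a slab\n");
	}
	init_stack_slab(&prealloc);
	if (prealloc) {
		printf("prealloc unused, freeing it\n");
		free(prealloc);
	}
}

int main(void)
{
	int i;

	/*
	 * Put the depot into the state described above: every slab in
	 * use, next_slab_inited still 0.
	 */
	for (i = 0; i < STACK_ALLOC_MAX_SLABS; i++)
		stack_slabs[i] = malloc(SLAB_SIZE);
	depot_index = STACK_ALLOC_MAX_SLABS - 1;

	/* Each call now allocates a slab's worth of memory and frees it. */
	for (i = 0; i < 3; i++)
		save_known_stack();
	return 0;
}

Compiled and run, this prints a preallocate/free pair for every call
once the depot is full, which is exactly the allocate/free cycle
described above.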