On Wed, Aug 30, 2023 at 10:34 AM Marco Elver <elver@xxxxxxxxxx> wrote:
>
> > diff --git a/lib/stackdepot.c b/lib/stackdepot.c
> > index 93191ee70fc3..9ae71e1ef1a7 100644
> > --- a/lib/stackdepot.c
> > +++ b/lib/stackdepot.c
> > @@ -226,10 +226,10 @@ static void depot_init_pool(void **prealloc)
> >  	/*
> >  	 * If the next pool is already initialized or the maximum number of
> >  	 * pools is reached, do not use the preallocated memory.
> > -	 * smp_load_acquire() here pairs with smp_store_release() below and
> > -	 * in depot_alloc_stack().
> > +	 * READ_ONCE is only used to mark the variable as atomic,
> > +	 * there are no concurrent writes.
>
> This doesn't make sense. If there are no concurrent writes, we should
> drop the marking, so that if there are concurrent writes, tools like
> KCSAN can tell us about it if our assumption was wrong.

Makes sense, will do in v2.

> > @@ -425,8 +424,8 @@ depot_stack_handle_t __stack_depot_save(unsigned long *entries,
> >  	 * Check if another stack pool needs to be initialized. If so, allocate
> >  	 * the memory now - we won't be able to do that under the lock.
> >  	 *
> > -	 * The smp_load_acquire() here pairs with smp_store_release() to
> > -	 * |next_pool_inited| in depot_alloc_stack() and depot_init_pool().
> > +	 * smp_load_acquire pairs with smp_store_release
> > +	 * in depot_alloc_stack and depot_init_pool.
>
> Reflow comment to match 80 cols used by comments elsewhere.

Will do in v2.

Thanks!
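
To spell out the pairing the quoted comments refer to: the writer
initializes the next pool and only then publishes a flag with a release
store; a reader that observes the flag via an acquire load is guaranteed
to also see the initialized pool. Below is a minimal userspace C11 sketch
of that pattern. The names next_pool/next_pool_inited mirror the diff,
but everything else is illustrative, not the actual stackdepot code
(which uses the kernel's smp_load_acquire()/smp_store_release()):

/*
 * Minimal userspace C11 analogy of the acquire/release publication
 * pattern discussed above. Illustrative only, not stackdepot itself.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

static void *next_pool;              /* plain data, published below */
static atomic_bool next_pool_inited; /* publication flag */

static void publish_next_pool(void *prealloc)
{
	next_pool = prealloc;        /* initialize the data first ... */
	/* ... then publish it; pairs with the acquire load below */
	atomic_store_explicit(&next_pool_inited, true, memory_order_release);
}

static void *get_next_pool(void)
{
	/* acquire load pairs with the release store above */
	if (atomic_load_explicit(&next_pool_inited, memory_order_acquire))
		return next_pool;    /* guaranteed to see the write above */
	return NULL;
}

A relaxed, READ_ONCE-style load of the flag would not by itself order the
read of next_pool against the writer's initialization, so dropping the
acquire/release pairing is only safe when the data is published under a
lock or there really are no concurrent writers, and leaving such an
access unmarked is what lets KCSAN report it if that assumption turns
out to be wrong.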