On Sat, 13 Jan 2024 at 02:24, Andi Kleen <ak@xxxxxxxxxxxxxxx> wrote:
>
> On Fri, Jan 12, 2024 at 11:15:05PM +0100, Marco Elver wrote:
> > +	/*
> > +	 * Stack traces of size 0 are never saved, and we can simply use
> > +	 * the size field as an indicator if this is a new unused stack
> > +	 * record in the freelist.
> > +	 */
> > +	stack->size = 0;
>
> I would use WRITE_ONCE here too, at least for TSan.

This is written with the pool_lock held.

> > +		return NULL;
> > +
> > +	/*
> > +	 * We maintain the invariant that the elements in front are least
> > +	 * recently used, and are therefore more likely to be associated with an
> > +	 * RCU grace period in the past. Consequently it is sufficient to only
> > +	 * check the first entry.
> > +	 */
> > +	stack = list_first_entry(&free_stacks, struct stack_record, free_list);
> > +	if (stack->size && !poll_state_synchronize_rcu(stack->rcu_state))
>
> READ_ONCE (also for TSan, and might be safer long term in case the
> compiler considers some fancy code transformation)

And this is also only read with the pool_lock held, so it's impossible
that there'd be a data race due to size. (And if there is a data race,
I'd want KCSAN to tell us because that'd be a bug then.)

depot_pop_free() can't be used w/o the lock because it's manipulating
the freelist. To be sure, I'm adding a lockdep_assert_held() to
depot_pop_free().

> > +		return NULL;
> >
> > +	stack = depot_pop_free();
> > +	if (WARN_ON(!stack))
>
> Won't you get nesting problems here if this triggers due to the print?
> I assume the nmi safe printk won't consider it like an NMI.
>
> >  	counters[DEPOT_COUNTER_FREELIST_SIZE]++;
> >  	counters[DEPOT_COUNTER_FREES]++;
> >  	counters[DEPOT_COUNTER_INUSE]--;
> > +
> > +	printk_deferred_exit();
>
> Ah this handles the WARN_ON? Should be ok then.

Yes, the pool_lock critical sections are surrounded by printk_deferred.

Thanks,
-- Marco
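
For reference, this is roughly what depot_pop_free() will look like with
the lockdep_assert_held() added. It is reconstructed from the hunks quoted
above rather than copied from the next revision, so the list_del() and
counter bookkeeping at the end are my best guess at the surrounding code:

static struct stack_record *depot_pop_free(void)
{
	struct stack_record *stack;

	/* The freelist is only ever read or modified under pool_lock. */
	lockdep_assert_held(&pool_lock);

	if (list_empty(&free_stacks))
		return NULL;

	/*
	 * We maintain the invariant that the elements in front are least
	 * recently used, and are therefore more likely to be associated with an
	 * RCU grace period in the past. Consequently it is sufficient to only
	 * check the first entry.
	 */
	stack = list_first_entry(&free_stacks, struct stack_record, free_list);
	if (stack->size && !poll_state_synchronize_rcu(stack->rcu_state))
		return NULL;

	/* Detach the record from the freelist and hand it to the caller. */
	list_del(&stack->free_list);
	counters[DEPOT_COUNTER_FREELIST_SIZE]--;

	return stack;
}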
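
And to make the last point concrete, the printk_deferred pairing around the
pool_lock critical section in the free path has roughly the shape below. The
function name (depot_free_stack()) and the exact ordering of the enter/exit
calls relative to the lock are from my reading of the patch, so treat this as
a sketch rather than the literal hunk. The effect is that a WARN_ON() firing
inside the section only queues its message; it is printed to the consoles
after printk_deferred_exit(), outside the section, so it cannot nest back
into the locked region:

static void depot_free_stack(struct stack_record *stack)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&pool_lock, flags);
	printk_deferred_enter();

	/*
	 * Record the current RCU state so depot_pop_free() can check whether
	 * a grace period has elapsed before reusing the record, then add it
	 * to the freelist tail (the front holds the least recently used
	 * entries).
	 */
	stack->rcu_state = get_state_synchronize_rcu();
	list_add_tail(&stack->free_list, &free_stacks);

	counters[DEPOT_COUNTER_FREELIST_SIZE]++;
	counters[DEPOT_COUNTER_FREES]++;
	counters[DEPOT_COUNTER_INUSE]--;

	/*
	 * Anything a WARN_ON() printed inside this section is only emitted
	 * to the consoles after this point.
	 */
	printk_deferred_exit();
	raw_spin_unlock_irqrestore(&pool_lock, flags);
}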