On Wed, Aug 28, 2019 at 05:43:20PM -0400, Joel Fernandes wrote:
> On Wed, Aug 28, 2019 at 02:31:19PM -0700, Paul E. McKenney wrote:
> > On Tue, Aug 27, 2019 at 03:01:57PM -0400, Joel Fernandes (Google) wrote:
> > > Make use of RCU's debug_objects debugging support
> > > (CONFIG_DEBUG_OBJECTS_RCU_HEAD) similar to call_rcu() and other flavors.
> > 
> > Other flavors?  Ah, call_srcu(), rcu_barrier(), and srcu_barrier(),
> > right?
> 
> Yes.
> 
> > > We queue the object during the kfree_rcu() call and dequeue it during
> > > reclaim.
> > > 
> > > Tested that enabling CONFIG_DEBUG_OBJECTS_RCU_HEAD successfully detects
> > > double kfree_rcu() calls.
> > > 
> > > Signed-off-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
> > 
> > The code looks good!
> 
> thanks, does that mean you'll ack/apply it? :-P

Is it independent of 1/5 and 2/5?

							Thanx, Paul

> - Joel
> 
> > 							Thanx, Paul
> > 
> > > ---
> > >  kernel/rcu/tree.c | 8 ++++++++
> > >  1 file changed, 8 insertions(+)
> > > 
> > > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > > index 9b9ae4db1c2d..64568f12641d 100644
> > > --- a/kernel/rcu/tree.c
> > > +++ b/kernel/rcu/tree.c
> > > @@ -2757,6 +2757,7 @@ static void kfree_rcu_work(struct work_struct *work)
> > >  	for (; head; head = next) {
> > >  		next = head->next;
> > >  		/* Could be possible to optimize with kfree_bulk in future */
> > > +		debug_rcu_head_unqueue(head);
> > >  		__rcu_reclaim(rcu_state.name, head);
> > >  		cond_resched_tasks_rcu_qs();
> > >  	}
> > > @@ -2868,6 +2869,13 @@ void kfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
> > >  	if (rcu_scheduler_active != RCU_SCHEDULER_RUNNING)
> > >  		return kfree_call_rcu_nobatch(head, func);
> > > 
> > > +	if (debug_rcu_head_queue(head)) {
> > > +		/* Probable double kfree_rcu() */
> > > +		WARN_ONCE(1, "kfree_call_rcu(): Double-freed call. rcu_head %p\n",
> > > +			  head);
> > > +		return;
> > > +	}
> > > +
> > >  	head->func = func;
> > > 
> > >  	local_irq_save(flags);	/* For safely calling this_cpu_ptr(). */
> > > -- 
> > > 2.23.0.187.g17f5b7556c-goog
> > > 
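
For reference, a minimal sketch of the sort of caller bug the patch above
is meant to catch; struct foo and buggy_double_kfree_rcu() are hypothetical
names for illustration, not part of the patch:

	#include <linux/slab.h>
	#include <linux/rcupdate.h>

	/* Hypothetical structure embedding the rcu_head used by kfree_rcu(). */
	struct foo {
		int data;
		struct rcu_head rh;
	};

	static void buggy_double_kfree_rcu(void)
	{
		struct foo *p = kmalloc(sizeof(*p), GFP_KERNEL);

		if (!p)
			return;

		/* First call: debug_rcu_head_queue() marks the rcu_head active. */
		kfree_rcu(p, rh);

		/*
		 * Second call on the same object: with
		 * CONFIG_DEBUG_OBJECTS_RCU_HEAD=y, debug_rcu_head_queue()
		 * returns nonzero and the new WARN_ONCE() fires instead of
		 * queueing the same rcu_head twice.
		 */
		kfree_rcu(p, rh);
	}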