On Sat, Sep 17, 2022 at 04:42:00PM +0000, Joel Fernandes (Google) wrote:
> @@ -2809,17 +2825,15 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func)
>  	}
>  
>  	check_cb_ovld(rdp);
> -	if (rcu_nocb_try_bypass(rdp, head, &was_alldone, flags))
> +
> +	if (rcu_nocb_try_bypass(rdp, head, &was_alldone, flags)) {
> +		__trace_rcu_callback(head, rdp);
>  		return; // Enqueued onto ->nocb_bypass, so just leave.
> +	}

I think the bypass enqueues should be treated differently, either by
extending the current trace_rcu_callback/trace_rcu_kvfree_callback
events (which might break existing tools) or by creating new
trace_rcu_callback_bypass()/trace_rcu_kvfree_callback_bypass() events.
Those could later be paired with a trace_rcu_bypass_flush(). A rough
sketch follows below the quoted patch.

Thanks.

> +
>  	// If no-CBs CPU gets here, rcu_nocb_try_bypass() acquired ->nocb_lock.
>  	rcu_segcblist_enqueue(&rdp->cblist, head);
> -	if (__is_kvfree_rcu_offset((unsigned long)func))
> -		trace_rcu_kvfree_callback(rcu_state.name, head,
> -					  (unsigned long)func,
> -					  rcu_segcblist_n_cbs(&rdp->cblist));
> -	else
> -		trace_rcu_callback(rcu_state.name, head,
> -				   rcu_segcblist_n_cbs(&rdp->cblist));
> +	__trace_rcu_callback(head, rdp);
>  
>  	trace_rcu_segcb_stats(&rdp->cblist, TPS("SegCBQueued"));
> 
> -- 
> 2.37.3.968.ga6b4b080e4-goog
> 
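For illustration only, an untested sketch of how the suggested bypass
event could be declared in include/trace/events/rcu.h, reusing the
payload of the existing rcu_callback event through a shared class. The
class name and the idea of factoring it out are assumptions on my side,
not an existing kernel API, and this ignores the CONFIG_RCU_TRACE=n
stubbing that the real header does via its TRACE_EVENT_RCU wrapper:

/*
 * Shared payload, copied from the current rcu_callback event:
 * RCU flavor name, callback address, callback function, and the
 * queue length at enqueue time.
 */
DECLARE_EVENT_CLASS(rcu_callback_class,

	TP_PROTO(const char *rcuname, struct rcu_head *rhp, long qlen),

	TP_ARGS(rcuname, rhp, qlen),

	TP_STRUCT__entry(
		__field(const char *, rcuname)
		__field(void *, rhp)
		__field(void *, func)
		__field(long, qlen)
	),

	TP_fast_assign(
		__entry->rcuname = rcuname;
		__entry->rhp = rhp;
		__entry->func = rhp->func;
		__entry->qlen = qlen;
	),

	TP_printk("%s rhp=%p func=%ps %ld",
		  __entry->rcuname, __entry->rhp, __entry->func,
		  __entry->qlen)
);

/* New event for callbacks queued onto ->nocb_bypass. */
DEFINE_EVENT(rcu_callback_class, rcu_callback_bypass,

	TP_PROTO(const char *rcuname, struct rcu_head *rhp, long qlen),

	TP_ARGS(rcuname, rhp, qlen)
);

The bypass path could then report the bypass list length rather than
the segcblist length, something like
trace_rcu_callback_bypass(rcu_state.name, head,
rcu_cblist_n_cbs(&rdp->nocb_bypass)), and rcu_nocb_flush_bypass() would
be the natural place for the trace_rcu_bypass_flush() counterpart.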