On Thu, Aug 15, 2019 at 07:57:49AM -0700, Matthew Wilcox wrote:
> On Thu, Aug 15, 2019 at 10:18:42AM -0400, Joel Fernandes (Google) wrote:
> > list_for_each_entry_rcu now has support to check for RCU reader sections
> > as well as lock. Just use the support in it, instead of explicitly
> > checking in the caller.
>
> ...
>
> > #define assert_rcu_or_wq_mutex_or_pool_mutex(wq)			\
> > 	RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&			\
> > 			 !lockdep_is_held(&wq->mutex) &&		\
>
> Can't you also get rid of this macro?

It could be removed, but that should be a separate patch. I am only cleaning
up the RCU list lockdep checking in this series, since this series introduces
that concept. Please feel free to send a patch for that. Arguably, keeping
the macro around can also be beneficial in the future.

> It's used in one place:
>
> static struct pool_workqueue *unbound_pwq_by_node(struct workqueue_struct *wq,
> 						  int node)
> {
> 	assert_rcu_or_wq_mutex_or_pool_mutex(wq);
>
> 	/*
> 	 * XXX: @node can be NUMA_NO_NODE if CPU goes offline while a
> 	 * delayed item is pending. The plan is to keep CPU -> NODE
> 	 * mapping valid and stable across CPU on/offlines. Once that
> 	 * happens, this workaround can be removed.
> 	 */
> 	if (unlikely(node == NUMA_NO_NODE))
> 		return wq->dfl_pwq;
>
> 	return rcu_dereference_raw(wq->numa_pwq_tbl[node]);
> }
>
> Shouldn't we delete that assert and use
>
> +	return rcu_dereference_check(wq->numa_pwq_tbl[node],
> +			lockdep_is_held(&wq->mutex) ||
> +			lockdep_is_held(&wq_pool_mutex));

Makes sense. This API also does sparse checking. Hopefully no sparse issues
show up because of the rcu_dereference_check(), but any such issues should be
fixed as well.

thanks,

 - Joel
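
For reference, a minimal sketch of the pattern the series enables, assuming
the for_each_pwq() iterator from kernel/workqueue.c and the optional lockdep
expression argument that the series adds to list_for_each_entry_rcu() (the
exact form in the posted patch may differ):

	/*
	 * With the extra argument, the list iteration itself warns unless
	 * the caller is in an RCU read-side critical section or holds
	 * wq->mutex, so callers no longer need an open-coded assert.
	 */
	#define for_each_pwq(pwq, wq)					\
		list_for_each_entry_rcu((pwq), &(wq)->pwqs, pwqs_node,	\
					lockdep_is_held(&(wq)->mutex))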
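
Similarly, a sketch of how unbound_pwq_by_node() could end up if the assert
is dropped in favor of Matthew's suggested rcu_dereference_check() (names and
locks as in mainline workqueue.c; not a tested patch):

	static struct pool_workqueue *unbound_pwq_by_node(struct workqueue_struct *wq,
							  int node)
	{
		/*
		 * XXX: @node can be NUMA_NO_NODE if CPU goes offline while a
		 * delayed item is pending.  The plan is to keep CPU -> NODE
		 * mapping valid and stable across CPU on/offlines.  Once that
		 * happens, this workaround can be removed.
		 */
		if (unlikely(node == NUMA_NO_NODE))
			return wq->dfl_pwq;

		/*
		 * rcu_dereference_check() implicitly permits an RCU read-side
		 * critical section, so only the two mutexes need to be named
		 * in the lockdep expression.
		 */
		return rcu_dereference_check(wq->numa_pwq_tbl[node],
					     lockdep_is_held(&wq->mutex) ||
					     lockdep_is_held(&wq_pool_mutex));
	}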