On Fri, 2017-04-07 at 17:46 +0800, Ming Lei wrote:
> On Thu, Apr 06, 2017 at 11:10:46AM -0700, Bart Van Assche wrote:
> > Since the next patch in this series will use RCU to iterate over
> > tag_list, make this safe. Add lockdep_assert_held() statements
> > in functions that iterate over tag_list to make clear that using
> > list_for_each_entry() instead of list_for_each_entry_rcu() is
> > fine in these functions.
> >
> > Signed-off-by: Bart Van Assche <bart.vanassche@xxxxxxxxxxx>
> > Cc: Christoph Hellwig <hch@xxxxxx>
> > Cc: Hannes Reinecke <hare@xxxxxxxx>
> > ---
> >  block/blk-mq.c | 6 ++++++
> >  1 file changed, 6 insertions(+)
> >
> > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > index f7cd3208bcdf..b5580b09b4a5 100644
> > --- a/block/blk-mq.c
> > +++ b/block/blk-mq.c
> > @@ -2076,6 +2076,8 @@ static void blk_mq_update_tag_set_depth(struct blk_mq_tag_set *set, bool shared)
> >  {
> >  	struct request_queue *q;
> >
> > +	lockdep_assert_held(&set->tag_list_lock);
> > +
> >  	list_for_each_entry(q, &set->tag_list, tag_set_list) {
> >  		blk_mq_freeze_queue(q);
> >  		queue_set_hctx_shared(q, shared);
> > @@ -2096,6 +2098,8 @@ static void blk_mq_del_queue_tag_set(struct request_queue *q)
> >  		blk_mq_update_tag_set_depth(set, false);
> >  	}
> >  	mutex_unlock(&set->tag_list_lock);
> > +
> > +	synchronize_rcu();
>
> It looks like synchronize_rcu() is only needed in the deletion path, so it
> can be moved to blk_mq_del_queue_tag_set().
>
> Also, list_del_init()/list_add_tail() need to be replaced with their
> RCU-safe versions in the functions that operate on &set->tag_list.

Hello Ming,

I will replace list_del_init() / list_add_tail() with their RCU equivalents.

Regarding synchronize_rcu(): have you noticed that that call has already been
added to blk_mq_del_queue_tag_set(), the function you requested to move that
call to?

Bart.
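
For illustration, below is a rough sketch of the kind of RCU-safe list
handling being discussed. It is not the final patch; the function bodies are
assumed to follow the 4.11-era blk_mq_add_queue_tag_set() /
blk_mq_del_queue_tag_set() layout in block/blk-mq.c, with the plain list
helpers swapped for list_del_rcu() / list_add_tail_rcu() from
<linux/rculist.h>.

/*
 * Sketch only, not the final patch. Assumes the 4.11-era structure of
 * these two functions; only the list manipulation and the grace period
 * differ from the original code.
 */
static void blk_mq_del_queue_tag_set(struct request_queue *q)
{
	struct blk_mq_tag_set *set = q->tag_set;

	mutex_lock(&set->tag_list_lock);
	/* RCU-safe removal: lockless readers may still see the entry. */
	list_del_rcu(&q->tag_set_list);
	if (list_is_singular(&set->tag_list)) {
		/* Just transitioned back to an unshared tag set. */
		set->flags &= ~BLK_MQ_F_TAG_SHARED;
		blk_mq_update_tag_set_depth(set, false);
	}
	mutex_unlock(&set->tag_list_lock);

	/* Wait for RCU readers iterating tag_list to finish. */
	synchronize_rcu();
	/* Only safe to reinitialize after the grace period has elapsed. */
	INIT_LIST_HEAD(&q->tag_set_list);
}

static void blk_mq_add_queue_tag_set(struct blk_mq_tag_set *set,
				     struct request_queue *q)
{
	q->tag_set = set;

	mutex_lock(&set->tag_list_lock);
	/* Going from one queue to two means the tag set becomes shared. */
	if (!list_empty(&set->tag_list) && !(set->flags & BLK_MQ_F_TAG_SHARED)) {
		set->flags |= BLK_MQ_F_TAG_SHARED;
		blk_mq_update_tag_set_depth(set, true);
	}
	if (set->flags & BLK_MQ_F_TAG_SHARED)
		queue_set_hctx_shared(q, true);
	/* RCU-safe insertion so lockless readers see a consistent list. */
	list_add_tail_rcu(&q->tag_set_list, &set->tag_list);
	mutex_unlock(&set->tag_list_lock);
}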