On Thu, Jan 23, 2025 at 11:10:24PM +0530, Nilay Shroff wrote:
>
> +	mutex_lock(&q->tag_set->tag_list_lock);
> +
>  	queue_for_each_hw_ctx(q, hctx, i) {
>  		ret = blk_mq_register_hctx(hctx);
> -		if (ret)
> +		if (ret) {
> +			mutex_unlock(&q->tag_set->tag_list_lock);
>  			goto unreg;
> +		}
>  	}
>
> +	mutex_unlock(&q->tag_set->tag_list_lock);
>
>  out:
>  	return ret;
>
>  unreg:
> +	mutex_lock(&q->tag_set->tag_list_lock);
> +

No real need for an unlock/lock cycle here, I think.  Also, as something
that is really just a nitpick, I love to keep the locks for the critical
sections close to the code they're protecting, e.g. format this as:

	if (ret < 0)
		return ret;

	mutex_lock(&q->tag_set->tag_list_lock);
	queue_for_each_hw_ctx(q, hctx, i) {
		ret = blk_mq_register_hctx(hctx);
		if (ret)
			goto out_unregister;
	}
	mutex_unlock(&q->tag_set->tag_list_lock);
	return 0;

out_unregister:
	queue_for_each_hw_ctx(q, hctx, j) {
		if (j < i)
			blk_mq_unregister_hctx(hctx);
	}
	mutex_unlock(&q->tag_set->tag_list_lock);
	...

(and similar for blk_mq_sysfs_unregister).

I assume you did run this through blktests and xfstests with lockdep
enabled to catch if we created some new lock ordering problems?  I can't
think of any right now, but it's good to validate that.
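
For the unregister side my mental model is simply the lock wrapped
around the hctx walk.  Take this as a rough sketch only: it assumes the
function keeps its current gendisk-based signature, and it ignores
whatever else blk_mq_sysfs_unregister does around the loop today:

	void blk_mq_sysfs_unregister(struct gendisk *disk)
	{
		struct request_queue *q = disk->queue;
		struct blk_mq_hw_ctx *hctx;
		unsigned long i;

		/* hold tag_list_lock across the hctx walk, mirroring the
		 * register path above */
		mutex_lock(&q->tag_set->tag_list_lock);
		queue_for_each_hw_ctx(q, hctx, i)
			blk_mq_unregister_hctx(hctx);
		mutex_unlock(&q->tag_set->tag_list_lock);

		/* any remaining mq_kobj teardown stays as-is, outside the
		 * lock (assumption, not checked against the current tree) */
	}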