On Thu, Jan 09, 2025 at 09:05:49AM +0900, Damien Le Moal wrote:
> On 1/9/25 00:27, Christoph Hellwig wrote:
> > On Wed, Jan 08, 2025 at 06:31:15PM +0800, Ming Lei wrote:
> >>> -	if (!(q->limits.features & BLK_FEAT_POLL) &&
> >>> -	    (bio->bi_opf & REQ_POLLED)) {
> >>> +	if ((bio->bi_opf & REQ_POLLED) && !bdev_can_poll(bdev)) {
> >>
> >> submit_bio_noacct() is called without grabbing .q_usage_counter,
> >> so the tagset may be freed by then, leading to a use-after-free
> >> on q->tag_set?
> >
> > Indeed. That also means the previous check wasn't reliable either.
> > I think we can simply move the check into
> > blk_mq_submit_bio/__submit_bio, which means we'll do a bunch more
> > checks before we eventually fail, but otherwise it'll work the
> > same.
>
> Given that the request queue is the same for all tag sets, I do not think we

No, it isn't the same.

> need to have the queue_limits_start_update()/commit_update() within the tag set
> loop in __blk_mq_update_nr_hw_queues(). So something like this should be
> enough for an initial fix, no?
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 8ac19d4ae3c0..ac71e9cee25b 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -4986,6 +4986,7 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
>  		int nr_hw_queues)
>  {
>  	struct request_queue *q;
> +	struct queue_limits lim;
>  	LIST_HEAD(head);
>  	int prev_nr_hw_queues = set->nr_hw_queues;
>  	int i;
> @@ -4999,8 +5000,10 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
>  	if (set->nr_maps == 1 && nr_hw_queues == set->nr_hw_queues)
>  		return;
>
> +	lim = queue_limits_start_update(q);
>  	list_for_each_entry(q, &set->tag_list, tag_set_list)
>  		blk_mq_freeze_queue(q);

It could be worse: queue_limits_start_update() takes limits_lock, which
is entangled with locks in lots of other subsystems (debugfs, sysfs
dir, ...), so holding it across the queue freeze above may introduce
new deadlock risk.

Thanks,
Ming
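
For reference, a minimal sketch of the direction Christoph suggests:
doing the REQ_POLLED check in blk_mq_submit_bio() only after
bio_queue_enter() has taken a q_usage_counter reference, so the tag_set
that bdev_can_poll() dereferences cannot be freed underneath us. This is
illustrative, not the committed fix; the error path shown
(BLK_STS_NOTSUPP + bio_endio()) is assumed to mirror the existing
failure handling in submit_bio_noacct().

static void blk_mq_submit_bio(struct bio *bio)
{
	struct request_queue *q = bdev_get_queue(bio->bi_bdev);

	/* q_usage_counter is held once bio_queue_enter() succeeds. */
	if (unlikely(bio_queue_enter(bio)))
		return;

	/*
	 * Safe here: the usage counter pins the queue, so the tag_set
	 * dereferenced by bdev_can_poll() cannot go away under us.
	 */
	if ((bio->bi_opf & REQ_POLLED) && !bdev_can_poll(bio->bi_bdev)) {
		bio->bi_status = BLK_STS_NOTSUPP;
		bio_endio(bio);
		blk_queue_exit(q);
		return;
	}

	/* ... normal request allocation and submission ... */
}

As noted above, this trades an early rejection in submit_bio_noacct()
for a later one after the queue has been entered, but the outcome for
callers is otherwise the same.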