On 10/13/21 11:17 PM, Christoph Hellwig wrote:
> On Tue, Oct 12, 2021 at 11:15:25AM -0600, Jens Axboe wrote:
>>
>> +unsigned long blk_mq_get_tags(struct blk_mq_alloc_data *data, int nr_tags,
>> +			      unsigned int *offset)
>> +{
>> +	struct blk_mq_tags *tags = blk_mq_tags_from_data(data);
>> +	struct sbitmap_queue *bt = &tags->bitmap_tags;
>> +	unsigned long ret;
>> +
>> +	if (data->shallow_depth ||data->flags & BLK_MQ_REQ_RESERVED ||
>
> Missing whitespace after the first ||.

Fixed.

>> +	if (data->nr_tags > 1) {
>> +		unsigned long tags;
>> +		unsigned int tag_offset;
>> +		int i, nr = 0;
>> +
>> +		tags = blk_mq_get_tags(data, data->nr_tags, &tag_offset);
>> +		if (unlikely(!tags)) {
>> +			data->nr_tags = 1;
>> +			goto retry;
>
> Unless I'm missing something we don't want the retry case that maps
> the contexts again and calls blk_mq_tag_busy again.

Yes and no, we could've been preempted and we're now on a different CPU.
It's not a huge deal or a likely outcome, and it'll only hurt efficiency
a bit if that's the case. Can be skipped.

>> +		}
>> +		for (i = 0; tags; i++) {
>> +			if (!(tags & (1UL << i)))
>> +				continue;
>> +			tag = tag_offset + i;
>> +			tags &= ~(1UL << i);
>> +			rq = blk_mq_rq_ctx_init(data, tag, alloc_time_ns);
>> +			rq->rq_next = *data->cached_rq;
>> +			*data->cached_rq = rq;
>> +		}
>> +		data->nr_tags -= nr;
>
> And keeping all this batch logic in a helper (even if inlined) would
> simplify the code a lot. Something like this untested patch:

Yeah, I did go back and forth on that, it's small enough that I didn't
care too much about it, but an inline helper is fine too. I can fold
this in.

-- 
Jens Axboe