Re: [PATCH 4/4] blk-mq: allocate tags in batches

On 12/30/19 8:53 PM, Jens Axboe wrote:
> On 12/30/19 7:18 PM, Ming Lei wrote:
>>> +static int blk_mq_get_tag_batch(struct blk_mq_alloc_data *data)
>>> +{
>>> +	struct blk_mq_tags *tags = blk_mq_tags_from_data(data);
>>> +	struct sbitmap_queue *bt = &tags->bitmap_tags;
>>> +	struct blk_mq_ctx *ctx = data->ctx;
>>> +	int tag, cpu;
>>> +
>>> +	if (!ctx)
>>> +		return -1;
>>> +
>>> +	preempt_disable();
>>> +
>>> +	/* bad luck if we got preempted coming in here, should be rare */
>>> +	cpu = smp_processor_id();
>>> +	if (unlikely(ctx->cpu != cpu)) {
>>> +		ctx = data->ctx = __blk_mq_get_ctx(data->q, cpu);
>>> +		data->hctx = blk_mq_map_queue(data->q, data->cmd_flags, ctx);
>>
>> When blk_mq_get_tag_batch() is called to get a driver tag, rq->mq_hctx
>> has already been bound to the request in blk_mq_rq_ctx_init(), so it
>> looks like we shouldn't change hctx here.
> 
> ->ctx is also NULL for that case, so it gets skipped. Probably needs a
> comment...
> 
> I'll need to check if all cases are covered, the batched tags should be
> exclusive to the non-reserved, regular tags. Or just the scheduler tags,
> if a scheduler is attached. Not sure it makes a lot of sense to handle
> the scheduler case as well (and have two sets of batches, driver tags
> and scheduler tags), but if it does, we could extend it for that and
> just bail if we're not on the right ctx anymore. In all honesty, I wrote
> the above code without ever checking whether that happens. It most
> certainly could, but I'd consider it a very low risk. So it might be
> better to simply return -1 for that case and ignore it.

Something like this should make it clearer, and it gets rid of that rare
special case of having been preempted onto a different CPU since the ctx
was assigned.
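
Roughly, the simpler version looks like this - just bail if we're no
longer on the ctx CPU instead of remapping. Just a sketch, the actual
change is in the commit linked below:

static int blk_mq_get_tag_batch(struct blk_mq_alloc_data *data)
{
	struct blk_mq_ctx *ctx = data->ctx;
	int ret = -1;

	if (!ctx)
		return -1;

	preempt_disable();

	/*
	 * If we got preempted onto another CPU since the ctx was
	 * assigned, don't bother remapping. Just return -1 and let
	 * the caller fall back to the normal tag allocation path.
	 */
	if (unlikely(ctx->cpu != smp_processor_id()))
		goto out;

	/* ... pull a tag from the per-ctx batch here ... */
out:
	preempt_enable();
	return ret;
}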

We could also just not care; I added the lock grab later on since we
need it for flushing out batches. But we don't really need it on the
allocation side if we just have preemption disabled, as we don't
allocate requests from irq context. The lock is cheap enough since we're
in that cacheline anyway, and it should never be contended.

https://git.kernel.dk/cgit/linux-block/commit/?h=for-5.6/block-test&id=d33686a37ca1e902271588b40e80d4b5f82640ce
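
To illustrate the locking point: with preemption disabled, and since we
never allocate from irq context, the allocation side can poke at the
per-ctx batch without taking ctx->lock, while the flush side still grabs
it. Rough sketch only - the tag_cache fields and the flush helper are
made up:

	/* alloc side: no lock, preemption off pins us to the ctx CPU */
	preempt_disable();
	if (ctx->tag_cache_nr) {
		tag = ctx->tag_cache[--ctx->tag_cache_nr];
		preempt_enable();
		return tag;
	}
	preempt_enable();

	/* flush side: may run off the ctx CPU, so it takes the lock */
	spin_lock(&ctx->lock);
	blk_mq_flush_tag_batch(ctx);
	spin_unlock(&ctx->lock);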

-- 
Jens Axboe



