Re: [PATCH 5/9] blk-mq: don't set data->ctx and data->hctx in blk_mq_alloc_request_hctx

On Mon, May 18, 2020 at 10:32:22AM +0200, Thomas Gleixner wrote:
> Christoph Hellwig <hch@xxxxxx> writes:
> > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > index fcfce666457e2..540b5845cd1d3 100644
> > --- a/block/blk-mq.c
> > +++ b/block/blk-mq.c
> > @@ -386,6 +386,20 @@ static struct request *__blk_mq_alloc_request(struct blk_mq_alloc_data *data)
> >  	return rq;
> >  }
> >  
> > +static void __blk_mq_alloc_request_cb(void *__data)
> > +{
> > +	struct blk_mq_alloc_data *data = __data;
> > +
> > +	data->rq = __blk_mq_alloc_request(data);
> > +}
> > +
> > +static struct request *__blk_mq_alloc_request_on_cpumask(const cpumask_t *mask,
> > +		struct blk_mq_alloc_data *data)
> > +{
> > +	smp_call_function_any(mask, __blk_mq_alloc_request_cb, data, 1);
> > +	return data->rq;
> > +}
> 
> Is this absolutely necessary to be a smp function call? That's going to

I think it is.

The request is bound to the allocation CPU and to the hw queue (hctx) that is
mapped from that CPU.

If the request is allocated from a CPU that is going offline, we can't handle
that easily.

> be problematic vs. RT. Same applies to the explicit preempt_disable() in
> patch 7.

I think that is true, and the reason is the same, but the period is quite
short: preemption is only disabled while iterating a few bitmaps to find one
free bit.



thanks,
Ming
