Re: [PATCH V2 for-4.21 2/2] blk-mq: alloc q->queue_ctx as normal array

On Mon, Nov 19, 2018 at 11:17:49AM +0100, Greg Kroah-Hartman wrote:
> On Mon, Nov 19, 2018 at 10:04:27AM +0800, Ming Lei wrote:
> > On Sat, Nov 17, 2018 at 11:03:42AM +0100, Greg Kroah-Hartman wrote:
> > > On Sat, Nov 17, 2018 at 10:34:18AM +0800, Ming Lei wrote:
> > > > On Fri, Nov 16, 2018 at 06:06:23AM -0800, Greg Kroah-Hartman wrote:
> > > > > On Fri, Nov 16, 2018 at 07:23:11PM +0800, Ming Lei wrote:
> > > > > > Now q->queue_ctx is just a read-mostly table for querying the
> > > > > > 'blk_mq_ctx' instance for a given cpu index, so it isn't necessary
> > > > > > to allocate it as a percpu variable. A simple array may be
> > > > > > more efficient.
> > > > > 
> > > > > "may be", have you run benchmarks to be sure?  If so, can you add the
> > > > > results of them to this changelog?  If there is no measurable
> > > > > difference, then why make this change at all?
> > > > 
> > > > __blk_mq_get_ctx() is used in the fast path; which one do you think
> > > > is more efficient?
> > > > 
> > > > - *per_cpu_ptr(q->queue_ctx, cpu);
> > > > 
> > > > - q->queue_ctx[cpu]
> > > 
> > > You need to actually test to see which one is faster, you might be
> > > surprised :)
> > > 
> > > In other words, do not just guess.
> > 
> > No performance difference is observed with this patchset when I
> > run the following fio test on null_blk (modprobe null_blk) in my VM:
> > 
> > fio --direct=1 --size=128G --bsrange=4k-4k --runtime=40 --numjobs=32 \
> >   --ioengine=libaio --iodepth=64 --group_reporting=1 --filename=/dev/nullb0 \
> >   --name=null_blk-ttest-randread --rw=randread
> > 
> > Running tests is important, but IMO it is more important to understand
> > that the idea behind the change is correct, or that the approach can be
> > proven correct.
> > 
> > Given that the number of test cases grows exponentially with the related
> > factors and settings to be covered, we obviously can't run all of them.
> 
> And what happens when you start to scale the number of queues and cpus
> in the system?

It is related to the number of CPUs.

> Do both options work the same?

This patch may introduce one extra memory read when looking up a
'blk_mq_ctx' instance.
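
To make that concrete, the lookup in __blk_mq_get_ctx() roughly becomes
the following (a simplified sketch, not the exact code in the patch):

	/* percpu variant: per_cpu_ptr() just adds this CPU's percpu
	 * offset to the base stored in q->queue_ctx */
	ctx = per_cpu_ptr(q->queue_ctx, cpu);

	/* array variant: after reading q->queue_ctx, the ctx pointer
	 * itself still has to be loaded from q->queue_ctx[cpu] --
	 * that is the extra memory read mentioned above */
	ctx = q->queue_ctx[cpu];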

> Why did the original code have per-cpu variables?

Each instance of 'struct blk_mq_ctx' is per-CPU data by nature, so it was
natural to allocate them as one percpu variable.

However, there is a kobject lifetime issue if we allocate all 'blk_mq_ctx'
instances as one single percpu variable, because we can't release just one
part (for one CPU) of that single percpu allocation.
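
To illustrate the problem (a sketch only; the function and field names here
are my shorthand, not necessarily the exact ones in the tree): each ctx
embeds a kobject, and its release callback is expected to free that ctx,
which can't be done for a slice of a shared percpu chunk:

	static void blk_mq_ctx_release(struct kobject *kobj)
	{
		struct blk_mq_ctx *ctx =
			container_of(kobj, struct blk_mq_ctx, kobj);

		/* with one shared percpu allocation there is nothing
		 * safe to free for just this CPU; with a per-CPU
		 * kzalloc'd instance we can simply do: */
		kfree(ctx);
	}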

So this patch converts the percpu variable into a read-mostly look-up table
plus one 'blk_mq_ctx' instance per CPU, with each instance allocated from
the local NUMA node of its CPU.
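
A rough sketch of that allocation scheme (error handling omitted;
'set->numa_node' as the home node of the table itself is an assumption of
this sketch):

	int cpu;

	/* read-mostly table: one pointer slot per possible CPU */
	q->queue_ctx = kcalloc_node(nr_cpu_ids, sizeof(struct blk_mq_ctx *),
				    GFP_KERNEL, set->numa_node);

	/* each instance comes from the local node of its CPU */
	for_each_possible_cpu(cpu)
		q->queue_ctx[cpu] = kzalloc_node(sizeof(struct blk_mq_ctx),
						 GFP_KERNEL, cpu_to_node(cpu));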

Another approach is to keep the percpu allocation and introduce a reference
counter that tracks how many active 'ctx' instances there are, freeing the
percpu variable only when the refcount drops to zero. That way we could
save the extra memory read in __blk_mq_get_ctx().
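
Something along these lines (names are illustrative, not from an actual
patch):

	struct blk_mq_ctxs {
		refcount_t			ref;
		struct blk_mq_ctx __percpu	*queue_ctx;
	};

	static void blk_mq_ctxs_put(struct blk_mq_ctxs *ctxs)
	{
		/* the last ctx release drops the final ref, so only
		 * then is the whole percpu area freed */
		if (refcount_dec_and_test(&ctxs->ref)) {
			free_percpu(ctxs->queue_ctx);
			kfree(ctxs);
		}
	}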


Thanks,
Ming


