RE: [PATCH] blk-mq: update hctx->cpumask at cpu-hotplug(Internet mail)

Hi Ming,

> -----Original Message-----
> From: Ming Lei <ming.lei@xxxxxxxxxx>
> Sent: Tuesday, June 25, 2019 10:27 AM
> To: wenbinzeng(曾文斌) <wenbinzeng@xxxxxxxxxxx>
> Cc: Wenbin Zeng <wenbin.zeng@xxxxxxxxx>; axboe@xxxxxxxxx; keith.busch@xxxxxxxxx;
> hare@xxxxxxxx; osandov@xxxxxx; sagi@xxxxxxxxxxx; bvanassche@xxxxxxx;
> linux-block@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx
> Subject: Re: [PATCH] blk-mq: update hctx->cpumask at cpu-hotplug(Internet mail)
> 
> On Tue, Jun 25, 2019 at 02:14:46AM +0000, wenbinzeng(曾文斌) wrote:
> > Hi Ming,
> >
> > > -----Original Message-----
> > > From: Ming Lei <ming.lei@xxxxxxxxxx>
> > > Sent: Tuesday, June 25, 2019 9:55 AM
> > > To: Wenbin Zeng <wenbin.zeng@xxxxxxxxx>
> > > Cc: axboe@xxxxxxxxx; keith.busch@xxxxxxxxx; hare@xxxxxxxx;
> > > osandov@xxxxxx; sagi@xxxxxxxxxxx; bvanassche@xxxxxxx;
> > > linux-block@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx;
> > > wenbinzeng(曾文斌) <wenbinzeng@xxxxxxxxxxx>
> > > Subject: Re: [PATCH] blk-mq: update hctx->cpumask at
> > > cpu-hotplug(Internet mail)
> > >
> > > On Mon, Jun 24, 2019 at 11:24:07PM +0800, Wenbin Zeng wrote:
> > > > Currently hctx->cpumask is not updated when new cpus are hot-plugged.
> > > > Since kblockd_mod_delayed_work_on() is often called with
> > > > WORK_CPU_UNBOUND, the workqueue function blk_mq_run_work_fn may run
> > >
> > > There are only two cases in which WORK_CPU_UNBOUND is applied:
> > >
> > > 1) single hw queue
> > >
> > > 2) multiple hw queue, and all CPUs in this hctx become offline
> > >
> > > For 1), all CPUs can be found in hctx->cpumask.
> > >
> > > > on the newly-plugged cpus; consequently __blk_mq_run_hw_queue()
> > > > reports excessive "run queue from wrong CPU" messages because
> > > > cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask) returns false.
> > >
> > > The message means CPU hotplug race is triggered.
> > >
> > > Yeah, there is a big problem in blk_mq_hctx_notify_dead(): it is
> > > called after one CPU is dead, but still runs this hw queue to
> > > dispatch requests, even though all CPUs in this hctx might have
> > > become offline.
> > >
> > > We have some discussion before on this issue:
> > >
> > > https://lore.kernel.org/linux-block/CACVXFVN729SgFQGUgmu1iN7P6Mv5+puE78STz8hj9J5bS828Ng@xxxxxxxxxxxxxx/
> > >
> >
> > There is another scenario; you can reproduce it by hot-plugging cpus into
> > kvm guests via the qemu monitor (I believe virsh setvcpus --live can do
> > the same thing), for example:
> > (qemu) cpu-add 1
> > (qemu) cpu-add 2
> > (qemu) cpu-add 3
> >
> > In this scenario, cpus 1, 2 and 3 are not visible at boot, and
> > hctx->cpumask doesn't get synced when these cpus are added.
> 
> It is CPU cold-plug, and we are supposed to support it.
> 
> The newly added CPUs should be visible to the hctx, since we spread queues
> among all possible CPUs; please see blk_mq_map_queues() and
> irq_build_affinity_masks(), which is like static allocation of CPU resources.
> 
> Otherwise, you might be using an old kernel, or there is a bug somewhere.

It turns out that I was using an old kernel, version 4.14. I tested the latest version and it works well, as you said. Thank you very much.

> 
> >
> > > >
> > > > This patch adds a cpu-hotplug handler to blk-mq, updating
> > > > hctx->cpumask at cpu-hotplug.
> > >
> > > This way isn't correct; hctx->cpumask should be kept in sync with
> > > the queue mapping.
> >
> > Please advise what I should do to deal with the above situation. Thanks a lot.
> 
> As I shared in my last email, there is one approach discussed there, which seems doable.
> 
> Thanks,
> Ming
