Re: [PATCH V5 0/6] blk-mq: improvement CPU hotplug

On Fri, Jan 31, 2020 at 6:24 PM John Garry <john.garry@xxxxxxxxxx> wrote:
>
> >> [  141.976109] Call trace:
> >> [  141.978550]  __switch_to+0xbc/0x218
> >> [  141.982029]  blk_mq_run_work_fn+0x1c/0x28
> >> [  141.986027]  process_one_work+0x1e0/0x358
> >> [  141.990025]  worker_thread+0x40/0x488
> >> [  141.993678]  kthread+0x118/0x120
> >> [  141.996897]  ret_from_fork+0x10/0x18
> >
> > Hi John,
> >
> > Thanks for your test!
> >
>
> Hi Ming,
>
> > Could you test the following patchset? Only the last patch is changed:
> >
> > https://github.com/ming1/linux/commits/my_for_5.6_block
>
> For SCSI testing, I will ask my colleague Xiang Chen to test when he
> returns to work. I did not see this issue in my SCSI testing of your
> original v5, but I was only using 1x SAS disk as opposed to maybe 20x.
>
> BTW, did you test NVMe? For some reason I could not trigger a scenario
> where we're draining the outstanding requests for a queue which is being
> deactivated - I mean, the queues were always already quiesced.

I ran the CPU hotplug test on both NVMe and SCSI in KVM, and fio ran
as expected.

NVMe often uses a 1:1 CPU-to-hw-queue mapping, so it might be a bit
difficult to trigger draining of in-flight IOs.
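
Just to picture the two mapping shapes (a trivial user-space sketch, not
kernel code; the CPU and hw queue counts below are made up for
illustration): with fewer hw queues than CPUs, several CPUs share each
hctx, while a 1:1 mapping gives each hctx exactly one CPU.

#include <stdio.h>

/*
 * Round-robin CPU -> hw queue assignment, similar in spirit to the
 * default blk-mq mapping, purely for illustration.
 */
static void show_map(const char *name, int nr_cpus, int nr_hw_queues)
{
	printf("%s (%d CPUs, %d hw queues):\n", name, nr_cpus, nr_hw_queues);
	for (int cpu = 0; cpu < nr_cpus; cpu++)
		printf("  cpu%d -> hctx%d\n", cpu, cpu % nr_hw_queues);
}

int main(void)
{
	/*
	 * Shared-queue case: several CPUs end up in each hctx's cpumask,
	 * so one of them going offline still leaves online CPUs on that hctx.
	 */
	show_map("SCSI-like shared mapping", 4, 2);

	/* 1:1 case (typical for NVMe): each hctx has exactly one CPU. */
	show_map("NVMe-like 1:1 mapping", 4, 4);

	return 0;
}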

Thanks,
Ming Lei


