On 28/11/2019 02:02, Ming Lei wrote:
On Thu, Nov 28, 2019 at 09:09:13AM +0800, chenxiang (M) wrote:
Hi,
On 2019/10/14 9:50, Ming Lei wrote:
Hi,
Thomas mentioned:
"
That was the constraint of managed interrupts from the very beginning:
The driver/subsystem has to quiesce the interrupt line and the associated
queue _before_ it gets shutdown in CPU unplug and not fiddle with it
until it's restarted by the core when the CPU is plugged in again.
"
But neither the drivers nor blk-mq do that before a hctx becomes dead (all
CPUs for that hctx are offline), and even worse, blk-mq still tries
to run the hw queue after the hctx is dead, see blk_mq_hctx_notify_dead().
This patchset tries to address the issue in two stages (a rough sketch of both follows the list):
1) add a new cpuhp state, CPUHP_AP_BLK_MQ_ONLINE
- mark the hctx as internal stopped, and drain all in-flight requests
if the hctx is about to become dead.
2) re-submit IO in the CPUHP_BLK_MQ_DEAD state after the hctx becomes dead
- steal the bios from the request and resubmit them via generic_make_request(),
so that this IO is then mapped to another live hctx for dispatch
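For illustration only, here is a minimal sketch of what the two stages could look like inside blk-mq; it is not the actual patches. The hctx->cpuhp_online hlist node, the BLK_MQ_S_INTERNAL_STOPPED flag, the "block/mq:online" state name and blk_mq_hctx_has_requests() are assumed/placeholder names, and the real in-flight accounting and request cleanup are more involved than shown:

/*
 * Sketch against blk-mq internals (would live in block/blk-mq.c).
 *
 * Stage 1: teardown callback for the new CPUHP_AP_BLK_MQ_ONLINE state,
 * registered e.g. via
 *	cpuhp_setup_state_multi(CPUHP_AP_BLK_MQ_ONLINE, "block/mq:online",
 *				NULL, blk_mq_hctx_notify_offline);
 * It runs while the outgoing CPU is still online, so in-flight requests
 * on this hctx can still be completed via its managed interrupt.
 */
static int blk_mq_hctx_notify_offline(unsigned int cpu, struct hlist_node *node)
{
	struct blk_mq_hw_ctx *hctx =
		hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_online);

	/*
	 * Only act if @cpu is the last online CPU mapped to this hctx
	 * (the CPU being unplugged is still set in cpu_online_mask here).
	 */
	if (!cpumask_test_cpu(cpu, hctx->cpumask) ||
	    cpumask_first_and(hctx->cpumask, cpu_online_mask) != cpu ||
	    cpumask_next_and(cpu, hctx->cpumask, cpu_online_mask) < nr_cpu_ids)
		return 0;

	/* "Internal stopped": stop queueing new requests to this hctx. */
	set_bit(BLK_MQ_S_INTERNAL_STOPPED, &hctx->state);

	/*
	 * Drain: wait until all in-flight requests on this hctx complete.
	 * blk_mq_hctx_has_requests() stands in for the real accounting.
	 */
	while (blk_mq_hctx_has_requests(hctx))
		msleep(5);

	return 0;
}

/*
 * Stage 2: once the hctx is dead (CPUHP_BLK_MQ_DEAD), any request still
 * sitting in its sw/scheduler queues is dismantled; its bios are stolen
 * and re-entered through generic_make_request(), so the block layer maps
 * them to a live hctx.
 */
static void blk_mq_resubmit_rq(struct request *rq)
{
	struct bio_list list;
	struct bio *bio;

	bio_list_init(&list);
	blk_steal_bios(&list, rq);	/* detach the bios from the request */
	blk_mq_free_request(rq);	/* release the old tag/request */

	while ((bio = bio_list_pop(&list)))
		generic_make_request(bio);	/* gets a fresh rq on a live hctx */
}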
Please comment & review, thanks!
John, I didn't add your tested-by tag since V3 has some changes,
and I'd appreciate it if you could run your test on V3.
I tested this patchset with John's testcase. Apart from dump_stack() in
__blk_mq_run_hw_queue() sometimes triggering, which doesn't affect
functionality, it solves the CPU hotplug issue, so add tested-by for
this patchset:
Tested-by: Xiang Chen <chenxiang66@xxxxxxxxxxxxx>
Thanks for your test.
So I had to give up testing, as my board experienced SCSI timeouts
even without hotplugging or this patchset applied.
FWIW, I did test NVMe successfully though.
I plan to post a new version for the 5.6 cycle; there is still a
small race window related to requeue that needs to be covered.
thanks!
Thanks,
Ming