Dexuan reports that he's seeing spikes of very heavy CPU utilization when
running 24 disks and using the 'none' scheduler. This happens off the
flush path, because SCSI requires the queue to be restarted async, and
hence we're hammering on mod_delayed_work_on() to ensure that the work
item gets run appropriately.

What we care about here is that the queue is run, and we don't need to
repeatedly re-arm the timer associated with the delayed work item. If we
check whether the work item is pending upfront, then we don't need to do
anything else. This is safe as the work pending bit is cleared before a
work item is started.

The only potential caveat here is if we have callers with wildly
different timeouts specified. That's generally not the case, so I don't
think we need to care about that case.

Reported-by: Dexuan Cui <decui@xxxxxxxxxxxxx>
Link: https://lore.kernel.org/linux-block/BYAPR21MB1270C598ED214C0490F47400BF719@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/
Signed-off-by: Jens Axboe <axboe@xxxxxxxxx>
---
diff --git a/block/blk-core.c b/block/blk-core.c
index 1378d084c770..4584fe709c15 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1484,7 +1484,16 @@ EXPORT_SYMBOL(kblockd_schedule_work);
 
 int kblockd_mod_delayed_work_on(int cpu, struct delayed_work *dwork,
 				unsigned long delay)
 {
-	return mod_delayed_work_on(cpu, kblockd_workqueue, dwork, delay);
+	/*
+	 * Avoid hammering on work addition if the work item is already
+	 * pending. This is safe as the work pending state is cleared before
+	 * the work item is started, so if we see it set, then we know that
+	 * whatever was previously queued on the block side will get run by
+	 * an existing pending work item.
+	 */
+	if (!work_pending(&dwork->work))
+		return mod_delayed_work_on(cpu, kblockd_workqueue, dwork, delay);
+	return true;
 }
 EXPORT_SYMBOL(kblockd_mod_delayed_work_on);

-- 
Jens Axboe