On 05 Oct 2022 12:13:17 +0100 Valentin Schneider <vschneid@xxxxxxxxxx>
> On 05/10/22 09:08, Hillf Danton wrote:
>> On 4 Oct 2022 16:05:21 +0100 Valentin Schneider <vschneid@xxxxxxxxxx>
>>> It has been reported that isolated CPUs can suffer from interference due to
>>> per-CPU kworkers waking up just to die.
>>>
>>> A surge of workqueue activity during initial setup of a latency-sensitive
>>> application (refresh_vm_stats() being one of the culprits) can cause extra
>>> per-CPU kworkers to be spawned. Then, said latency-sensitive task can be
>>> running merrily on an isolated CPU only to be interrupted sometime later by
>>> a kworker marked for death (cf. IDLE_WORKER_TIMEOUT, 5 minutes after last
>>> kworker activity).
>>>
>> Is the tick stopped on the isolated CPU? If the tick can hit it, then it can
>> accept more than an exiting kworker.
>
> From what I've seen in the scenarios where that happens, yes. The
> pool->idle_timer gets queued from an isolated CPU and ends up on a
> housekeeping CPU (cf. get_target_base()).

Yes, you are right.

> With nohz_full on the cmdline, wq_unbound_cpumask already excludes isolated
> CPUs, but that doesn't apply to per-CPU kworkers. Or did you mean some other
> mechanism?

Bound kworkers can then be destroyed by the idle timer running on a
housekeeping CPU. The diff below is only for thought.

--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1985,6 +1985,7 @@ fail:
 static void destroy_worker(struct worker *worker)
 {
 	struct worker_pool *pool = worker->pool;
+	int cpu = smp_processor_id();
 
 	lockdep_assert_held(&pool->lock);
 
@@ -1999,6 +2000,12 @@ static void destroy_worker(struct worker
 
 	list_del_init(&worker->entry);
 	worker->flags |= WORKER_DIE;
+
+	if (!(pool->flags & POOL_DISASSOCIATED) && pool->cpu != cpu) {
+		/* send the worker off to die on a housekeeping CPU */
+		cpumask_clear(&worker->task->cpus_mask);
+		cpumask_set_cpu(cpu, &worker->task->cpus_mask);
+	}
 	wake_up_process(worker->task);
 }
 
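For reference, the timer placement mentioned above works roughly as
below. This is a simplified sketch paraphrasing get_target_base() in
kernel/time/timer.c and get_nohz_timer_target() in kernel/sched/core.c
(circa v6.0), not the verbatim kernel code.

/*
 * Simplified sketch: how an unpinned timer armed on a nohz_full or
 * isolated CPU ends up on a housekeeping CPU's timer base.
 */
static inline struct timer_base *
get_target_base(struct timer_base *base, unsigned tflags)
{
	/*
	 * With timer migration enabled, an unpinned timer is enqueued
	 * on the base picked by get_nohz_timer_target(), which prefers
	 * a non-idle housekeeping CPU (HK_TYPE_TIMER) over an isolated
	 * or idle local CPU.
	 */
	if (static_branch_likely(&timers_migration_enabled) &&
	    !(tflags & TIMER_PINNED))
		return get_timer_cpu_base(tflags, get_nohz_timer_target());

	/* pinned timer (or migration disabled): stay on the local CPU */
	return get_timer_base(tflags, raw_smp_processor_id());
}

Since pool->idle_timer is not TIMER_PINNED, its callback
idle_worker_timeout() - and hence destroy_worker() - runs on the
housekeeping CPU the timer was enqueued on. That is what the diff above
relies on: smp_processor_id() in destroy_worker() names a housekeeping
CPU, and the dying worker is re-affined there before it is woken up.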