Changing the sets of housekeeping and isolated CPUs currently requires
a reboot. The goal of this work is to change CPU isolation dynamically,
without a reboot.

This patch is based on the parent patch
"cgroup/cpuset: Exclude isolated CPUs from housekeeping CPU masks"
https://lore.kernel.org/lkml/20240821142312.236970-3-longman@xxxxxxxxxx/

That patch updates the isolation cpumasks, but some subsystems may keep
using outdated housekeeping CPU masks. To prevent the use of newly
isolated CPUs, it is essential to explicitly propagate updates to the
housekeeping masks across all subsystems that depend on them.

This patch is not intended to be merged and disrupt the kernel. It is
still a proof of concept for research purposes.

The questions are:
- Is this the right direction, or should I explore an alternative
  approach?
- What factors need to be considered?
- Any suggestions or advice?
- Have similar attempts been made before?

Update the affinity of kthreadd and trigger the recalculation of
kthread affinities using kthreads_online_cpu(). The argument passed to
kthreads_online_cpu() is irrelevant: the function reassigns kthread
affinities based on their preferred_affinity and the updated
housekeeping_cpumask(HK_TYPE_KTHREAD).

Currently, only RCU uses kthread_affine_preferred(). I dare to try
calling kthread_affine_preferred() from kthread_run() to set
preferred_affinity to cpu_possible_mask for kthreads without a specific
affinity, enabling their management through kthreads_online_cpu(). Any
objections?

For details about kthread affinity patterns, please see:
https://lore.kernel.org/lkml/20241211154035.75565-16-frederic@xxxxxxxxxx/

Signed-off-by: Costa Shulyupin <costa.shul@xxxxxxxxxx>
---
 include/linux/kthread.h | 5 ++++-
 kernel/cgroup/cpuset.c  | 1 +
 kernel/kthread.c        | 6 ++++++
 3 files changed, 11 insertions(+), 1 deletion(-)

diff --git a/include/linux/kthread.h b/include/linux/kthread.h
index 8d27403888ce9..b43c5aeb2cfd7 100644
--- a/include/linux/kthread.h
+++ b/include/linux/kthread.h
@@ -52,8 +52,10 @@ bool kthread_is_per_cpu(struct task_struct *k);
 ({									\
 	struct task_struct *__k						\
 		= kthread_create(threadfn, data, namefmt, ## __VA_ARGS__); \
-	if (!IS_ERR(__k))						\
+	if (!IS_ERR(__k)) {						\
+		kthread_affine_preferred(__k, cpu_possible_mask);	\
 		wake_up_process(__k);					\
+	}								\
 	__k;								\
 })
 
@@ -270,4 +272,5 @@ struct cgroup_subsys_state *kthread_blkcg(void);
 #else
 static inline void kthread_associate_blkcg(struct cgroup_subsys_state *css) { }
 #endif
+void kthreads_update_affinity(void);
 #endif /* _LINUX_KTHREAD_H */
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 65658a5c2ac81..7d71acc7f46b6 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1355,6 +1355,7 @@ static void update_isolation_cpumasks(bool isolcpus_updated)
 	trl();
 	ret = housekeeping_exlude_isolcpus(isolated_cpus, HOUSEKEEPING_FLAGS);
 	WARN_ON_ONCE((ret < 0) && (ret != -EOPNOTSUPP));
+	kthreads_update_affinity();
 }
 
 /**
diff --git a/kernel/kthread.c b/kernel/kthread.c
index c4574c2d37e0d..2488cdf8aec17 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -1763,3 +1763,9 @@ struct cgroup_subsys_state *kthread_blkcg(void)
 	return NULL;
 }
 #endif
+
+void kthreads_update_affinity(void)
+{
+	set_cpus_allowed_ptr(kthreadd_task, housekeeping_cpumask(HK_TYPE_KTHREAD));
+	kthreads_online_cpu(-1);
+}
-- 
2.47.0
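
P.S. For reviewers, a sketch of what the recalculation amounts to for a
single kthread once the housekeeping masks have shrunk. This is a
simplified illustration only, not code from this patch: the helper name
recompute_kthread_affinity() and its fallback behavior are hypothetical,
while the cpumask and housekeeping APIs used are the existing kernel
ones. The real per-kthread recalculation lives in kthreads_online_cpu().

/*
 * Illustrative sketch (hypothetical helper, not part of this patch):
 * reapply a kthread's affinity as the intersection of its preferred
 * mask with the updated HK_TYPE_KTHREAD housekeeping mask.
 */
#include <linux/cpumask.h>
#include <linux/sched.h>
#include <linux/sched/isolation.h>
#include <linux/slab.h>

static void recompute_kthread_affinity(struct task_struct *task,
				       const struct cpumask *preferred)
{
	cpumask_var_t effective;

	if (!alloc_cpumask_var(&effective, GFP_KERNEL))
		return;

	/* Effective affinity = preferred CPUs & housekeeping CPUs. */
	cpumask_and(effective, preferred,
		    housekeeping_cpumask(HK_TYPE_KTHREAD));

	/*
	 * If the intersection is empty (all preferred CPUs were just
	 * isolated), fall back to the housekeeping mask itself.
	 */
	if (cpumask_empty(effective))
		cpumask_copy(effective,
			     housekeeping_cpumask(HK_TYPE_KTHREAD));

	set_cpus_allowed_ptr(task, effective);
	free_cpumask_var(effective);
}

With kthread_affine_preferred() called from kthread_run() as above,
kthreads without a specific affinity get cpu_possible_mask as their
preferred mask, so for them this intersection degenerates to exactly
the updated housekeeping mask.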