On Fri, May 21, 2021 at 05:25:24PM +0100, Qais Yousef wrote:
> On 05/18/21 10:47, Will Deacon wrote:
> > Asymmetric systems may not offer the same level of userspace ISA support
> > across all CPUs, meaning that some applications cannot be executed by
> > some CPUs. As a concrete example, upcoming arm64 big.LITTLE designs do
> > not feature support for 32-bit applications on both clusters.
> >
> > Modify guarantee_online_cpus() to take task_cpu_possible_mask() into
> > account when trying to find a suitable set of online CPUs for a given
> > task. This will avoid passing an invalid mask to set_cpus_allowed_ptr()
> > during ->attach() and will subsequently allow the cpuset hierarchy to be
> > taken into account when forcefully overriding the affinity mask for a
> > task which requires migration to a compatible CPU.
> >
> > Cc: Li Zefan <lizefan@xxxxxxxxxx>
> > Cc: Tejun Heo <tj@xxxxxxxxxx>
> > Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
> > Signed-off-by: Will Deacon <will@xxxxxxxxxx>
> > ---
> >  include/linux/cpuset.h |  2 +-
> >  kernel/cgroup/cpuset.c | 33 +++++++++++++++++++--------------
> >  2 files changed, 20 insertions(+), 15 deletions(-)
> >
> > diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
> > index ed6ec677dd6b..414a8e694413 100644
> > --- a/include/linux/cpuset.h
> > +++ b/include/linux/cpuset.h
> > @@ -185,7 +185,7 @@ static inline void cpuset_read_unlock(void) { }
> >  static inline void cpuset_cpus_allowed(struct task_struct *p,
> >  				       struct cpumask *mask)
> >  {
> > -	cpumask_copy(mask, cpu_possible_mask);
> > +	cpumask_copy(mask, task_cpu_possible_mask(p));
> >  }
> >  
> >  static inline void cpuset_cpus_allowed_fallback(struct task_struct *p)
> > diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> > index 8c799260a4a2..b532a5333ff9 100644
> > --- a/kernel/cgroup/cpuset.c
> > +++ b/kernel/cgroup/cpuset.c
> > @@ -372,18 +372,26 @@ static inline bool is_in_v2_mode(void)
> >  }
> >  
> >  /*
> > - * Return in pmask the portion of a cpusets's cpus_allowed that
> > - * are online. If none are online, walk up the cpuset hierarchy
> > - * until we find one that does have some online cpus.
> > + * Return in pmask the portion of a task's cpusets's cpus_allowed that
> > + * are online and are capable of running the task. If none are found,
> > + * walk up the cpuset hierarchy until we find one that does have some
> > + * appropriate cpus.
> >   *
> >   * One way or another, we guarantee to return some non-empty subset
> >   * of cpu_online_mask.
> >   *
> >   * Call with callback_lock or cpuset_mutex held.
> >   */
> > -static void guarantee_online_cpus(struct cpuset *cs, struct cpumask *pmask)
> > +static void guarantee_online_cpus(struct task_struct *tsk,
> > +				  struct cpumask *pmask)
> >  {
> > -	while (!cpumask_intersects(cs->effective_cpus, cpu_online_mask)) {
> > +	struct cpuset *cs = task_cs(tsk);
> 
> task_cs() requires rcu_read_lock(), but I can't see how the lock is obtained
> from cpuset_attach() path, did I miss it? Running with lockdep should spill
> suspicious RCU usage warning.
> 
> Maybe it makes more sense to move the rcu_read_lock() inside the function now
> with task_cs()?

Well spotted! I'll add the rcu_read_[un]lock() calls to
guarantee_online_cpus().
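For the archive, a minimal sketch of where the lock/unlock pair would land
(untested; the hierarchy walk is shown in its pre-patch form against
cpu_online_mask, and the task_cpu_possible_mask() filtering added by this
patch is elided for brevity):

static void guarantee_online_cpus(struct task_struct *tsk,
				  struct cpumask *pmask)
{
	struct cpuset *cs;

	/* task_cs() dereferences RCU-protected cgroup state */
	rcu_read_lock();
	cs = task_cs(tsk);

	/* Walk up the hierarchy until a cpuset with online CPUs is found */
	while (!cpumask_intersects(cs->effective_cpus, cpu_online_mask))
		cs = parent_cs(cs);

	cpumask_and(pmask, cs->effective_cpus, cpu_online_mask);
	rcu_read_unlock();
}

Will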