On Fri, May 21, 2021 at 05:41:01PM +0100, Qais Yousef wrote:
> On 05/18/21 10:47, Will Deacon wrote:
> > In preparation for replaying user affinity requests using a saved mask,
> > split sched_setaffinity() up so that the initial task lookup and
> > security checks are only performed when the request is coming directly
> > from userspace.
> >
> > Signed-off-by: Will Deacon <will@xxxxxxxxxx>
> > ---
> >  kernel/sched/core.c | 110 +++++++++++++++++++++++---------------------
> >  1 file changed, 58 insertions(+), 52 deletions(-)
> >
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 9512623d5a60..808bbe669a6d 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -6788,9 +6788,61 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
> >  	return retval;
> >  }
> >
> > -long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
> > +static int
> > +__sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
> >  {
> > +	int retval;
> >  	cpumask_var_t cpus_allowed, new_mask;
> > +
> > +	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL))
> > +		return -ENOMEM;
> > +
> > +	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
> > +		return -ENOMEM;
>
> Shouldn't we free cpus_allowed first?

Oops, yes. Now fixed. Thanks,

Will