Re: [RFC 1/5] ns: Introduce CPU Namespace

On Sat, Oct 09, 2021 at 08:42:39PM +0530, Pratik R. Sampat wrote:
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 2d9ff40f4661..0413175e6d73 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -27,6 +27,8 @@
>  #include "pelt.h"
>  #include "smp.h"
>  
> +#include <linux/cpu_namespace.h>
> +
>  /*
>   * Export tracepoints that act as a bare tracehook (ie: have no trace event
>   * associated with them) to allow external modules to probe them.
> @@ -7559,6 +7561,7 @@ long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
>  {
>  	cpumask_var_t cpus_allowed, new_mask;
>  	struct task_struct *p;
> +	cpumask_t temp;
>  	int retval;
>  
>  	rcu_read_lock();

You're not supposed to put a cpumask_t on the stack; those things can be
huge.
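
For reference, the idiom this very function already uses for
cpus_allowed/new_mask is a cpumask_var_t plus an explicit allocation.
Completely untested sketch:

	cpumask_var_t temp;

	if (!alloc_cpumask_var(&temp, GFP_KERNEL))
		return -ENOMEM;	/* or set retval and take the error path */

	/* ... operate on temp via the cpumask_*() helpers ... */

	free_cpumask_var(temp);

With CONFIG_CPUMASK_OFFSTACK=y the mask lives off-stack; without it,
cpumask_var_t falls back to an on-stack array and the alloc/free become
no-ops that always succeed.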

> @@ -7682,8 +7686,9 @@ SYSCALL_DEFINE3(sched_setaffinity, pid_t, pid, unsigned int, len,
>  long sched_getaffinity(pid_t pid, struct cpumask *mask)
>  {
>  	struct task_struct *p;
> +	cpumask_var_t temp;
>  	unsigned long flags;
> -	int retval;
> +	int retval, cpu;
>  
>  	rcu_read_lock();
>  
> @@ -7698,6 +7703,13 @@ long sched_getaffinity(pid_t pid, struct cpumask *mask)
>  
>  	raw_spin_lock_irqsave(&p->pi_lock, flags);
>  	cpumask_and(mask, &p->cpus_mask, cpu_active_mask);
> +	cpumask_clear(temp);

temp is never allocated before it is used here. Are you sure you
actually tested this?
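
If a scratch mask is really needed here, the missing piece looks
roughly like the below (untested sketch; note GFP_KERNEL can sleep, so
the allocation has to happen before rcu_read_lock(), and
zalloc_cpumask_var() would make the cpumask_clear() redundant):

	cpumask_var_t temp;

	/* allocate (and zero) before rcu_read_lock(); GFP_KERNEL may sleep */
	if (!zalloc_cpumask_var(&temp, GFP_KERNEL))
		return -ENOMEM;

	rcu_read_lock();
	/* ... existing pid lookup and permission checks ... */

	raw_spin_lock_irqsave(&p->pi_lock, flags);
	cpumask_and(mask, &p->cpus_mask, cpu_active_mask);
	/* ... whatever translation this patch wants to do via temp ... */
	raw_spin_unlock_irqrestore(&p->pi_lock, flags);

	/* remember to free on every exit path */
	free_cpumask_var(temp);

sched_setaffinity() likewise does its allocations outside the RCU
read-side section.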