Patch "sched/fair: Fix shift-out-of-bounds in load_balance()" has been added to the 5.10-stable tree

This is a note to let you know that I've just added the patch titled

    sched/fair: Fix shift-out-of-bounds in load_balance()

to the 5.10-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     sched-fair-fix-shift-out-of-bounds-in-load_balance.patch
and it can be found in the queue-5.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit fdeb49f1b35cf30875fd8d2a8ec2089881fb2429
Author: Valentin Schneider <valentin.schneider@xxxxxxx>
Date:   Thu Feb 25 17:56:56 2021 +0000

    sched/fair: Fix shift-out-of-bounds in load_balance()
    
    [ Upstream commit 39a2a6eb5c9b66ea7c8055026303b3aa681b49a5 ]
    
    Syzbot reported a handful of occurrences where an sd->nr_balance_failed can
    grow to much higher values than one would expect.
    
    A successful load_balance() resets it to 0; a failed one increments
    it. Once it gets to sd->cache_nice_tries + 3, this *should* trigger an
    active balance, which will either set it to sd->cache_nice_tries+1 or reset
    it to 0. However, in case the to-be-active-balanced task is not allowed to
    run on env->dst_cpu, then the increment is done without any further
    modification.
    
    This could then be repeated ad nauseam, and would explain the absurdly high
    values reported by syzbot (86, 149). VincentG noted there is value in
    letting sd->nr_balance_failed grow, so the shift itself should be
    fixed. That means preventing:
    
      """
      If the value of the right operand is negative or is greater than or equal
      to the width of the promoted left operand, the behavior is undefined.
      """
    
    Thus we need to cap the shift exponent to
      BITS_PER_TYPE(typeof(lefthand)) - 1.
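
    As a rough userspace illustration of that cap (not the kernel helper
    itself, and with SHR_BOUND as a hypothetical stand-in for the
    shr_bound() macro added by this patch), the idea looks like this:

      /*
       * Shifting an N-bit value by N or more bits is UB, so clamp the
       * exponent to N - 1; an oversized exponent then saturates the
       * result to 0 instead of invoking undefined behaviour.
       */
      #include <stdint.h>
      #include <stdio.h>

      #define BITS_PER_TYPE(t)  (sizeof(t) * 8)
      #define SHR_BOUND(val, shift)                                     \
              ((val) >> ((shift) < BITS_PER_TYPE(typeof(val)) - 1 ?     \
                         (shift) : BITS_PER_TYPE(typeof(val)) - 1))

      int main(void)
      {
              uint64_t load = 1024;

              /* Sane exponent: same result as a plain shift (prints 128) */
              printf("%llu\n", (unsigned long long)SHR_BOUND(load, 3));
              /* Absurd exponent such as syzbot's 86: clamped to 63, prints 0 */
              printf("%llu\n", (unsigned long long)SHR_BOUND(load, 86));
              return 0;
      }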
    
    I had a look around for other similar cases via coccinelle:
    
      @expr@
      position pos;
      expression E1;
      expression E2;
      @@
      (
      E1 >> E2@pos
      |
      E1 << E2@pos
      )
    
      @cst depends on expr@
      position pos;
      expression expr.E1;
      constant cst;
      @@
      (
      E1 >> cst@pos
      |
      E1 << cst@pos
      )
    
      @script:python depends on !cst@
      pos << expr.pos;
      exp << expr.E2;
      @@
      # Dirty hack to ignore constexpr
      if exp.upper() != exp:
         coccilib.report.print_report(pos[0], "Possible UB shift here")
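
    (To reproduce that search, the script above can be saved to a file,
    e.g. shift.cocci, and run over the tree with something along the
    lines of "spatch --sp-file shift.cocci --dir kernel/sched/"; the
    exact invocation depends on the installed Coccinelle version.)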
    
    The only other match in kernel/sched is rq_clock_thermal() which employs
    sched_thermal_decay_shift, and that exponent is already capped to 10, so
    that one is fine.
    
    Fixes: 5a7f55590467 ("sched/fair: Relax constraint on task's load during load balance")
    Reported-by: syzbot+d7581744d5fd27c9fbe1@xxxxxxxxxxxxxxxxxxxxxxxxx
    Signed-off-by: Valentin Schneider <valentin.schneider@xxxxxxx>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
    Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
    Link: http://lore.kernel.org/r/000000000000ffac1205b9a2112f@xxxxxxxxxx
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a0239649c741..c80d1a039d19 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7735,8 +7735,7 @@ static int detach_tasks(struct lb_env *env)
 			 * scheduler fails to find a good waiting task to
 			 * migrate.
 			 */
-
-			if ((load >> env->sd->nr_balance_failed) > env->imbalance)
+			if (shr_bound(load, env->sd->nr_balance_failed) > env->imbalance)
 				goto next;
 
 			env->imbalance -= load;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index fac1b121d113..fdebfcbdfca9 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -205,6 +205,13 @@ static inline void update_avg(u64 *avg, u64 sample)
 	*avg += diff / 8;
 }
 
+/*
+ * Shifting a value by an exponent greater *or equal* to the size of said value
+ * is UB; cap at size-1.
+ */
+#define shr_bound(val, shift)							\
+	(val >> min_t(typeof(shift), shift, BITS_PER_TYPE(typeof(val)) - 1))
+
 /*
  * !! For sched_setattr_nocheck() (kernel) only !!
  *


