Re: [patch 08/13] sched: Cleanup PREEMPT_COUNT leftovers

On 14/09/20 21:42, Thomas Gleixner wrote:
CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be
removed. Clean up the leftovers before doing so.

Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Juri Lelli <juri.lelli@xxxxxxxxxx>
Cc: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
Cc: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>
Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
Cc: Ben Segall <bsegall@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Cc: Daniel Bristot de Oliveira <bristot@xxxxxxxxxx>

Small nit below:

Reviewed-by: Valentin Schneider <valentin.schneider@xxxxxxx>

---
 kernel/sched/core.c |    6 +-----
 lib/Kconfig.debug   |    1 -
 2 files changed, 1 insertion(+), 6 deletions(-)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3706,8 +3706,7 @@ asmlinkage __visible void schedule_tail(
       * finish_task_switch() for details.
       *
       * finish_task_switch() will drop rq->lock() and lower preempt_count
-	 * and the preempt_enable() will end up enabling preemption (on
-	 * PREEMPT_COUNT kernels).

I suppose this wanted to be s/PREEMPT_COUNT/PREEMPT/ in the first place,
and that distinction ought to still be relevant.

+	 * and the preempt_enable() will end up enabling preemption.
       */

      rq = finish_task_switch(prev);
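For reference, a rough paraphrase of schedule_tail() around that comment
as kernel/sched/core.c looked at the time (simplified and from memory,
not the exact upstream source) shows why the parenthetical matters:
preempt_count is now maintained unconditionally, but the preempt_enable()
below only leads to an immediate reschedule on CONFIG_PREEMPT kernels:

/* Rough paraphrase of schedule_tail(); details may differ from upstream. */
asmlinkage __visible void schedule_tail(struct task_struct *prev)
	__releases(rq->lock)
{
	struct rq *rq;

	/*
	 * finish_task_switch() drops rq->lock and lowers preempt_count;
	 * the preempt_enable() below then re-enables preemption, and an
	 * immediate reschedule can only happen on CONFIG_PREEMPT kernels.
	 */
	rq = finish_task_switch(prev);
	balance_callback(rq);
	preempt_enable();

	if (current->set_child_tid)
		put_user(task_pid_vnr(current), current->set_child_tid);
}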


