Re: [RT] BUG in sched/cpupri.c

On 22/12/2021 20:48, Valentin Schneider wrote:
> On 22/12/21 18:46, Dietmar Eggemann wrote:
>> On 21.12.21 17:45, John Keeping wrote:
>>> On Tue, 21 Dec 2021 16:11:34 +0000
>>> Valentin Schneider <valentin.schneider@xxxxxxx> wrote:
>>>
>>>> On 20/12/21 18:35, Dietmar Eggemann wrote:

[...]

>> switched_from_rt() -> rt_queue_pull_task(, pull_rt_task)
>>   pull_rt_task()->tell_cpu_to_push()->irq_work_queue_on(&rq->rd->rto_push_work,)
>>     rto_push_irq_work_func() -> push_rt_task(rq, true)
>>
>> seems to be the only way with pull=true.
>>
>> In my tests, rq->rt.rt_nr_running seems to be 0 when it happens.
>>
>> [   22.288537] CPU3 switched_to_rt: p=[ksoftirqd/3 35]
>> [   22.288554] rt_mutex_setprio: CPU3 p=[ksoftirqd/3 35] pi_task=[rcu_preempt 11] queued=1 running=0 prio=98 oldprio=120
>> [   22.288636] CPU3 switched_from_rt: p=[ksoftirqd/3 35] rq->rt.rt_nr_running=0
>>                                                          ^^^^^^^^^^^^^^^^^^^^^^
>> [   22.288649] rt_mutex_setprio: CPU3 p=[ksoftirqd/3 35] queued=1 running=1 prio=120 oldprio=98
>> [   22.288681] CPU3 push_rt_task: next_task=[rcu_preempt 11] migr_dis=1 rq->curr=[ksoftirqd/3 35] pull=1
>>                                                              ^^^^^^^^^^                           ^^^^^^
> 
> mark_wakeup_next_waiter() first deboosts the previous owner and then
> wakes up the next top waiter. Seems like you somehow have the wakeup happen
> before the push_rt_task IRQ work is run. Also, tell_cpu_to_push() should
> only pick a CPU that is in rq->rd->rto_mask, which requires having at least
> 2 RT tasks there...

True, this_cpu has rt_nr_running = 0 and *cpu* has rt_nr_running >= 2:

  mark_wakeup_next_waiter()

    (1) /* deboost */
    rt_mutex_adjust_prio()

      rt_mutex_setprio(current, ...)

        rq = __task_rq_lock(current, );
        check_class_changed(rq, p, prev_class, oldprio)

          switched_from_rt()

            if (!task_on_rq_queued(p) || rq->rt.rt_nr_running)
              return;

            rt_queue_pull_task(rq)

              queue_balance_callback(rq, ..., pull_rt_task);

                pull_rt_task()

                  tell_cpu_to_push()

                    *cpu* = rto_next_cpu(rq->rd)
                    irq_work_queue_on(&rq->rd->rto_push_work, *cpu*)

                      rto_push_irq_work_func()
                        push_rt_task(rq, true) <-- !!!

    (2) /* waking the top waiter */
    rt_mutex_wake_q_add(wqh, waiter);
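
At the "!!!" above, next_task (rcu_preempt) is migration-disabled
(migr_dis=1 in the log), so push_rt_task() goes on to evaluate rq->curr,
which is the just-deboosted (now CFS, prio=120) ksoftirqd. Trimmed from
kernel/sched/rt.c for reference, not verbatim:

  static int push_rt_task(struct rq *rq, bool pull)
  {
          struct task_struct *next_task;
          ...
          next_task = pick_next_pushable_task(rq);
          if (!next_task)
                  return 0;

  retry:
          if (is_migration_disabled(next_task)) {
                  int cpu;

                  if (!pull || rq->push_busy)
                          return 0;

                  /*
                   * The "next_task->prio < rq->curr->prio" resched check
                   * sits below this block, so nothing catches the CFS
                   * rq->curr before it is handed to find_lowest_rq().
                   */
                  cpu = find_lowest_rq(rq->curr);
                  ...
          }
          ...
  }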

> Now, that wakeup from the rtmutex unlock would give us a resched_curr() via
> check_preempt_curr() if required, which is good, though I think we are
> still missing some for sched_setscheduler() (there are no wakeups
> there). So if we just have to live with an IRQ work popping in before we
> get to preempt_schedule_irq() (or somesuch), then perhaps the below would
> be sufficient.

I think that's the case here, but we are on the RT-overloaded CPU (*cpu*).
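
And tell_cpu_to_push() indeed only targets a CPU from rq->rd->rto_mask,
i.e. an RT overloaded one. Trimmed from kernel/sched/rt.c for reference:

  static void tell_cpu_to_push(struct rq *rq)
  {
          int cpu = -1;
          ...
          if (rq->rd->rto_cpu < 0)
                  cpu = rto_next_cpu(rq->rd);   /* walks rq->rd->rto_mask */
          ...
          if (cpu >= 0) {
                  /* Make sure the rd does not get freed while pushing */
                  sched_get_rd(rq->rd);
                  irq_work_queue_on(&rq->rd->rto_push_work, cpu);
          }
  }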

> 
>> What about slightly changing the layout in switched_from_rt() (only lightly tested):
>>
>>
>> @@ -2322,7 +2338,15 @@ static void switched_from_rt(struct rq *rq, struct task_struct *p)
>>          * we may need to handle the pulling of RT tasks
>>          * now.
>>          */
>> -       if (!task_on_rq_queued(p) || rq->rt.rt_nr_running)
>> +       if (!task_on_rq_queued(p))
>> +               return;
>> +
>> +       if (task_current(rq, p) && (p->sched_class < &rt_sched_class)) {
>> +               resched_curr(rq);
>> +               return;
>> +       }
>> +
>> +       if (rq->rt.rt_nr_running)
>>                 return;
>>
>>         rt_queue_pull_task(rq);
> 
> If !rq->rt.rt_nr_running then there's no point in issuing a reschedule (at
> least from RT's perspective; p->sched_class->switched_to() takes care of
> the rest)

Right.

[...]

>  /*
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index ef8228d19382..8f3e3a1367b6 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1890,6 +1890,16 @@ static int push_rt_task(struct rq *rq, bool pull)
>  	if (!next_task)
>  		return 0;
>  
> +	/*
> +	 * It's possible that the next_task slipped in of higher priority than
> +	 * current, or current has *just* changed priority.  If that's the case
> +	 * just reschedule current.
> +	 */
> +	if (unlikely(next_task->prio < rq->curr->prio)) {
> +		resched_curr(rq);
> +		return 0;
> +	}

IMHO, that's the bit which prevents the BUG.

But this would also catch the case in which rq->curr is an RT task with
a lower prio than next_task.

Also, `rq->curr = migration/X` still goes through, which happens to be
fine since find_lowest_rq() bails out if task->nr_cpus_allowed == 1.
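
I.e. (trimmed from kernel/sched/rt.c):

  static int find_lowest_rq(struct task_struct *task)
  {
          ...
          if (task->nr_cpus_allowed == 1)
                  return -1; /* No other targets possible */
          ...
  }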

And DL tasks (like sugov:X) go through as well, and they can have
task->nr_cpus_allowed > 1 (arm64 slow-switching boards with shared
frequency domains running schedutil). cpupri_find_fitness()->convert_prio()
can handle task_pri = p->prio = -1 (CPUPRI_INVALID), although that works
somewhat by coincidence.
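
For reference, convert_prio() (kernel/sched/cpupri.c) only knows about the
cases below; a CFS prio like 120 matches none of them (leaving cpupri
uninitialized), which is how we end up with the BUG from $SUBJECT, whereas
p->prio = -1 happens to hit the CPUPRI_INVALID case:

  static int convert_prio(int prio)
  {
          int cpupri;

          switch (prio) {
          case CPUPRI_INVALID:
                  cpupri = CPUPRI_INVALID;        /* -1 */
                  break;

          case 0 ... 98:
                  cpupri = MAX_RT_PRIO-1 - prio;  /* 1 ... 99 */
                  break;

          case MAX_RT_PRIO-1:
                  cpupri = CPUPRI_NORMAL;         /*  0 */
                  break;

          case MAX_RT_PRIO:
                  cpupri = CPUPRI_HIGHER;         /* 100 */
                  break;
          }

          return cpupri;
  }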

So maybe something like this:

@@ -1898,6 +1898,11 @@ static int push_rt_task(struct rq *rq, bool pull)
                if (!pull || rq->push_busy)
                        return 0;

+               if (rq->curr->sched_class != &rt_sched_class) {
+                       resched_curr(rq);
+                       return 0;
+               }
+
                cpu = find_lowest_rq(rq->curr);

[...]


