On Fri, 19 Jan 2018 14:53:05 +0530
Pavan Kondeti <pkondeti@xxxxxxxxxxxxxx> wrote:

> I am seeing a "spinlock already unlocked" BUG for rd->rto_lock on a 4.9
> stable kernel based system. This issue is observed only after
> inclusion of this patch. It appears to me that rq->rd can change
> between the spinlock being acquired and released in the
> rto_push_irq_work_func() IRQ work if hotplug is in progress. It was only
> reported a couple of times during long stress testing. The issue can be
> easily reproduced if an artificial delay is introduced between the lock
> and unlock of rto_lock. rq->rd is changed under rq->lock, so we can
> protect against this race with rq->lock. The patch below solved the
> problem. We are taking rq->lock in pull_rt_task()->tell_cpu_to_push(),
> so I extended the same here. Please let me know your thoughts on this.

Ah, so rq->rd can change. Interesting.

>
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index d863d39..478192b 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -2284,6 +2284,7 @@ void rto_push_irq_work_func(struct irq_work *work)
> 		raw_spin_unlock(&rq->lock);
> 	}
>
> +	raw_spin_lock(&rq->lock);

What about just saving the rd then?

	struct root_domain *rd;

	rd = READ_ONCE(rq->rd);

then use that. Then we don't need to worry about it changing.

-- Steve

> 	raw_spin_lock(&rq->rd->rto_lock);
>
> 	/* Pass the IPI to the next rt overloaded queue */
> @@ -2291,11 +2292,10 @@ void rto_push_irq_work_func(struct irq_work *work)
>
> 	raw_spin_unlock(&rq->rd->rto_lock);
>
> -	if (cpu < 0)
> -		return;
> -
> 	/* Try the next RT overloaded CPU */
> -	irq_work_queue_on(&rq->rd->rto_push_work, cpu);
> +	if (cpu >= 0)
> +		irq_work_queue_on(&rq->rd->rto_push_work, cpu);
> +	raw_spin_unlock(&rq->lock);
> }
> #endif /* HAVE_RT_PUSH_IPI */
>
>
> Thanks,
> Pavan
>

--
To unsubscribe from this list: send the line "unsubscribe linux-tip-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
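[For readers following along: below is a minimal userspace sketch of the snapshot idea Steve describes, not the actual kernel code. The struct fields, the READ_ONCE_PTR macro, and rto_next_cpu_snapshot() are simplified stand-ins; the real kernel READ_ONCE() and the rto_lock locking are only hinted at in comments. The point it illustrates: read rq->rd exactly once into a local, then use only that local, so a concurrent repointing of rq->rd during hotplug cannot make the function unlock a different root domain than it locked.]

```c
#include <assert.h>

/* Simplified stand-ins for the kernel structures in kernel/sched/rt.c.
 * The real root_domain carries rto_lock, rto_push_work, etc.; here we
 * keep only what the sketch needs. */
struct root_domain {
	int rto_cpu;		/* next RT-overloaded CPU, -1 if none */
};

struct rq {
	struct root_domain *rd;	/* may be repointed during CPU hotplug */
};

/* Crude model of the kernel's READ_ONCE(): a volatile access forces a
 * single load of rq->rd, so the compiler cannot re-read the pointer
 * later in the function. */
#define READ_ONCE_PTR(x) (*(struct root_domain * volatile *)&(x))

/* Snapshot rq->rd once, then operate only on the snapshot.  In the real
 * function, rd->rto_lock would be taken and released around the read of
 * rd->rto_cpu; because both operations use the same local rd, the
 * "unlocked a lock we never locked" race goes away. */
int rto_next_cpu_snapshot(struct rq *rq)
{
	struct root_domain *rd = READ_ONCE_PTR(rq->rd);

	/* ... raw_spin_lock(&rd->rto_lock) would go here ... */
	int cpu = rd->rto_cpu;
	/* ... raw_spin_unlock(&rd->rto_lock): same rd, guaranteed ... */

	return cpu;
}
```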