On Tue, 1 Jul 2008, Wolfgang Grandegger wrote:
>
> I continue this thread because it's still not understood why enabling
> CONFIG_RCU_TRACE is necessary to get reasonable latencies on the
> MPC5200. It might also explain why I get much worse latencies with
> 2.6.25.8-rt7.

Have you tried turning on ftrace?

>
> To recapitulate, with CONFIG_RCU_TRACE enabled, cyclictest reports max.
> latencies of approx. 130 us with 2.6.24.4-rt4 on my MPC5200 PowerPC
> board. If I disable it, the latency goes up to 600 us. Obviously, the
> trace_mark() calls in rcupreempt*.c have some positive impact on the
> latency. I narrowed down that the two calls in __rcu_preempt_unboost()
> in rcupreempt-boost.c are the important ones:
>
> void __rcu_preempt_unboost(void)
> {
> 	struct task_struct *curr = current;
> 	struct rcu_boost_dat *rbd;
> 	int prio;
> 	unsigned long flags;
>
> 	trace_mark(unboost_called, "NULL");
>
> 	/* if not boosted, then ignore */
> 	if (likely(!rcu_is_boosted(curr)))
> 		return;

I wonder if the "likely" results in faulty code generation on PPC. Have
you tried removing that "likely" statement?

>
> 	/*
> 	 * Need to be very careful with NMIs.
> 	 * If we take the lock and an NMI comes in,
> 	 * it may try to unboost us if curr->rcub_rbdp
> 	 * is still set. So we zero it before grabbing the lock.
> 	 * But this also means that we might be boosted again
> 	 * so the boosting code needs to be aware of this.
> 	 */
> 	rbd = curr->rcub_rbdp;
> 	curr->rcub_rbdp = NULL;
>
> 	/*
> 	 * Now an NMI might have come in after we grab
> 	 * the below lock. This check makes sure that
> 	 * the NMI doesn't try grabbing the lock
> 	 * while we already have it.
> 	 */
> 	if (unlikely(!rbd))
> 		return;

Actually, remove all "likely" and "unlikely" annotations. The marker code
could be making it work better. But still, this shouldn't cause 600 us
latencies.

>
> 	spin_lock_irqsave(&rbd->rbs_lock, flags);
> 	/*
> 	 * It is still possible that an NMI came in
> 	 * between the "is_boosted" check and setting
> 	 * the rcub_rbdp to NULL. This would mean that
> 	 * the NMI already dequeued us.
> 	 */
> 	if (unlikely(!rcu_is_boosted(curr)))
> 		goto out;
>
> 	list_del_init(&curr->rcub_entry);
>
> 	trace_mark(unboosted, "NULL");
>
> 	curr->rcu_prio = MAX_PRIO;
>
> 	spin_lock(&curr->pi_lock);
> 	prio = rt_mutex_getprio(curr);
> 	task_setprio(curr, prio);
>
> 	curr->rcub_rbdp = NULL;
>
> 	spin_unlock(&curr->pi_lock);
> out:
> 	spin_unlock_irqrestore(&rbd->rbs_lock, flags);
> }
>
> With these two calls present and all other trace_mark() calls commented
> out, the latency is still OK. The first one has the bigger impact.
>
> In 2.6.25.8-rt7, trace_mark() is not used any more; instead, a function
> increments the corresponding counter directly, and I suspect that's the
> reason why I'm seeing high latencies with both CONFIG_RCU_TRACE enabled
> and disabled.
>
> I hope this observation sheds some light on the issue.

It is still a mystery to me. Maybe comparing the assembly outputs of the
different configurations would help.

-- Steve
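
P.S. To make the "likely" experiment concrete, here is a minimal
standalone sketch (my own illustration, not the kernel source; the
NO_HINTS switch and the stand-in variable are made up for this test).
The kernel's likely()/unlikely() macros expand to GCC's
__builtin_expect(), so compiling this file both ways and diffing the
assembly shows what the hint does to the generated code layout:

	/* hint.c: what likely()/unlikely() expand to. Illustration only. */
	#include <stdio.h>

	#ifdef NO_HINTS
	# define likely(x)	(x)
	# define unlikely(x)	(x)
	#else
	/* same definitions as the kernel's compiler.h */
	# define likely(x)	__builtin_expect(!!(x), 1)
	# define unlikely(x)	__builtin_expect(!!(x), 0)
	#endif

	int boosted;	/* stand-in for rcu_is_boosted(curr) */

	void unboost(void)
	{
		/*
		 * With the hint, GCC lays out the early return as the
		 * straight-line (predicted) path and may push the slow
		 * path out of line; the branch direction can differ too.
		 */
		if (likely(!boosted))
			return;

		puts("slow path");	/* stand-in for the unboost work */
	}

Build both variants to assembly and diff them (assuming a
powerpc-linux-gcc cross compiler; substitute your toolchain prefix to
see what the MPC5200 actually executes):

	powerpc-linux-gcc -O2 -S hint.c -o with-hints.s
	powerpc-linux-gcc -O2 -S -DNO_HINTS hint.c -o no-hints.s
	diff -u with-hints.s no-hints.s

That should show whether the "likely" really changes the generated
branches on your toolchain.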