On Sun, 25 May 2008, Thomas Gleixner wrote:

> > -	preempt_disable();	/* TSC's are per-cpu */
> > +	preempt_disable();
> > +	cpu = smp_processor_id();
> >  	rdtscl(bclock);
> >  	do {
> >  		rep_nop();
> >  		rdtscl(now);
> > +		/* Allow RT tasks to run */
> > +		preempt_enable();
> > +		preempt_disable();
> > +		/*
> > +		 * It is possible that we moved to another CPU,
> > +		 * and since TSC's are per-cpu we need to
> > +		 * calculate that. The delay must guarantee that
> > +		 * we wait "at least" the amount of time. Being
> > +		 * moved to another CPU could make the wait longer
> > +		 * but we just need to make sure we waited long
> > +		 * enough. Rebalance the counter for this CPU.
> > +		 */
> > +		if (unlikely(cpu != smp_processor_id())) {
>
> Eeek, once you migrated you do this all the time. You need to update
> cpu here.

Good catch! I'll update that.

> > +			if ((now-bclock) >= loops)
> > +				break;
>
> Also this is really dangerous with unsynchronized TSCs. You might get
> migrated and return immediately because the TSC on the other CPU is
> far ahead.

No it isn't ;-)  The now and bclock are both from before the migration.
The CPUs were the same because we were under preempt disabled at the
time. I recalculate after the change has been noticed. But you are
right, I forgot to update cpu. :-/

> What you really want is something like the patch below, but we should
> reuse the sched_clock_cpu() thingy to make that simpler. Looking into
> that right now.

Sure, but this should be simple enough.

-- Steve

--
To unsubscribe from this list: send the line "unsubscribe linux-rt-users" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
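
For context, a minimal sketch of the loop with the fix Thomas is asking
for (updating cpu after a migration is noticed) might look like the
following. Variable names (loops, bclock, now, cpu) follow the quoted
patch; this is only an illustration, not necessarily the code that was
eventually merged.

/*
 * Sketch of a preemptible TSC delay loop.  Both 'now' and 'bclock' are
 * read on the same CPU under preempt_disable(), so the subtraction is
 * always between timestamps from one TSC.  If we notice we migrated
 * during the preemption window, credit the time already waited, then
 * update 'cpu' and restart the measurement on the new CPU's TSC.
 */
static void delay_tsc(unsigned long loops)
{
	unsigned long bclock, now;
	int cpu;

	preempt_disable();
	cpu = smp_processor_id();
	rdtscl(bclock);
	for (;;) {
		rdtscl(now);
		if ((now - bclock) >= loops)
			break;

		/* Allow RT tasks to run */
		preempt_enable();
		rep_nop();
		preempt_disable();

		/*
		 * Migrated: rebalance the counter for this CPU so we
		 * still wait "at least" the requested time, possibly
		 * longer, but never compare two different TSCs.
		 */
		if (unlikely(cpu != smp_processor_id())) {
			loops -= (now - bclock);
			cpu = smp_processor_id();
			rdtscl(bclock);
		}
	}
	preempt_enable();
}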