On Sat, 2008-02-23 at 13:31 +0100, Andi Kleen wrote:
> > *) compute the context-switch pair time average for the system. This is
> > your time threshold (CSt).

Hi Andi,

> This is not a uniform time. Consider the difference between
> context switch on the same hyperthread, context switch between cores
> on a die, context switch between sockets, context switch between
> distant numa nodes. You could have several orders of magnitude
> between all those.
>
> > *) For each lock, maintain an average hold-time (AHt) statistic (I am
> > assuming this can be done cheaply...perhaps not).
>
> That would assume that the hold times are very uniform. But what happens
> when you e.g. have a workload where 50% of the lock acquisitions are short
> and 30% are long?
>
> I'm a little sceptical of such "too clever" algorithms.
>
> -Andi

I agree; this does not make any sense until you get to very large SMP
systems - and only if it can be demonstrated that the overhead associated
with lock statistics, including some notion of the context-switch
overhead, still comes out with a net gain.

There is some notion of task-routing in the RT scheduler already, and it
is quite a clever little algorithm. I see no reason why the scheduler
should not eventually take directly into account (when balancing) the
quantifiable context-switch and CPU overhead of moving a task to a
distant processor. In fact, for a deadline-based scheduler working on
high-frequency tasks, given that those times can vary so radically, this
would be required. Once context-switch-overhead data is available to the
scheduler, there is no particular reason why adaptive locking could not
also utilize that data.

Sven
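
P.S. For illustration only - this is not code from the thread, just a rough
userspace sketch of the AHt/CSt heuristic as I read it. The lock structure,
the function names, the moving-average weighting and the 4000 ns switch-cost
constant are all made-up assumptions; a real implementation would live inside
the kernel's locking code and would measure CSt per CPU pair rather than as a
single system-wide number.

#include <sched.h>
#include <stdatomic.h>
#include <stdint.h>
#include <time.h>

struct adaptive_lock {
	atomic_flag locked;		/* initialize with ATOMIC_FLAG_INIT */
	_Atomic uint64_t avg_hold_ns;	/* AHt: running average hold time */
};

/* CSt: assumed to be measured once; in reality it differs per CPU pair. */
static const uint64_t context_switch_pair_ns = 4000;

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* Returns the acquisition timestamp; the caller hands it back to release(). */
uint64_t adaptive_lock_acquire(struct adaptive_lock *l)
{
	while (atomic_flag_test_and_set_explicit(&l->locked,
						 memory_order_acquire)) {
		if (atomic_load(&l->avg_hold_ns) >= context_switch_pair_ns)
			sched_yield();	/* expected wait > switch cost: give up the CPU */
		/* else: keep spinning, waiting is expected to be cheaper */
	}
	return now_ns();
}

void adaptive_lock_release(struct adaptive_lock *l, uint64_t acquired_at)
{
	uint64_t held = now_ns() - acquired_at;
	uint64_t old = atomic_load(&l->avg_hold_ns);

	/* Cheap exponential moving average - exactly the kind of single
	 * statistic that misleads when hold times are bimodal, as Andi
	 * points out above. */
	atomic_store(&l->avg_hold_ns, (old * 7 + held) / 8);
	atomic_flag_clear_explicit(&l->locked, memory_order_release);
}

Whether the bookkeeping in release() pays for itself is exactly the open
question raised above.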