Paul E. McKenney wrote:
Governing the timeout by context-switch overhead sounds even better to me.
Really easy to calibrate, and short critical sections are of much shorter
duration than a context-switch pair.
Yeah, fully agree. This is on my research "todo" list. My theory is
that the ultimate adaptive-timeout algorithm here would essentially be
the following:
*) Compute the average context-switch pair time for the system. This is
your time threshold (CSt).
*) For each lock, maintain an average hold-time (AHt) statistic (I am
assuming this can be done cheaply... perhaps not; see the EWMA sketch
after this list).
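
For the per-lock AHt bookkeeping, I am picturing something like an
exponentially weighted moving average updated at unlock time, so it costs
only a couple of integer operations per release. This is just a sketch
under that assumption; the struct, field names, and the 1/8 weight are
made up for illustration:

#include <stdint.h>

/* Hypothetical per-lock statistic: EWMA of hold time in nanoseconds,
 * updated when the lock is released. */
struct lock_stats {
        uint64_t aht_ns;        /* running average hold time */
};

static inline void update_aht(struct lock_stats *s, uint64_t hold_ns)
{
        int64_t delta = (int64_t)hold_ns - (int64_t)s->aht_ns;

        /* new_avg = old_avg + (sample - old_avg) / 8 */
        s->aht_ns += delta / 8;
}
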
The adaptive code would work as follows:
if (AHt > CSt)  /* don't even bother spinning if the average hold time exceeds CSt */
        timeout = 0;
else
        timeout = AHt;

if (adaptive_wait(timeout))
        sleep();
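
Just to pin down the semantics I mean for adaptive_wait(): return nonzero
if the spin budget expired (so the caller goes to sleep), zero if the lock
was released while we were spinning. A rough, untested userspace-style
sketch follows; the explicit lock_is_held flag and the clock_gettime()
plumbing are stand-ins I invented for illustration, not the real rtmutex
state:

#include <stdint.h>
#include <time.h>

static uint64_t now_ns(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* Returns 1 if we gave up (caller should sleep), 0 if the lock was
 * released while spinning (caller should retry the acquisition). */
static int adaptive_wait(volatile int *lock_is_held, uint64_t timeout_ns)
{
        uint64_t start;

        if (timeout_ns == 0)
                return 1;               /* not worth spinning at all */

        start = now_ns();
        while (*lock_is_held) {
                if (now_ns() - start >= timeout_ns)
                        return 1;       /* AHt budget exceeded: sleep */
                /* a cpu_relax()-style pause would go here */
        }
        return 0;
}
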
Anyone have some good ideas on how to compute CSt? I was thinking you
could create two kthreads that message one another, measuring round-trip
time over some number of iterations (say 100) to get an average. You could
probably also approximate it by flushing workqueue jobs.
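
As a rough userspace analogue of that kthread ping-pong (pipes between two
threads instead of kthreads, and without pinning both threads to one CPU,
so the result is only a ballpark for the real CSt):

#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define ITERS 100

static int ping[2], pong[2];

/* Echo thread: bounce each byte straight back to the main thread. */
static void *echo_thread(void *arg)
{
        char c;
        int i;

        (void)arg;
        for (i = 0; i < ITERS; i++) {
                if (read(ping[0], &c, 1) != 1 || write(pong[1], &c, 1) != 1)
                        break;
        }
        return NULL;
}

int main(void)
{
        struct timespec t0, t1;
        pthread_t tid;
        long long ns;
        char c = 'x';
        int i;

        if (pipe(ping) || pipe(pong))
                return 1;
        pthread_create(&tid, NULL, echo_thread, NULL);

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < ITERS; i++) {
                if (write(ping[1], &c, 1) != 1 || read(pong[0], &c, 1) != 1)
                        return 1;
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        pthread_join(tid, NULL);

        ns = (t1.tv_sec - t0.tv_sec) * 1000000000LL +
             (t1.tv_nsec - t0.tv_nsec);
        printf("average round trip (~ context-switch pair): %lld ns\n",
               ns / ITERS);
        return 0;
}

(build with something like: cc -O2 -pthread csmeasure.c)
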
-Greg