Hello,

We're doing some scheduling latency measurements with KVM. While analyzing some traces and reading the cyclictest code in the process, I found out that by default cyclictest uses the absolute time algorithm, which basically does:

    clock_gettime(&now)
    next = now + interval	/* interval == 1000us */

    /* do some quick stuff */

    while () {
        clock_nanosleep(&next)	/* ie. sleeps until now + 1000us, 'next' is abs */
        clock_gettime(&now)
        diff = calcdiff(now, next)
        /* update a bunch of stats and the histogram data,
           also check if we're finished */
        next += interval
    }

Now, doesn't this mean that the timerthread will actually sleep less than interval? This happens because we have fixed sleeping points which take into account neither the sleeping latency nor the bunch of things the timerthread does (eg. updating the histogram). If I'm making sense, won't this behavior cause better numbers to be reported?

I compared abstime and reltime on bare metal, and got the following results (cyclictest [-r] -m -n -q -p99 -l 1000000):

abstime mode:

    # Min Latencies: 00001
    # Avg Latencies: 00001
    # Max Latencies: 00003

reltime mode:

    # Min Latencies: 00003
    # Avg Latencies: 00003
    # Max Latencies: 00008

(Yes, this machine is pretty modern and well set up for RT. The results above are pretty deterministic. Also, I've run hwlatdetect for hours and got no SMIs.)

The relative time algorithm, on the other hand, is exactly what I expected to see when I looked at the code:

    /* do some quick stuff */

    while () {
        clock_gettime(&now)
        clock_nanosleep(&interval)	/* interval == 1000us */
        expected = now + interval
        clock_gettime(&now)
        diff = calcdiff(now, expected)
        /* update a bunch of stats and the histogram data,
           also check if we're finished */
    }

So, my question boils down to: is there a relevant difference between the two modes? Why isn't reltime the default mode?