On Mon, May 07, 2012 at 03:00:17PM -0700, Frank Rowand wrote:
| This is the resulting message on the ARM panda with a 'bad'
| 32khz timer:
|
|
|   # cyclictest -q -p 80 -t -n -l 10 -h ${hist_bins} -R 100
|   reported clock resolution: 1 nsec
|   measured clock resolution less than: 30517 nsec

How about using a fixed loop size (say 1000000 clock reads) to define
the average cost of reading the clock (the second value presented
above) instead of a variable number of iterations? Reading the clock
twice and calculating the average could lead to wrong impressions.
(A rough sketch of what I mean is appended below.)

Also, it would be interesting to run such a test under a real-time
priority (FIFO:2, maybe?) to avoid too much external interference on
the readings, mainly involuntary context switches.

Having two different values called 'clock resolution' may be a good
source of confusion. The value of clock_getres() is the resolution, as
per the system jargon, and the second value should be called
granularity, reading cost, the-average-time-it-takes-to-read-the-clock
or something along those lines.

| A possible follow on patch would be to generate a hard
| error (fail the test) if the measured resolution was
| above some unreasonable value (perhaps > 1 msec), but
| allow the hard fail to be overridden with yet another
| command line option. Any opinions about that?

My suggestion is to keep the current behavior and add an option to
stop/complain in case the clock has a poor resolution or a reading
cost that is too high.

Luis

-- 
[ Luis Claudio R. Goncalves                    Red Hat - Realtime Team ]
[ Fingerprint: 4FDD B8C4 3C59 34BD 8BE9 2696 7203 D980 A448 C8F8       ]
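
P.S.: for illustration only, an untested standalone sketch of the
fixed-loop measurement combined with the FIFO:2 setup. None of this is
cyclictest code; the program, the NREADS constant, the ts_to_ns()
helper and the build line are all mine:

/*
 * clockcost.c - rough sketch, not cyclictest code.
 *
 * Measure the average cost of reading CLOCK_MONOTONIC over a fixed
 * number of reads, under SCHED_FIFO priority 2, and print it next to
 * the clock_getres() value so the two numbers are clearly distinct.
 *
 * Build with:  gcc -o clockcost clockcost.c -lrt
 */
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <sched.h>

#define NREADS 1000000		/* fixed loop size, as suggested above */

static long long ts_to_ns(const struct timespec *ts)
{
	return (long long)ts->tv_sec * 1000000000LL + ts->tv_nsec;
}

int main(void)
{
	struct sched_param param;
	struct timespec res, start, end, now;
	long i;

	/* FIFO:2 to reduce involuntary context switches during the loop */
	memset(&param, 0, sizeof(param));
	param.sched_priority = 2;
	if (sched_setscheduler(0, SCHED_FIFO, &param))
		perror("sched_setscheduler (need root?)");

	clock_getres(CLOCK_MONOTONIC, &res);

	/* time NREADS back-to-back clock reads and take the average */
	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < NREADS; i++)
		clock_gettime(CLOCK_MONOTONIC, &now);
	clock_gettime(CLOCK_MONOTONIC, &end);

	printf("reported clock resolution: %lld nsec\n", ts_to_ns(&res));
	printf("average clock read cost:   %lld nsec\n",
	       (ts_to_ns(&end) - ts_to_ns(&start)) / NREADS);

	return 0;
}

Run it as root (or with CAP_SYS_NICE) so sched_setscheduler() succeeds;
the gap between the two printed values is exactly the distinction
between resolution and reading cost I would like the output to make.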