Hi,

I've recently started looking at RT Linux, specifically measuring worst-case latency on an embedded ARM board using cyclictest. In my case, I use a host PC to ping-flood the board in order to kick up the dust.

I found that cyclictest results vary from one run to another. For example, I ran cyclictest for 10 hours and got a figure, then ran a 5-minute test afterwards which gave a worst-case latency nearly 20% higher. This is not a one-off.

I talked this through with Carsten Emde at OSADL and he said they see the same thing; that is why they collate the output from a large number of runs to get an accurate view of the system. He said individual runs may differ because there is probably interference between the sampling interval and the clock frequency.

Is it common knowledge that cyclictest results vary this much from one run to another? Any ideas how to mitigate it?

One idea is to modify cyclictest to use an interval taken from a range provided by the user, instead of a fixed interval - maybe sweeping through the range repeatedly. What do you think?

Thanks

Phil
--
To unsubscribe from this list: send the line "unsubscribe linux-rt-users" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html