On 01.05.2013 09:37, Carsten Emde wrote:

> He who laughs last, laughs loudest. Since the entire system slows down,
> you may only need to adapt cycles and thresholds accordingly.

Yep, I will try it. However, as this is connected to a real application
running in parallel and generating some load pattern, this is not so easy -
for example I'll need to go down with HZ, otherwise the timer ticking once
per ms can already be too much, which in turn will probably affect the RR
threads etc. There are also constraints from the outside world that have to
be met while running - these are still far off, but with tracing enabled no
longer an order of magnitude away. That I am running this on a low-spec
embedded system that already struggles with saving the data does not make
it easier either.

From what I was already able to capture I did not see anything that seems
plainly wrong; ksoftirqd simply had a lot of work with hrtimers, RCU,
networking etc. But I am not sure whether I caught _the_ situation.

> Tracing is a very useful tool to identify sources of latencies
> and has helped a lot to make Linux RT as good as it is today.

Yep, the infrastructure is fantastic.

> cyclictest -m -M -Sp90 -i500 -d0 -fb1000
>
> probably will break at the first occurrence of the latency in question
> and let you diagnose its origin at the end of the trace output.

I tried this, but on at least two occasions I was unable to load a
trace.dat produced by breaking the tracing this way into kernelshark. Only
stopping the trace from trace-cmd itself worked (roughly the sequence
sketched at the end of this mail). Are there any known issues? I am using
trace-cmd from git. The recording and examining systems are not the same,
but are reasonably similar library-wise (both are Debian squeeze-based).

Thanks
--
Stano
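
P.S. For reference, the sequence that did give me a loadable trace.dat
looked roughly like this - the event list is just an example, not the
exact set I traced:

  trace-cmd start -e sched -e irq -e timer   # enable tracing via trace-cmd
  cyclictest -m -M -Sp90 -i500 -d0 -b1000    # -b stops tracing on a latency > 1000 us
  trace-cmd stop                             # stop recording from trace-cmd itself
  trace-cmd extract -o trace.dat             # write the buffers out to trace.dat
  kernelshark trace.dat                      # inspect on the analysis machine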