On Mon, Jun 29, 2009 at 06:58:41PM +0200, Jean Pihet wrote:
> I am trying to get a different approach, starting from the errata
> description. The idea is to keep the counters from overflowing,
> which could cause a PMNC unit reset or lock-up (or both).

But this can't work. Oprofile essentially works as follows: you set
the number (N) of events you wish to occur between each sample. When
N events have occurred, you record the stacktrace and reset the
counter so it fires after another N events (there's a rough sketch of
this model at the end of this mail).

Now, you could start the counters at zero every time, and then poll
them via a timer: when the counter value is larger than N, you log a
stacktrace and zero the counter. However, this suffers from one very
serious problem - if the events you're measuring occur faster than
your timer fires, you're going to get misleading results. You could
set the timer to fire at a high rate, but then the timer interrupts
themselves are going to upset things like cache miss and cache hit
measurements.

> Here are the implementation details:
> - use a timer to read and reset the counters, then fire a work queue
> - in the work queue the counter values are converted to oprofile samples
> - the proper locking is used to avoid races between the various tasks

This sounds over-complicated. I see no reason for a workqueue to be
involved anywhere near the oprofile sample code.

> I am nearly done with it but I am now running into problems with PM
> (suspend/resume) and get_irq_regs().

You really really really can't use get_irq_regs() outside of IRQ
context. The stored registers just do not exist any more - they've
been overwritten by whatever exception or system call you're
currently in. You can't create a copy of them either - the copy will
be overwritten on the very next (nested) interrupt, and you don't
know which interrupt is the first interrupt to occur.

I really think that the only option here is to just accept that
oprofile is crucified by this errata.
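
To make the overflow-driven model above concrete, here is a minimal
userspace simulation of it. This is an illustrative sketch only, not
the actual oprofile or PMU code, and every name in it (SAMPLE_PERIOD,
pmu_counter, overflow_irq) is invented for the example. The counter
is preloaded with -(N) so that it wraps to zero after exactly N
events; the "overflow IRQ" records a sample and re-arms the counter:

/*
 * Userspace sketch of overflow-driven sampling (invented names,
 * not real kernel code).  Preload the counter with -(N) so it
 * wraps to zero after exactly N events; the overflow "IRQ" logs
 * a sample and reloads the counter for the next period.
 */
#include <stdint.h>
#include <stdio.h>

#define SAMPLE_PERIOD 100000u		/* N events between samples */

static uint32_t pmu_counter;		/* stands in for a 32-bit PMU counter */
static unsigned long nr_samples;

static void overflow_irq(void)
{
	nr_samples++;			/* the real handler logs pc/backtrace */
	pmu_counter = (uint32_t)-SAMPLE_PERIOD;	/* re-arm: fires after N more */
}

int main(void)
{
	pmu_counter = (uint32_t)-SAMPLE_PERIOD;

	/* simulate one million events arriving */
	for (unsigned long ev = 0; ev < 1000000; ev++)
		if (++pmu_counter == 0)	/* wrapped to zero: overflow "IRQ" */
			overflow_irq();

	printf("1000000 events -> %lu samples (one per %u events)\n",
	       nr_samples, SAMPLE_PERIOD);
	return 0;
}

This prints 10 samples for 1000000 events, i.e. exactly one per N
events; the real drivers typically program the hardware counter the
same way, writing -(count) so the overflow interrupt fires after
count events.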
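
For contrast, here is an equally rough simulation of the timer-polling
scheme, showing the undersampling problem described above. Again the
names and the event numbers are made up for illustration: between two
timer ticks the counter may advance by many times N, but a poll can
log at most one sample before zeroing the counter, so bursts of events
are silently collapsed:

/*
 * Sketch of the timer-polling alternative (invented names and
 * figures).  A poll logs at most one sample per tick, so when
 * events arrive faster than the timer fires, most sample periods
 * are lost and the profile is skewed.
 */
#include <stdio.h>

#define N 1000u				/* intended events per sample */

int main(void)
{
	unsigned long events_per_tick[] = { 500, 900, 5000, 20000, 800 };
	unsigned long counter = 0, total = 0, polled = 0;

	for (int tick = 0; tick < 5; tick++) {
		counter += events_per_tick[tick];	/* events since last poll */
		total   += events_per_tick[tick];
		if (counter >= N) {		/* poll: at most ONE sample */
			polled++;
			counter = 0;		/* excess events discarded */
		}
	}

	printf("events=%lu: polling logged %lu samples, overflow IRQs "
	       "would have logged %lu\n", total, polled, total / N);
	return 0;
}

Here 27200 events yield only 3 polled samples where overflow
interrupts would have yielded 27.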
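
And to spell out the get_irq_regs() point, here is a kernel-style
fragment - a sketch under assumptions, not buildable as-is, with the
handler and work-function names invented - contrasting where the call
is valid with where it is not:

#include <linux/interrupt.h>
#include <linux/workqueue.h>
#include <asm/irq_regs.h>

static irqreturn_t pmu_irq_handler(int irq, void *dev_id)
{
	/* OK: we are in IRQ context, so the saved frame is still live */
	struct pt_regs *regs = get_irq_regs();

	/* logging a sample against regs would be legitimate here */
	(void)regs;
	return IRQ_HANDLED;
}

static void deferred_sample_fn(struct work_struct *work)
{
	/*
	 * BROKEN: by the time this deferred work runs, the frame that
	 * get_irq_regs() pointed at in the handler has been overwritten
	 * by later exceptions and system calls.  A copy taken in the
	 * handler is no better - it goes stale on the next nested IRQ.
	 */
}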