Wolfgang Denk wrote:
In message <005a01c614fb$2fe76b00$10eca8c0@grendel> you wrote:
There is no "ideal" value for a given processor frequency.
The lower the value, the less interrupt processing overhead,
but the slower the response time to events that are detected
or serviced during clock interrupts. 1000 Hz *may* be a sensible
value (I have my doubts, personally) for 2+ GHz PC processors,
but it's excessive (IMHO) for a 200 MHz processor and unworkable
for a 20 MHz CPU. I think that 100 Hz is still a reasonable value
for an embedded RISC CPU, but the "ideal" value is going to
be a function of the application.
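To put rough numbers on that tradeoff, here is a quick back-of-the-envelope
sketch. The figure of 2000 cycles spent per tick is purely an assumption I
picked for illustration (the real cost depends heavily on the CPU and kernel
version); the point is only how the overhead fraction scales with HZ and
processor speed:

/* Rough estimate of timer tick overhead as a fraction of CPU cycles.
 * The per-tick cost (cycles spent in the timer interrupt path) is an
 * assumed figure for illustration, not a measurement.
 */
#include <stdio.h>

int main(void)
{
    const double cycles_per_tick = 2000.0;          /* assumed handler cost */
    const double cpu_hz[] = { 20e6, 200e6, 2e9 };   /* 20 MHz, 200 MHz, 2 GHz */
    const int tick_hz[] = { 100, 1000 };

    for (unsigned i = 0; i < sizeof(cpu_hz) / sizeof(cpu_hz[0]); i++) {
        for (unsigned j = 0; j < sizeof(tick_hz) / sizeof(tick_hz[0]); j++) {
            double overhead = cycles_per_tick * tick_hz[j] / cpu_hz[i];
            printf("%6.0f MHz CPU, HZ=%4d: %.3f%% of cycles in tick handling\n",
                   cpu_hz[i] / 1e6, tick_hz[j], overhead * 100.0);
        }
    }
    return 0;
}

With that assumed per-tick cost, the tick path eats about 10% of a 20 MHz CPU
at 1000 Hz but only 0.1% of a 2 GHz one, which is roughly the shape of the
argument above.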
We did some tests of the performance impact of 100 vs. 1000 Hz clock
frequency on low end systems (50 MHz PowerPC); for details please see
http://www.denx.de/wiki/view/Know/Clock100vs1000Hz
My own results, on an SMP 2.6 kernel (as opposed to the uniprocessor
2.4 kernel used for the experiments reported), have been rather different.
Certainly the degradations I observed were far worse than the 5-10% reported
in the document you cite. I'll try to repeat your experiment when I get
the time.
BTW, I'm puzzled by the "context switch" benchmark results. By what
mechanism - or by what definition of "context switch" - can having more
frequent interrupts make context switches happen more quickly? It seems
to me that those results must be due to a systematic measurement error
being added or removed.
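For reference, the sort of measurement I have in mind is a pipe ping-pong
between two processes, along the lines of the sketch below. This is my own
illustration, not the code behind the results on that page, but it shows how
anything that runs while the clock is being read - including timer interrupt
processing - gets silently charged to the reported "context switch" time:

/* Minimal pipe ping-pong context switch probe: two processes bounce a
 * byte back and forth over a pair of pipes and we time the round trips.
 * Whatever happens inside the timed loop, including tick processing,
 * is folded into the reported per-switch figure.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/wait.h>

#define ROUNDS 100000

int main(void)
{
    int p2c[2], c2p[2];
    char buf = 'x';

    if (pipe(p2c) < 0 || pipe(c2p) < 0) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }

    if (pid == 0) {                 /* child: echo every byte back */
        for (int i = 0; i < ROUNDS; i++) {
            if (read(p2c[0], &buf, 1) != 1) _exit(1);
            if (write(c2p[1], &buf, 1) != 1) _exit(1);
        }
        _exit(0);
    }

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (int i = 0; i < ROUNDS; i++) {
        if (write(p2c[1], &buf, 1) != 1) break;
        if (read(c2p[0], &buf, 1) != 1) break;
    }
    gettimeofday(&t1, NULL);
    waitpid(pid, NULL, 0);

    double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
    /* each round trip is two switches (parent->child, child->parent) */
    printf("%.2f us per context switch (avg over %d round trips)\n",
           us / (2.0 * ROUNDS), ROUNDS);
    return 0;
}

If anything, a higher HZ should make a probe like this report slightly *worse*
numbers, since more ticks land inside the timed loop; a result that moves the
other way is what makes me suspect a systematic offset in the measurement.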
Regards,
Kevin K