Robert de Vries wrote:
Beware that the mask is not updated immediately. It is updated the next time an interrupt is serviced, which may take a long time if only a few interrupts, or none at all, are delivered.
Thanks! This was the missing link: it matches my observation that only interrupts which are not actively in use fail to change their cpumask, and that for seldom-used interrupts the cpumask change happens with a delay. Is there a special reason why the affinity doesn't change immediately?
I'll write something up about this in the Real-Time Linux Wiki. On a related note: I am working on a user space library to make it easy to do CPU shielding of other processes (what you do with isolcpus=1, but dynamically)
That would be cool :-)

> using the cpuset feature of the kernel.

Hopefully, cpusets avoid the problem noted for the "isolcpus" parameter in Documentation/kernel-parameters.txt:

---- 8< ----
This option is the preferred way to isolate CPUs. The alternative -- manually setting the CPU mask of all tasks in the system -- can cause problems and suboptimal load balancer performance
---- >8 ----
I have only just begun to develop this library so only the very basic features are implemented. Let me know if someone is interested in using it.
This certainly sounds interesting! However, I intend to try out how well hrtimers behave on a dedicated CPU at the kernel level. I have attached a small hrtimers demo with a callback function running in interrupt context. I can imagine that pretty much all of the data and code involved fits into the L2 cache and that no more pages than TLB entries are used. This would heavily increase determinism.

BTW: what a pity that x86 doesn't have cacheline-locking instructions (as PowerPC does) - then all interrupt-related data and code could be marked with e.g. "__irq" and moved to separate sections that are cache-locked upon startup.

regards Bernhard
Attachment:
hrtimer_demo-20070416.tgz
Description: application/compressed-tar