Latency issue with wait_event()/wake_up() calls

Hello,

I need help/advice on a little "performance" problem I have:

I am using Linux 3.0.3 with the matching RT patch on an ARM9 processor.

I have an RT task which reads from a driver. The RT task blocks in wait_event_interruptible(), and the driver's interrupt handler releases the waiting task with wake_up_interruptible() when data is available.
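
In case it helps, the pattern looks roughly like this (a minimal sketch; my_wq, data_ready and the function names are simplified placeholders, not the real driver code):

#include <linux/wait.h>
#include <linux/interrupt.h>

static DECLARE_WAIT_QUEUE_HEAD(my_wq);
static int data_ready;

/* Reader side, called from the RT task's read() path. */
static int my_read(void)
{
	int ret;

	ret = wait_event_interruptible(my_wq, data_ready);
	if (ret)
		return ret;	/* woken by a signal */
	data_ready = 0;
	/* ... copy the data out to the caller ... */
	return 0;
}

/* Threaded interrupt handler (SCHED_FIFO priority 50 by default on RT). */
static irqreturn_t my_driver_irq_handler(int irq, void *dev_id)
{
	/* ... fetch the data from the device ... */
	data_ready = 1;
	wake_up_interruptible(&my_wq);
	return IRQ_HANDLED;
}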

By default on Linux RT, driver IRQ handlers run in kernel threads at SCHED_FIFO priority 50.

At first I set up my RT task with a priority of 10.
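
(The task's priority is set from userspace with the standard sched_setscheduler() call; simplified sketch, error handling kept minimal:)

#include <sched.h>
#include <stdio.h>

int main(void)
{
	struct sched_param sp = { .sched_priority = 10 };

	/* Make the calling task SCHED_FIFO priority 10 (needs root/CAP_SYS_NICE). */
	if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
		perror("sched_setscheduler");
		return 1;
	}

	/* ... RT task body: blocking read() on the driver ... */
	return 0;
}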

When data is received by the system, I see the following scheduling sequence (using LTTng):

- irq_handler
- my_driver_irq_handler (priority 50)
- my_task (priority 10)

This is all good, but I really need "my_task" to be high priority. Once an event is received, it needs to process it immediately, without being interrupted by further events arriving on the device.

So now I set my_task's priority to 80. When data is received, I see the following scheduling sequence (still using LTTng):

- irq_handler
- my_driver_irq_handler (priority 50)
- my_task (priority 80) // a short run; I assume the task is just checking the wait_event() "condition"
- my_driver_irq_handler (priority 80) // here there seems to be an "automatic" priority boost (priority inheritance) so the handler can run and finish the wake_up() call
- my_task (priority 80) // runs until it blocks again
- my_driver_irq_handler (priority 50) // end of the interrupt handler

The good news is that "my_task" can now no longer be interrupted by other interrupt-handler processing while it runs. But because of the two additional context switches that occur when the interrupt handler wakes up "my_task", I now see additional latency, and that is not acceptable.

On average, latency is better when "my_task" runs at priority 10, but the real-time behavior is better at priority 80.

The latency difference is mainly due to the additional context switches that occur when the interrupt handler wakes up "my_task". So my question is: are these additional context switches mandatory? Is there a way to avoid them?

As an exercise, I programmatically raised the interrupt handler's priority to 99 before the wake_up() and lowered it back to 50 just after (sketched in code below). With this change, I now see (with LTTng):

- irq_handler
- my_driver_irq_handler (priority 50, then 99) // runs until its priority is set back to 50
- my_task (priority 80) // runs until it blocks again
- my_driver_irq_handler (priority 50) // end of the interrupt handler

Setting the IRQ handler priority is fast and costs almost nothing (compared to a context switch). As a result, I no longer have the additional latency caused by the two context switches of the previous example, and I get the best of both worlds: short latency and the expected real-time behavior (I am not interrupted by subsequent interrupt-handler processing).
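
Roughly what I did in the handler (a sketch only, using the same placeholder names as above; the kernel-internal sched_setscheduler() is called on current from within the threaded handler):

#include <linux/sched.h>
#include <linux/interrupt.h>
#include <linux/wait.h>

/* Boost ourselves (the threaded handler) around the wake_up(), then
 * drop back to the default handler priority. */
static irqreturn_t my_driver_irq_handler(int irq, void *dev_id)
{
	struct sched_param hi = { .sched_priority = 99 };
	struct sched_param lo = { .sched_priority = 50 };

	/* ... fetch the data from the device ... */
	data_ready = 1;

	sched_setscheduler(current, SCHED_FIFO, &hi);	/* raise to 99 */
	wake_up_interruptible(&my_wq);			/* wake my_task */
	sched_setscheduler(current, SCHED_FIFO, &lo);	/* back to 50 */

	/* ... remaining handler work runs at priority 50 ... */
	return IRQ_HANDLED;
}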

But I feel that explicitly and dynamically changing the driver IRQ handler's priority to get the expected behavior may not be the appropriate solution.

Am I missing something? Is there a better way to prevent the additional context switches?

Thanks for any advice/hint you may have.

JC