Re: Time loss after calling netif_rx() in a kernel thread

Hello Mulyadi!

Mulyadi Santosa wrote:
Anyway, sched_yield() doesn't really work like it used to in 2.4 or with the
old O(1) scheduler in 2.6.
Oh, I didn't know that... :-/
On the other hand, I'm using 2.6.22, and the new CFS scheduler has only been in since v2.6.23, so my kernel still runs the O(1) scheduler...

Why I use yield():
I've just read Robert Love's book "Linux Kernel Development", where he writes about the O(1) scheduler. There I read about sched_yield(): the calling process gives up the CPU it is running on and is moved to the array of expired processes... But there is no sched_yield() function in the 2.6.22 kernel. There is sys_sched_yield(), but it's not exported. And there is the exported yield() function, which is what I use:

--- linux-2.6.22/kernel/sched.c ---
...
/**
 * yield - yield the current processor to other threads.
 *
 * This is a shortcut for kernel-space yielding - it marks the
 * thread runnable and calls sys_sched_yield().
 */
void __sched yield(void)
{
        set_current_state(TASK_RUNNING);
        sys_sched_yield();
}
EXPORT_SYMBOL(yield);
...
---

I just guess, maybe what you mean is schedule()?
yield()... IIRC... means setting current->need_resched to 1 or
TRUE... thus, it will be checked whenever the scheduler is invoked... and
that could be long sometimes.

I've tried 'schedule()' in the past and I can tell you the results:

If I call
 netif_rx(); schedule();
or
 netif_rx(); yield();
then I get bad latencies.

If I call:
 netif_rx(); do_softirq();
then I get very good latencies.
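For illustration, here is a minimal sketch of the polling thread in the do_softirq() variant (on a 2.6.22-era kernel). my_msg_wait() and my_msg_to_skb() are hypothetical stand-ins for the underlying message-queue API and the skb construction; the point is only the netif_rx() + do_softirq() pair:

--- sketch (do_softirq() variant, hypothetical helpers) ---
#include <linux/kthread.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/interrupt.h>

extern void *my_msg_wait(void);                      /* hypothetical */
extern struct sk_buff *my_msg_to_skb(struct net_device *dev,
                                     void *msg);     /* hypothetical */

static int rx_poll_thread(void *arg)
{
        struct net_device *dev = arg;

        while (!kthread_should_stop()) {
                struct sk_buff *skb;

                /* block until the message queue delivers a frame */
                skb = my_msg_to_skb(dev, my_msg_wait());
                if (!skb)
                        continue;

                /* netif_rx() only queues the skb on the backlog and
                 * raises NET_RX_SOFTIRQ ... */
                netif_rx(skb);

                /* ... so run the pending softirqs right away instead
                 * of waiting for the next IRQ exit or scheduler
                 * invocation.  This is what gives the good latencies. */
                do_softirq();
        }
        return 0;
}
---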


BTW, -rt is real time... a soft one... not hard real time. Just in case you
need bounded latency, whereas the normal scheduler could give you
unpredictable or unbounded latency.

Thanks for this information, but our driver should work with the vanilla kernel.

So, on linux-2.6.22 it only works well with do_softirq()...
I can't test the code with newer kernels because the underlying "API" doesn't support kernels newer than 2.6.22 at the moment.

======
BTW:
I also have a solution where I move the "netif_rx() stuff" into a tasklet: when my polling kernel thread receives a message through the underlying message queue, it schedules the tasklet, which then calls netif_rx() on the received data (see the sketch below). The problem is that the start of the tasklet's execution also takes about 0.5 jiffies (2 ms at HZ=250) on average, and up to one jiffy at worst... If I call yield() after tasklet_schedule(), the tasklet is executed "immediately" and I get good latencies, but naturally they aren't as good as with the do_softirq() solution (without tasklets).
======
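For completeness, here is a rough sketch of that tasklet variant. The names (my_rx_tasklet_func(), hand_off_to_tasklet()) and the single-skb hand-off via the tasklet's data field are made up for illustration; a real driver would queue the skbs:

--- sketch (tasklet variant, hypothetical names) ---
#include <linux/interrupt.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/sched.h>

static void my_rx_tasklet_func(unsigned long data)
{
        struct sk_buff *skb = (struct sk_buff *)data;

        /* Runs in softirq context; the NET_RX_SOFTIRQ raised here is
         * handled on the way out of softirq processing. */
        netif_rx(skb);
}

static DECLARE_TASKLET(my_rx_tasklet, my_rx_tasklet_func, 0);

/* Called from the polling kernel thread for each received message.
 * Simplified: handles one skb at a time. */
static void hand_off_to_tasklet(struct sk_buff *skb)
{
        my_rx_tasklet.data = (unsigned long)skb;
        tasklet_schedule(&my_rx_tasklet);

        /* Without this the tasklet may not run for up to ~1 jiffy;
         * yield()ing gets it executed almost immediately. */
        yield();
}
---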


Many thanks again for your hints!

Best regards,
Lukas



