Hello everyone!
I'm developing an Ethernet interface module and I have the following issue.
In my module I create a kernel thread whose handler is
'rx_thread_action' (please see below). It polls a message queue
provided by the driver of an SCI card, but that part is not important
here.
static inline void
rx_action(struct net_device *dev, u8 *data, int datalen)
{
        struct sk_buff *skb;
        write_timestamp(data);          /* stamp the payload before it is copied */
        /* encapsulate the data in an skb... */
        skb = dev_alloc_skb(datalen);
        if (!skb)
                return;
        memcpy(skb_put(skb, datalen), data, datalen);
        skb->protocol = eth_type_trans(skb, dev);
        /* ...and pass it to the upper layers */
        netif_rx(skb);
}
static int
rx_thread_action(void *d)
{
        ...
        while (!signal_pending(current)) {
                ...
                /* poll the message queue of the SCI card driver */
                if (receive(message_queue, buf, size, ...)) {
                        rx_action(dev, buf, size);
                }
                ...
        }
        ...
}
Each message received from the message queue gets a timestamp before
it is passed to the kernel via 'netif_rx()'.
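For completeness, write_timestamp() simply puts the current time at
the start of the payload; a simplified sketch (the exact layout in my
module may differ slightly):

static void write_timestamp(u8 *data)
{
        struct timeval tv;

        do_gettimeofday(&tv);           /* wall-clock time, us resolution */
        memcpy(data, &tv, sizeof(tv));  /* the receiver reads it back from here */
}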
I've written a user space application which receives these messages
via 'recvfrom()' on a UDP socket, so it can calculate the difference
between the timestamp in the packet and the time at which the packet
arrives at the socket.
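The measuring side looks roughly like this (simplified sketch; the
port number is just a placeholder and the timestamp layout must match
what the module writes):

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <netinet/in.h>

int main(void)
{
        char buf[2048];
        struct sockaddr_in addr = { 0 };
        int s = socket(AF_INET, SOCK_DGRAM, 0);

        addr.sin_family = AF_INET;
        addr.sin_port = htons(12345);              /* placeholder port */
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(s, (struct sockaddr *)&addr, sizeof(addr));

        for (;;) {
                struct timeval now, sent;
                ssize_t n = recvfrom(s, buf, sizeof(buf), 0, NULL, NULL);

                if (n < (ssize_t)sizeof(sent))
                        continue;
                gettimeofday(&now, NULL);
                memcpy(&sent, buf, sizeof(sent));  /* timestamp written by the module */
                printf("delay: %ld us\n",
                       (now.tv_sec - sent.tv_sec) * 1000000L +
                       (now.tv_usec - sent.tv_usec));
        }
        return 0;
}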
The results are really bad: I see delays of up to 4 ms, which is
exactly one jiffy at CONFIG_HZ=250 (1/250 s = 4 ms). When I receive
many packets, the average time between calling 'netif_rx(skb)' and
getting the packet via 'recvfrom()' in user space is about 2 ms.
Why does the kernel need such a long time (a whole jiffy) between the
call to 'netif_rx()' and the delivery of the message to the
corresponding user space socket?
Is it because I call 'netif_rx()' from a kernel thread (i.e. in
process context), whereas it is normally called by a network card
driver in interrupt context?
How can I get rid of this delay?
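As far as I understand, netif_rx() only puts the skb on the per-CPU
backlog queue and raises NET_RX_SOFTIRQ; from interrupt context the
softirq runs right after the handler returns, but from my kernel
thread nothing seems to force it to run before the next tick. Would
switching to netif_rx_ni(), which is meant for process context and
processes pending softirqs itself, be the right fix? I.e. in
rx_action():

        /* process context: kick the softirq ourselves */
        netif_rx_ni(skb);

Or should I rather wrap the netif_rx() call in local_bh_disable() /
local_bh_enable(), so that pending softirqs are run on enable?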
Many thanks for any help!
Regards,
Lukas