Re: 8% performance improved by change tap interact with kernel stack


 



On 2014/1/28 22:49, Eric Dumazet wrote:
On Tue, 2014-01-28 at 16:14 +0800, Qin Chuanyu wrote:
According to perf test results, I found that 5%-8% of CPU time is spent
in softirq processing because tun_get_user calls netif_rx_ni.

So I changed the call path so that the skb is delivered more quickly:
from
	tun_get_user	->
		 netif_rx_ni(skb);
to
	tun_get_user	->
		rcu_read_lock_bh();
		netif_receive_skb(skb);
		rcu_read_unlock_bh();

No idea why you use rcu here ?

In my first version, I forgot to take a lock when calling netif_receive_skb,
and I then hit a spinlock deadlock while running tcpdump.

tcpdump receives skbs in netif_receive_skb but also in dev_queue_xmit,
and I noticed that dev_queue_xmit takes rcu_read_lock_bh before
transmitting the skb; this lock avoids the race between softirq context
and the transmitting thread:
	/* Disable soft irqs for various locks below. Also
	 * stops preemption for RCU.
	 */
	rcu_read_lock_bh();
Now I transmit the skb from the vhost thread, so I did the same thing.
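
For reference, the change described above can be sketched like this (an
abbreviated sketch of the relevant lines in drivers/net/tun.c's
tun_get_user(), not the full function; surrounding context omitted):

	/* Before: hand the skb to the per-CPU backlog via netif_rx_ni(),
	 * which raises NET_RX_SOFTIRQ; the softirq processing is where
	 * the 5%-8% CPU cost shows up in perf.
	 */
	netif_rx_ni(skb);

	/* After: call into the stack directly from the vhost/tun thread.
	 * netif_receive_skb() must run with BHs disabled, and
	 * rcu_read_lock_bh() also protects the RCU-managed tap handler
	 * list (e.g. tcpdump's packet hooks) against concurrent softirq
	 * delivery -- the same reason dev_queue_xmit() takes it.
	 */
	rcu_read_lock_bh();
	netif_receive_skb(skb);
	rcu_read_unlock_bh();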

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
