On 02/11/2014 10:25 PM, Qin Chuanyu wrote:
> We could transmit directly instead of going through the softirq path, and
> gain improved throughput and latency.
>
> Test model: VM-Host-Host, transmit only, with the vhost thread and the NIC
> interrupt bound to CPU 1. netperf was used for the throughput test and
> qperf for the latency test.
> Host OS: SUSE 11 SP3; Guest OS: SUSE 11 SP3
>
> Latency results (us):
> packet_len   64     256    512    1460
> old (UDP)    44     47     48     66
> new (UDP)    38     41     42     66
>
> old (TCP)    52     55     70     117
> new (TCP)    45     48     61     114
>
> Throughput results (Gbit/s):
> packet_len   64     512    1024   1460
> old (UDP)    0.42   2.02   3.75   4.68
> new (UDP)    0.45   2.14   3.77   5.06
>
> For TCP, because of the latency, the client could not send packets large
> enough to benefit from the NIC's TSO, so the results show it sends more
> packets per second but achieves lower throughput.
>
> Eric mentioned that this would have a problem with cgroups, but a patch
> for that had already been sent by Herbert Xu:
> patch id f845172531fb7410c7fb7780b1a6e51ee6df7d52
>
> Signed-off-by: Chuanyu Qin <qinchuanyu@xxxxxxxxxx>
> ---

A question: without a NAPI weight, could this starve other net devices?

>  drivers/net/tun.c |    4 +++-
>  1 files changed, 3 insertions(+), 1 deletions(-)
>
> diff --git a/drivers/net/tun.c b/drivers/net/tun.c
> index 44c4db8..90b4e58 100644
> --- a/drivers/net/tun.c
> +++ b/drivers/net/tun.c
> @@ -1184,7 +1184,9 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
>  	skb_probe_transport_header(skb, 0);
>
>  	rxhash = skb_get_hash(skb);
> -	netif_rx_ni(skb);
> +	rcu_read_lock_bh();
> +	netif_receive_skb(skb);
> +	rcu_read_unlock_bh();
>
>  	tun->dev->stats.rx_packets++;
>  	tun->dev->stats.rx_bytes += len;
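
For context, the difference between the two paths in the hunk can be sketched as below. This is a non-compilable outline of the kernel call flow, not the actual implementation; the comments summarize the generally documented behavior of netif_rx_ni() and netif_receive_skb() rather than anything stated in the patch itself.

```c
/*
 * Old path:
 *   netif_rx_ni(skb);
 *     - enqueues the skb on the per-CPU input queue and raises
 *       NET_RX_SOFTIRQ; the packet is processed later in softirq
 *       context, adding a queue/schedule hop between the vhost
 *       thread and the network stack.
 *
 * New path (this patch):
 *   rcu_read_lock_bh();      // disable bottom halves and enter an RCU
 *                            // read-side section; netif_receive_skb()
 *                            // expects to run with BHs disabled, as it
 *                            // normally executes from softirq context
 *   netif_receive_skb(skb);  // deliver the skb to the stack
 *                            // synchronously, in the caller's
 *                            // (vhost/process) context
 *   rcu_read_unlock_bh();
 */
```

Skipping the softirq hop is presumably where the latency gain in the numbers above comes from; the open question in the reply is that, unlike NAPI polling, this synchronous path has no weight/budget bounding how much work it does before yielding the CPU.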