Network performance with small packets - continued

We've been doing some more experimenting with the small packet network 
performance problem in KVM.  I have a different setup than what Steve D. was 
using so I re-baselined things on the kvm.git kernel on both the host and 
guest with a 10GbE adapter.  I also made use of the virtio-stats patch.

The virtual machine, running the kvm.git kernel, has 2 vCPUs, 8GB of memory and 
two virtio network adapters (the first connected to a 1GbE adapter and a LAN, 
the second connected to a 10GbE adapter that is directly connected to another 
system with the same 10GbE adapter).  The test was a TCP_RR test with 100 
connections from a baremetal client to the KVM guest, using a 256 byte message 
size in both directions.

I used the uperf tool to do this after verifying the results against netperf.  
Uperf allows the specification of the number of connections as a parameter in 
an XML file as opposed to launching, in this case, 100 separate instances of 
netperf.
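
For reference, a uperf profile for this kind of run looks roughly like the
sketch below (modeled on uperf's netperf-style TCP_RR sample; the remote host
address and run time are illustrative, not the exact file I used):

<?xml version="1.0"?>
<profile name="TCP_RR">
  <group nthreads="100">
    <transaction iterations="1">
      <flowop type="connect" options="remotehost=192.168.1.1 protocol=tcp"/>
    </transaction>
    <transaction duration="120s">
      <flowop type="write" options="size=256"/>
      <flowop type="read" options="size=256"/>
    </transaction>
    <transaction iterations="1">
      <flowop type="disconnect"/>
    </transaction>
  </group>
</profile>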

Here is the baseline for baremetal using 2 physical CPUs:
  Txn Rate: 206,389.59 Txn/Sec, Pkt Rate: 410,048 Pkts/Sec
  TxCPU: 7.88%  RxCPU: 99.41%

To get consistent results with KVM I disabled hyperthreading and pinned the 
qemu-kvm process, vCPUs, vhost thread and ethernet adapter interrupts (with 
this, runs differed by only about 2% from lowest to highest).  The fact that 
pinning is required to get consistent results is a separate problem that we'll 
have to look into later...

Here is the KVM baseline (average of six runs):
  Txn Rate: 87,070.34 Txn/Sec, Pkt Rate: 172,992 Pkts/Sec
  Exits: 148,444.58 Exits/Sec
  TxCPU: 2.40%  RxCPU: 99.35%
About 42% of baremetal.

The virtio stats output showed a lot of kick_notify happening when the ring was
empty.  So I coded a quick patch to delay freeing of the used Tx buffers until 
more than half the ring was used (I did not test this under a stream condition 
so I don't know if this would have a negative impact).  Here are the results 
from delaying the freeing of used Tx buffers (average of six runs):
  Txn Rate: 90,886.19 Txn/Sec, Pkt Rate: 180,571 Pkts/Sec
  Exits: 142,681.67 Exits/Sec
  TxCPU: 2.78%  RxCPU: 99.36%
About a 4% increase over baseline and about 44% of baremetal.
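
As a rough illustration of the idea (a standalone userspace model, not the
actual virtio-net patch, with made-up names), used Tx buffers are only
reclaimed once more than half of the ring entries are outstanding instead of
on every transmit:

#include <stdio.h>

#define RING_SIZE 256

struct tx_ring {
        int in_flight;          /* descriptors posted but not yet freed */
};

/* Reclaim completed Tx buffers only when the ring is more than half full. */
static int maybe_reclaim(struct tx_ring *r)
{
        int freed;

        if (r->in_flight <= RING_SIZE / 2)
                return 0;       /* not enough pending yet: defer the work */

        /* The real driver would loop over virtqueue_get_buf() and free the
         * skbs; here we just drain the counter. */
        freed = r->in_flight;
        r->in_flight = 0;
        return freed;
}

int main(void)
{
        struct tx_ring r = { .in_flight = 0 };
        int pkt, freed;

        for (pkt = 1; pkt <= 1000; pkt++) {
                r.in_flight++;          /* one descriptor per packet */
                freed = maybe_reclaim(&r);
                if (freed)
                        printf("packet %4d: reclaimed %d used buffers\n",
                               pkt, freed);
        }
        return 0;
}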

This spread out the kick_notify but still resulted in a lot of them.  I decided 
to build on the delayed Tx buffer freeing and code up an "ethtool"-like 
coalescing patch in order to delay the kick_notify until there were at least 5 
packets on the ring or 2000 usecs, whichever occurred first.  Here are the 
results of delaying the kick_notify (average of six runs):
  Txn Rate: 107,106.36 Txn/Sec, Pkt Rate: 212,796 Pkts/Sec
  Exits: 102,587.28 Exits/Sec
  TxCPU: 3.03%  RxCPU: 99.33%
About a 23% increase over baseline and about 52% of baremetal.
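
A minimal userspace sketch of the coalescing logic (again not the actual
patch; the structure and field names just mirror ethtool's tx-frames/tx-usecs
style parameters):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

struct tx_coalesce {
        unsigned int max_frames;        /* kick after this many packets...  */
        unsigned int usecs;             /* ...or after this much time       */
        unsigned int pending;           /* packets queued since last kick   */
        uint64_t     last_kick_us;
};

static uint64_t now_us(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000ULL + ts.tv_nsec / 1000;
}

/* Called for each queued packet; returns true when the host should be kicked. */
static bool tx_should_kick(struct tx_coalesce *c)
{
        uint64_t now = now_us();

        c->pending++;
        if (c->pending < c->max_frames && now - c->last_kick_us < c->usecs)
                return false;           /* keep batching */

        c->pending = 0;
        c->last_kick_us = now;
        return true;                    /* time to kick_notify the host */
}

int main(void)
{
        struct tx_coalesce c = {
                .max_frames = 5, .usecs = 2000, .last_kick_us = now_us(),
        };
        unsigned int kicks = 0;
        int pkt;

        for (pkt = 0; pkt < 100000; pkt++)
                if (tx_should_kick(&c))
                        kicks++;

        printf("100000 packets -> %u kicks\n", kicks);
        return 0;
}

A real driver also needs a timer so the last packets of a burst aren't
stranded when traffic stops; this sketch only checks the thresholds when
another packet is queued.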

Running the perf command against the guest I noticed almost 19% of the time 
being spent in _raw_spin_lock.  Enabling lockstat in the guest showed a lot of 
contention in the "irq_desc_lock_class". Pinning the virtio1-input interrupt 
to a single cpu in the guest and re-running the last test resulted in 
tremendous gains (average of six runs):
  Txn Rate: 153,696.59 Txn/Sec, Pkt Rate: 305,358 Pkts/Sec
  Exits: 62,603.37 Exits/Sec
  TxCPU: 3.73%  RxCPU: 98.52%
About a 77% increase over baseline and about 74% of baremetal.
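
For what it's worth, the guest-side interrupt pinning boils down to writing a
CPU mask into /proc/irq/<irq>/smp_affinity; the IRQ number and mask in this
sketch are placeholders, not the values from these runs:

#include <stdio.h>

int main(void)
{
        const int irq = 45;                 /* hypothetical virtio1-input IRQ */
        const unsigned int cpu_mask = 0x2;  /* restrict servicing to CPU 1    */
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
        f = fopen(path, "w");
        if (!f) {
                perror(path);
                return 1;
        }
        fprintf(f, "%x\n", cpu_mask);       /* hex CPU bitmask, as the file expects */
        fclose(f);
        return 0;
}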

Vhost is receiving a lot of notifications for packets that are to be 
transmitted (over 60% of the packets generate a kick_notify).  Also, it looks 
like vhost is sending a lot of notifications for packets it has received 
before the guest can get scheduled to disable notifications and begin 
processing the packets, which results in some lock contention in the guest 
(and high interrupt rates).

Some thoughts for the transmit path...  can vhost be enhanced to do some 
adaptive polling so that the number of kick_notify events is reduced and 
replaced by kick_no_notify events?

Comparing the transmit path to the receive path, the guest disables 
notifications after the first kick and vhost re-enables notifications after 
completing processing of the tx ring.  Can a similar thing be done for the 
receive path?  Once vhost sends the first notification for a received packet 
it can disable notifications and let the guest re-enable notifications when it 
has finished processing the receive ring.  Also, can the virtio-net driver do 
some adaptive polling (or does NAPI take care of that for the guest)?
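
Here is a small single-threaded model of that proposed receive-side scheme
(purely illustrative; the real ring code is more involved): the producer
plays the role of vhost and only notifies for the first packet of a burst,
and the consumer plays the role of the guest and re-enables notifications
once it has drained the ring.

#include <stdbool.h>
#include <stdio.h>

struct rx_ring {
        int  head, tail;                /* producer / consumer positions  */
        bool notify_enabled;            /* consumer wants a notification  */
};

/* Producer (vhost) side: queue a packet, notify only if the consumer asked. */
static bool rx_produce(struct rx_ring *r)
{
        r->head++;
        if (!r->notify_enabled)
                return false;           /* guest is already processing */
        r->notify_enabled = false;      /* suppress until the guest re-arms */
        return true;                    /* inject one "interrupt" */
}

/* Consumer (guest) side: drain everything, then re-enable notifications. */
static int rx_poll(struct rx_ring *r)
{
        int done = 0;

        while (r->tail < r->head) {
                r->tail++;
                done++;
        }
        r->notify_enabled = true;       /* re-arm once the ring is empty */
        return done;
}

int main(void)
{
        struct rx_ring r = { 0, 0, true };
        int burst, i, notifications = 0, packets = 0;

        for (burst = 0; burst < 1000; burst++) {
                for (i = 0; i < 10; i++)        /* burst of 10 packets */
                        notifications += rx_produce(&r);
                packets += rx_poll(&r);
        }
        printf("%d packets delivered with %d notifications\n",
               packets, notifications);
        return 0;
}

In the real code the guest would also have to re-check the ring after
re-enabling notifications to close the race with packets that arrive in
between, the same way the existing virtio callback-enable path does.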

Running the same workload on the same configuration with a different 
hypervisor results in performance that is almost equivalent to baremetal 
without doing any pinning.

Thanks,
Tom Lendacky