Re: performance of virtual functions compared to virtio

On Mon, 2011-04-25 at 13:07 -0600, David Ahern wrote:
> On 04/25/11 12:13, Alex Williamson wrote:
> >> So, basically, 192.168.102 is the network where the VMs have a VF, and
> >> 192.168.103 is the network where the VMs use virtio for networking.
> >>
> >> The netperf commands are all run on either Host-A or VM-C:
> >>
> >>   netperf -H $ip -jcC -v 2 -t TCP_RR      -- -r 1024 -D L,R
> >>   netperf -H $ip -jcC -v 2 -t TCP_STREAM  -- -m 1024 -D L,R
> >>
> >>
> >>                    latency      throughput
> >>                     (usec)         Mbps
> >> cross-host:
> >>   A-B, eth2          185            932
> >>   A-B, eth3          185            935
> > 
> > This is actually PF-PF, right?  It would be interesting to load igbvf on
> > the hosts and determine VF-VF latency as well.
> 
> yes, PF-PF. eth3 has the added bridge layer, but from what I can see the
> overhead is noise. I added host-to-host to put the host-to-VM numbers in
> perspective.
> 
> > 
> >> same host, host-VM:
> >>   A-C, using VF      488           1085 (seen as high as 1280's)
> >>   A-C, virtio        150           4282
> > 
> > We know virtio has a shorter path for this test.
> 
> No complaints about the throughput numbers; the latency is the problem.
> 
> > 
> >> cross-host, host-VM:
> >>   A-D, VF            489            938
> >>   A-D, virtio        288            889
> >>
> >> cross-host, VM-VM:
> >>   C-D, VF            488            934
> >>   C-D, virtio        490            933
> >>
> >>
> >> While throughput for VFs is fine (near line-rate when crossing hosts),
> > 
> > FWIW, it's not too difficult to get line rate on a 1Gbps network; even
> > some of the emulated NICs can do it.  There will be a difference in the
> > host CPU cost to get there though, which should theoretically rank
> > emulated > virtio > pci-assign.
> 
> 10GbE is the goal; 1GbE offers a cheaper learning environment. ;-)
> 
> > 
> >> the latency is horrible. Any options to improve that?
> > 
> > If you don't mind testing, I'd like to see VF-VF between the hosts (to
> > do this, don't assign eth2 an IP, just make sure it's up, then load the
> > igbvf driver on the host and assign an IP to one of the VFs associated
> > with the eth2 PF), and cross host testing using the PF for the guest
> > instead of the VF.  This should help narrow down how much of the latency
> > is due to using the VF vs the PF, since all of the virtio tests are
> > using the PF.  I've been suspicious that the VF adds some latency, but
> > haven't had a good test setup (or time) to dig very deep into it.
> 
> It's a quad-port NIC, so I left eth2 and eth3 alone and added the VF-VF
> test using VFs on eth4.
> 
> Indeed latency is 488 usec and throughput is 925 Mbps. This is
> host-to-host using VFs.

So we're effectively getting host-host latency/throughput for the VF,
it's just that in the 82576 implementation of SR-IOV, the VF takes a
latency hit that puts it pretty close to virtio.  Unfortunate.  I think
you'll find that passing the PF to the guests should be pretty close to
that 185us latency.  I would assume (hope) the higher end NICs reduce
this, but it seems to be a hardware limitation, so it's hard to predict.
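
Roughly, assigning the PF to a guest looks like the sketch below; it is the
same mechanism used for the VFs on the 192.168.102 network, just with the
PF's PCI address.  The address, vendor:device ID, and qemu-kvm invocation are
only examples, so check lspci -n on your hosts:

  # detach the PF from igb on the host and reserve it with pci-stub
  echo "8086 10c9" > /sys/bus/pci/drivers/pci-stub/new_id   # use the PF's ID from lspci -n; 10c9 is one 82576 variant
  echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
  echo 0000:01:00.0 > /sys/bus/pci/drivers/pci-stub/bind

  # start the guest with the device assigned (qemu-kvm syntax of this era)
  qemu-kvm ... -device pci-assign,host=01:00.0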
Thanks,

Alex
