RE: [PATCH] vhost: Add polling mode

David Laight <David.Laight@xxxxxxxxxx> wrote on 21/08/2014 05:29:41 PM:

> From: David Laight <David.Laight@xxxxxxxxxx>
> To: Razya Ladelsky/Haifa/IBM@IBMIL, "Michael S. Tsirkin" <mst@xxxxxxxxxx>
> Cc: "abel.gordon@xxxxxxxxx" <abel.gordon@xxxxxxxxx>,
>     Alex Glikson/Haifa/IBM@IBMIL, Eran Raichstein/Haifa/IBM@IBMIL,
>     Joel Nider/Haifa/IBM@IBMIL, "kvm@xxxxxxxxxxxxxxx" <kvm@xxxxxxxxxxxxxxx>,
>     "linux-kernel@xxxxxxxxxxxxxxx" <linux-kernel@xxxxxxxxxxxxxxx>,
>     "netdev@xxxxxxxxxxxxxxx" <netdev@xxxxxxxxxxxxxxx>,
>     "virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx" <virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx>,
>     Yossi Kuperman1/Haifa/IBM@IBMIL
> Date: 21/08/2014 05:31 PM
> Subject: RE: [PATCH] vhost: Add polling mode
> 
> From: Razya Ladelsky
> > "Michael S. Tsirkin" <mst@xxxxxxxxxx> wrote on 20/08/2014 01:57:10 PM:
> > 
> > > > Results:
> > > >
> > > > Netperf, 1 vm:
> > > > The polling patch improved throughput by ~33% (1516 MB/sec -> 2046 MB/sec).
> > > > Number of exits/sec decreased 6x.
> > > > The same improvement was shown when I tested with 3 vms running
> > > > netperf (4086 MB/sec -> 5545 MB/sec).
> > > >
> > > > filebench, 1 vm:
> > > > ops/sec improved by 13% with the polling patch. Number of exits
> > > > was reduced by 31%.
> > > > The same experiment with 3 vms running filebench showed similar numbers.
> > > >
> > > > Signed-off-by: Razya Ladelsky <razya@xxxxxxxxxx>
> > >
> > > This really needs a more thorough benchmarking report, including
> > > system data.  One good example for a related patch:
> > > http://lwn.net/Articles/551179/
> > > though for virtualization we need data about the host as well, and if
> > > you want to look at streaming benchmarks, you need to test different
> > > message sizes and measure packet size.
> > >
> > 
> > Hi Michael,
> > I have already tried running netperf with several message sizes:
> > 64, 128, 256, 512, 600, 800...
> > But the results are inconsistent, even in the baseline/unpatched
> > configuration.
> > For smaller message sizes I get consistent numbers. However, at some
> > point, as I increase the message size, the results become unstable.
> > For example, for a 512B message I see two scenarios:
> > vm utilization 100%, vhost utilization 75%, throughput ~6300
> > vm utilization 80%, vhost utilization 13%, throughput ~9400 (line rate)
> > 
> > I don't know why vhost is behaving this way for certain message sizes.
> > Do you have any insight into why this is happening?
> 
> Have you tried looking at the actual ethernet packet sizes?
> It may well jump between using small packets (the size of the writes)
> and full-sized ones.

I will check it; a capture along the lines of the sketch below should
show whether the frame sizes jump.
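
(A minimal sketch of such a capture, not a tested recipe: the interface
name, frame count, and filter below are assumptions, and tcpdump needs
root privileges.)

import re
import subprocess
from collections import Counter

# Capture 2000 frames on the host-side interface (name is an assumption)
# and histogram the link-level frame lengths that tcpdump prints with -e.
cap = subprocess.run(
    ["tcpdump", "-i", "eth0", "-e", "-nn", "-c", "2000", "tcp"],
    capture_output=True, text=True, check=True,
)
lengths = Counter(int(m.group(1))
                  for m in re.finditer(r"length (\d+):", cap.stdout))
for length, count in sorted(lengths.items()):
    print(f"{length:5d} bytes: {count} frames")

If the histogram is bimodal across runs (write-sized frames in the slow
case, MTU-sized frames in the fast one), that would match the jump David
describes.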
Thanks,
Razya
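
P.S. For reference, the message-size sweep mentioned above, presumably
driven through netperf's -m (send size) option, has roughly this shape.
A sketch only: the target address, run length, and the loop of sizes are
assumptions.

import subprocess

# Sweep TCP_STREAM send-message sizes; -m is netperf's test-specific
# send-size option. The target address here is hypothetical.
HOST = "192.168.0.2"
for size in (64, 128, 256, 512, 600, 800):
    out = subprocess.run(
        ["netperf", "-H", HOST, "-t", "TCP_STREAM", "-l", "30",
         "--", "-m", str(size)],
        capture_output=True, text=True, check=True,
    )
    # The last line of netperf's output carries the throughput figure.
    print(f"{size:4d}B: {out.stdout.strip().splitlines()[-1]}")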

> 
> If you are trying to measure ethernet packet 'cost' you need to use UDP.
> However, that probably uses different code paths.
> 
>    David
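
(For the UDP measurement David suggests, the matching netperf test would
presumably be UDP_STREAM, where -m sets the datagram size; the target
address and size below are assumptions.)

import subprocess

# UDP_STREAM reports both send-side and receive-side throughput, which
# also exposes drops. Below the MTU each -m-sized write becomes one
# datagram, so per-packet cost can be read off directly.
out = subprocess.run(
    ["netperf", "-H", "192.168.0.2", "-t", "UDP_STREAM", "-l", "30",
     "--", "-m", "512"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)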
