Re: Fw: Benchmarking for vhost polling patch

Hi Razya,
Thanks for the update.
So that's reasonable, I think, and it makes sense to keep
working on this in isolation - it's more manageable at this size.

The big questions in my mind:
- What happens if the system is lightly loaded?
  E.g. a ping/pong benchmark. How much extra CPU are
  we wasting?
- We see the best performance on your system with 10 usec worth of polling.
  It's OK to be able to tune it for best performance, but
  most people don't have the time or the inclination.
  So what would be the best value for other CPUs?
- Should this be tunable from userspace, per vhost instance?
  Why is it only tunable globally? (A minimal sketch of the
  global-knob pattern follows this list.)
- How bad is it if you don't pin vhost and vcpu threads?
  Is the scheduler smart enough to pull them apart?
- What happens in overcommit scenarios? Does polling make things
  much worse?
  Clearly polling will work worse if e.g. vhost and vcpu
  share the host cpu. How can we avoid conflicts?

  For the last two questions, better cooperation with the host scheduler
  will likely help here.
  See e.g.  http://thread.gmane.org/gmane.linux.kernel/1771791/focus=1772505
  I'm currently looking at pushing something similar upstream;
  if it goes in, vhost polling can follow the same approach.
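
To make the global/per-instance distinction concrete, here is a minimal,
hypothetical sketch of the global-knob pattern: a single module parameter
shared by every vhost worker in the system. The name poll_usecs is invented
for this sketch and is not the patch's actual tunable; a per-instance knob
would instead be set on each vhost fd, e.g. via an ioctl.

#include <linux/init.h>
#include <linux/module.h>
#include <linux/moduleparam.h>

/*
 * Hypothetical sketch: one module-wide knob that every vhost worker
 * shares, which is what "only tunable globally" means here. The name
 * poll_usecs is made up for illustration.
 */
static unsigned int poll_usecs = 10;
module_param(poll_usecs, uint, 0644);
MODULE_PARM_DESC(poll_usecs, "busy-poll window per worker, in usecs");

static int __init poll_knob_init(void)
{
        pr_info("poll_knob: global polling window is %u usecs\n", poll_usecs);
        return 0;
}

static void __exit poll_knob_exit(void)
{
}

module_init(poll_knob_init);
module_exit(poll_knob_exit);
MODULE_LICENSE("GPL");

With 0644 permissions the value also appears under
/sys/module/<module>/parameters/, so it can be changed at runtime -
but still only for all instances at once.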

Any data points to shed light on these questions?

On Thu, Jan 01, 2015 at 02:59:21PM +0200, Razya Ladelsky wrote:
> Hi Michael,
> Just a follow-up on the polling patch numbers.
> Please let me know if you find these numbers convincing enough to continue
> with submitting this patch.
> Otherwise - we'll have this patch submitted as part of the larger Elvis 
> patch set rather than independently.
> Thank you,
> Razya 
> 
> ----- Forwarded by Razya Ladelsky/Haifa/IBM on 01/01/2015 09:37 AM -----
> 
> From:   Razya Ladelsky/Haifa/IBM@IBMIL
> To:     mst@xxxxxxxxxx
> Cc: 
> Date:   25/11/2014 02:43 PM
> Subject:        Re: Benchmarking for vhost polling patch
> Sent by:        kvm-owner@xxxxxxxxxxxxxxx
> 
> 
> 
> Hi Michael,
> 
> > Hi Razya,
> > On the netperf benchmark, it looks like polling=10 gives a modest but
> > measurable gain.  So from that perspective it might be worth it if it's
> > not too much code, though we'll need to spend more time checking the
> > macro effect - we barely moved the needle on the macro benchmark and
> > that is suspicious.
> 
> I ran memcached with various values for the key & value arguments, and
> managed to see a bigger impact of polling than when I used the default
> values. Here are the numbers:
> 
> key=250, value=2048
> 
>             TPS      net    vhost  vm    TPS/CPU  TPS/CPU
>                      rate   util   util           change
> 
> polling=0   101540   103.0  46     100   695.47
> polling=5   136747   123.0  83     100   747.25   0.074440609
> polling=7   140722   125.7  84     100   764.79   0.099663658
> polling=10  141719   126.3  87     100   757.85   0.089688003
> polling=15  142430   127.1  90     100   749.63   0.077863015
> polling=25  146347   128.7  95     100   750.49   0.079107993
> polling=50  150882   131.1  100    100   754.41   0.084733701
> 
> Macro benchmarks are less I/O intensive than the micro benchmark, which
> is why we can expect less impact from polling than with netperf.
> However, as shown above, we managed to get a 10% TPS/CPU improvement with
> the polling patch.
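
For reference, the TPS/CPU column appears to be TPS divided by the combined
CPU utilization (vhost util + vm util), and the change column is relative to
the polling=0 baseline - an inference from the numbers above, not something
stated in the original. The short C program below hard-codes the table rows
and reproduces both columns:

#include <stdio.h>

/*
 * Reproduce the TPS/CPU and change columns from the memcached table:
 * efficiency = TPS / (vhost util + vm util), change = relative gain
 * over the polling=0 baseline. Row values are copied from the table.
 */
int main(void)
{
        static const struct {
                int polling;
                double tps, vhost_util, vm_util;
        } rows[] = {
                {  0, 101540,  46, 100 },
                {  5, 136747,  83, 100 },
                {  7, 140722,  84, 100 },
                { 10, 141719,  87, 100 },
                { 15, 142430,  90, 100 },
                { 25, 146347,  95, 100 },
                { 50, 150882, 100, 100 },
        };
        double base = rows[0].tps / (rows[0].vhost_util + rows[0].vm_util);

        for (int i = 0; i < 7; i++) {
                double eff = rows[i].tps / (rows[i].vhost_util + rows[i].vm_util);
                printf("polling=%-3d TPS/CPU=%7.2f change=%+.4f\n",
                       rows[i].polling, eff, eff / base - 1.0);
        }
        return 0;
}

Computed this way, the peak is at polling=7 (about +9.97%), which matches
the roughly 10% improvement cited above.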
> 
> > Is there a chance you are actually trading latency for throughput?
> > Do you observe any effect on latency?
> 
> No.
> 
> > How about trying some other benchmark, e.g. NFS?
> > 
> 
> I tried, but it didn't produce enough I/O (vhost was at 15% util at most)

OK, but was there a regression in this case?


> > 
> > Also, I am wondering:
> > 
> > since the vhost thread is polling in the kernel anyway, shouldn't
> > we try to poll the host NIC as well?
> > That would likely reduce at least the latency significantly,
> > wouldn't it?
> > 
> 
> Yes, it could be a great addition at some point, but it needs a thorough
> investigation. In any case, not a part of this patch...
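
For context on the NIC-polling idea: the host kernel has supported receive
busy polling on ordinary sockets since Linux 3.11, via the SO_BUSY_POLL
socket option and the net.core.busy_poll / net.core.busy_read sysctls. An
in-kernel vhost poll of the NIC would be an analogue of that mechanism. A
minimal userspace sketch of the existing knob:

#include <stdio.h>
#include <sys/socket.h>

/*
 * Enable receive busy polling on one socket: the kernel spins on the
 * device queue for up to 'usecs' microseconds before sleeping. This
 * is the existing host-side mechanism the NIC-polling idea parallels.
 */
static int enable_busy_poll(int fd, int usecs)
{
        if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL,
                       &usecs, sizeof(usecs)) < 0) {
                perror("setsockopt(SO_BUSY_POLL)");
                return -1;
        }
        return 0;
}

int main(void)
{
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0 || enable_busy_poll(fd, 50) < 0)
                return 1;
        return 0;
}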
> 
> Thanks,
> Razya
> 