Re: [v3 RFC PATCH 0/4] Implement multiqueue virtio-net

On Thu, Oct 28, 2010 at 12:48:57PM +0530, Krishna Kumar2 wrote:
> > Krishna Kumar2/India/IBM wrote on 10/28/2010 10:44:14 AM:
> >
> > > > > > Results for UDP BW tests (unidirectional, sum across
> > > > > > 3 iterations, each iteration of 45 seconds, default
> > > > > > netperf, vhosts bound to cpus 0-3; no other tuning):
> > > > >
> > > > > Is binding vhost threads to CPUs really required?
> > > > > What happens if we let the scheduler do its job?
> > > >
> > > > Nothing drastic, I remember BW% and SD% both improved a
> > > > bit as a result of binding.
> > >
> > > If there's a significant improvement this would mean that
> > > we need to rethink the vhost-net interaction with the scheduler.
> >
> > I will get a test run with and without binding and post the
> > results later today.
> 
> Correction: The result with binding is much better for
> SD/CPU compared to without-binding:
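
As an aside, a minimal sketch of how such binding can be done from
userspace, equivalent to "taskset -c <cpu> -p <pid>". This is only an
illustration, not the exact setup used for the numbers above, and the
vhost worker thread PID is assumed to be looked up separately (e.g.
from ps):

/* Pin a vhost worker thread to one CPU via sched_setaffinity(2). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	if (argc != 3) {
		fprintf(stderr, "usage: %s <vhost-thread-pid> <cpu>\n", argv[0]);
		return 1;
	}

	pid_t pid = atoi(argv[1]);
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(atoi(argv[2]), &set);

	if (sched_setaffinity(pid, sizeof(set), &set)) {
		perror("sched_setaffinity");
		return 1;
	}
	return 0;
}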

Something that was suggested to me off-list is
trying to set the SMP affinity of the NIC interrupts:
in the host-to-guest case probably the guest's virtio-net,
and for external-to-guest traffic the host NIC as well.
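
For reference, a minimal sketch of setting the SMP affinity of a single
interrupt, equivalent to echoing a hex CPU mask into
/proc/irq/<irq>/smp_affinity. The IRQ number (taken from
/proc/interrupts, e.g. for a virtio-net queue or the host NIC) and the
mask are illustrative assumptions, not values from the tests above:

/* Write a hex CPU mask to /proc/irq/<irq>/smp_affinity. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	if (argc != 3) {
		fprintf(stderr, "usage: %s <irq> <hex-cpu-mask>\n", argv[0]);
		return 1;
	}

	char path[64];
	snprintf(path, sizeof(path), "/proc/irq/%s/smp_affinity", argv[1]);

	FILE *f = fopen(path, "w");
	if (!f) {
		perror(path);
		return 1;
	}
	/* e.g. mask "1" pins the IRQ to CPU 0, "f" allows CPUs 0-3 */
	fprintf(f, "%s\n", argv[2]);
	fclose(f);
	return 0;
}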

-- 
MST

