Re: [net-next RFC V5 3/5] virtio: introduce an API to set affinity for a virtqueue

On Fri, Jul 27, 2012 at 04:38:11PM +0200, Paolo Bonzini wrote:
> On 05/07/2012 12:29, Jason Wang wrote:
> > Sometimes, a virtio device needs to configure the irq affinity hint to
> > maximize performance. Instead of just exposing the irq of a virtqueue,
> > this patch introduces an API to set the affinity for a virtqueue.
> > 
> > The API is best-effort: the affinity hint may not be set as expected
> > due to platform support, irq sharing or irq type. Currently, only the
> > PCI method is implemented, and we set the affinity as follows:
> > 
> > - if the device uses INTx, we just ignore the request
> > - if the device has a per-vq vector, we force the affinity hint
> > - if the virtqueues share an MSI vector, we OR together all requested
> >   affinities
> > 
> > Signed-off-by: Jason Wang <jasowang@xxxxxxxxxx>
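
[A minimal sketch, not part of the patch: how a multiqueue driver might
consume such an API, assuming the RFC's virtqueue_set_affinity(vq, cpu)
signature and a plain round-robin spread over the online CPUs.

#include <linux/cpumask.h>
#include <linux/virtio.h>
#include <linux/virtio_config.h>

/* Hypothetical helper: spread nvqs virtqueues over the online CPUs. */
static void example_spread_vq_affinity(struct virtqueue **vqs, int nvqs)
{
	unsigned int cpu = cpumask_first(cpu_online_mask);
	int i;

	for (i = 0; i < nvqs; i++) {
		/* Best effort: ignored for INTx, ORed for shared vectors. */
		virtqueue_set_affinity(vqs[i], cpu);
		cpu = cpumask_next(cpu, cpu_online_mask);
		if (cpu >= nr_cpu_ids)
			cpu = cpumask_first(cpu_online_mask);
	}
}]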
> 
> Hmm, I don't see any benefit from this patch; I need to use
> irq_set_affinity (which, however, is not exported) to actually bind IRQs
> to CPUs.  Example:
> 
> with irq_set_affinity_hint:
>  43:   89  107  100   97   PCI-MSI-edge   virtio0-request
>  44:  178  195  268  199   PCI-MSI-edge   virtio0-request
>  45:   97  100   97  155   PCI-MSI-edge   virtio0-request
>  46:  234  261  213  218   PCI-MSI-edge   virtio0-request
> 
> with irq_set_affinity:
>  43:  721    0    0    1   PCI-MSI-edge   virtio0-request
>  44:    0  746    0    1   PCI-MSI-edge   virtio0-request
>  45:    0    0  658    0   PCI-MSI-edge   virtio0-request
>  46:    0    0    1  547   PCI-MSI-edge   virtio0-request
> 
> I gathered these quickly after boot, but real benchmarks show the same
> behavior, and performance actually gets worse with virtio-scsi
> multiqueue+irq_set_affinity_hint than with irq_set_affinity.
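
[For reference, the two calls being compared, as a hypothetical driver
snippet rather than code from the patch: irq_set_affinity_hint() only
publishes a hint under /proc/irq/<n>/affinity_hint for userspace to act
on, while irq_set_affinity() reprograms the interrupt immediately but is
not exported to modules, which is the point above.

#include <linux/cpumask.h>
#include <linux/interrupt.h>

/* Hypothetical: pin one irq to one CPU, both ways. */
static void example_pin_irq(unsigned int irq, int cpu)
{
	/* Publishes a hint only; something in userspace must apply it. */
	irq_set_affinity_hint(irq, cpumask_of(cpu));

	/*
	 * Takes effect immediately, but the symbol is not exported,
	 * so a module cannot call it.
	 */
	irq_set_affinity(irq, cpumask_of(cpu));
}]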
> 
> I also tried adding IRQ_NO_BALANCING, but the only effect is that I
> cannot set the affinity.
> 
> The queue steering algorithm I use in virtio-scsi is extremely simple
> and based on your tx code.  See how my nice pinning is destroyed:
> 
> # taskset -c 0 dd if=/dev/sda bs=1M count=1000 of=/dev/null iflag=direct
> # cat /proc/interrupts
>  43:  2690 2709 2691 2696   PCI-MSI-edge      virtio0-request
>  44:   109  122  199  124   PCI-MSI-edge      virtio0-request
>  45:   170  183  170  237   PCI-MSI-edge      virtio0-request
>  46:   143  166  125  125   PCI-MSI-edge      virtio0-request
> 
> All my requests come from CPU#0 and thus go to the first virtqueue, but
> the interrupts are serviced all over the place.
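
[Roughly the kind of steering being described, as a sketch and not
Paolo's actual virtio-scsi code: the submitting CPU selects the request
virtqueue, so with each queue's IRQ pinned to the same CPU, completions
should come back where the requests were issued.

#include <linux/smp.h>

/* Hypothetical queue selection: requests from CPU#0 always hit queue 0. */
static unsigned int example_pick_queue(unsigned int num_queues)
{
	/* A real driver would use get_cpu()/put_cpu() if preemptible. */
	return smp_processor_id() % num_queues;
}]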
> 
> Did you set the affinity manually in your experiments, or perhaps there
> is a difference between SCSI and networking... (interrupt mitigation?)
> 
> Paolo


You need to run irqbalance in the guest to make it actually work. Do you?
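
[Applying the hint amounts to copying /proc/irq/<n>/affinity_hint into
/proc/irq/<n>/smp_affinity; an illustrative userspace sketch, not
irqbalance's actual code, of what has to happen in the guest for the
driver's hint to have any visible effect:

#include <stdio.h>

/* Copy the driver's affinity hint into the effective irq affinity. */
static int apply_affinity_hint(unsigned int irq)
{
	char path[64], mask[256];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/irq/%u/affinity_hint", irq);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (!fgets(mask, sizeof(mask), f)) {
		fclose(f);
		return -1;
	}
	fclose(f);

	snprintf(path, sizeof(path), "/proc/irq/%u/smp_affinity", irq);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fputs(mask, f);
	fclose(f);
	return 0;
}]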

