Re: [Qemu-devel] Re: [PATCH 2/3] virtio-pci: Use ioeventfd for virtqueue notify

On Wed, Dec 1, 2010 at 12:30 PM, Avi Kivity <avi@xxxxxxxxxx> wrote:
> On 12/01/2010 01:44 PM, Stefan Hajnoczi wrote:
>>
>> >>
>> >>  And, what about efficiency?  As in bits/cycle?
>> >
>> >  We are running benchmarks with this latest patch and will report
>> > results.
>>
>> Full results here (thanks to Khoa Huynh):
>>
>> http://wiki.qemu.org/Features/VirtioIoeventfd
>>
>> The host CPU utilization is scaled to 16 CPUs so a 2-3% reduction is
>> actually in the 32-48% range for a single CPU.
>>
>> The guest CPU utilization numbers include an efficiency metric: %vcpu
>> per MB/sec.  Here we see significant improvements too.  Guests that
>> previously couldn't get more CPU work done now have regained some
>> breathing space.
>
> Thanks for those numbers.  The guest improvements were expected, but the
> host numbers surprised me.  Do you have an explanation as to why total host
> load should decrease?

The first vcpu does the virtqueue kick and holds the guest driver's
vblk->lock across the kick.  Before the kick completes, a second vcpu
tries to acquire vblk->lock, finds it contended, and spins.  So
we're burning CPU due to the long vblk->lock hold times.

With virtio-ioeventfd those kick times are reduced and there is less
contention on vblk->lock.

Stefan
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

