Re: The status about vhost-net on kvm-arm?

On 2014/8/13 17:10, Nikolay Nikolaev wrote:
> On Tue, Aug 12, 2014 at 6:47 PM, Nikolay Nikolaev
> <n.nikolaev@xxxxxxxxxxxxxxxxxxxxxx> wrote:
>>
>> Hello,
>>
>>
>> On Tue, Aug 12, 2014 at 5:41 AM, Li Liu <john.liuli@xxxxxxxxxx> wrote:
>>>
>>> Hi all,
>>>
>>> Can anyone tell me the current status of vhost-net on kvm-arm?
>>>
>>> Half a year has passed since Isa Ansharullah asked this question:
>>> http://www.spinics.net/lists/kvm-arm/msg08152.html
>>>
>>> I have found two patches which provide kvm-arm support for
>>> eventfd and irqfd:
>>>
>>> 1) [RFC PATCH 0/4] ARM: KVM: Enable the ioeventfd capability of KVM on ARM
>>> http://lists.gnu.org/archive/html/qemu-devel/2014-01/msg01770.html
>>>
>>> 2) [RFC,v3] ARM: KVM: add irqfd and irq routing support
>>> https://patches.linaro.org/32261/
>>>
>>> And there's a rough patch from Ying-Shiuan Pan adding ioeventfd support to qemu:
>>>
>>> [Qemu-devel] [PATCH 0/4] ioeventfd support for virtio-mmio
>>> https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
>>>
>>> But there are no comments on this patch yet, and I can find nothing about
>>> qemu support for irqfd. Have I lost track?
>>>
>>> If nobody is working on it, we plan to complete irqfd and multiqueue
>>> support for virtio-mmio.
>>>
>>>
>>
>> We at Virtual Open Systems did some work and tested vhost-net on ARM
>> back in March.
>> The setup was based on:
>>  - host kernel with our ioeventfd patches:
>>    http://www.spinics.net/lists/kvm-arm/msg08413.html
>>  - qemu with the aforementioned patches from Ying-Shiuan Pan:
>>    https://lists.gnu.org/archive/html/qemu-devel/2014-02/msg00715.html
>>
>> The testbed was an ARM Chromebook with Exynos 5250, using a 1Gbps USB3
>> Ethernet adapter connected to a 1Gbps switch. I can't find the actual
>> numbers, but I remember that with multiple streams the gain was clearly
>> visible. Note that it used the minimum required ioeventfd implementation
>> and not irqfd.
>>
>> I guess it is feasible that it can all be put together and rebased on
>> top of the recent irqfd work. One could achieve even better performance
>> (because of irqfd).
>>
> 
> Managed to replicate the setup with the old versions we used in March:
> 
> Single stream from another machine to the Chromebook with the 1Gbps USB3
> Ethernet adapter:
> iperf -c <address> -P 1 -i 1 -p 5001 -f k -t 10
> to HOST: 858316 Kbits/sec
> to GUEST: 761563 Kbits/sec
> 
> 10 parallel streams
> iperf -c <address> -P 10 -i 1 -p 5001 -f k -t 10
> to HOST: 842420 Kbits/sec
> to GUEST: 625144 Kbits/sec
> 

Thanks for your work. Would it be convenient for you to test the same cases
without vhost=on? Then the results would clearly show the performance
improvement that comes from ioeventfd alone.
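For reference, here is roughly the comparison I have in mind, assuming a
tap backend and the ioeventfd-patched qemu from this thread; the machine
type, kernel and tap names below are only placeholders for whatever your
setup actually uses:

  # the run you already measured: vhost-net enabled
  qemu-system-arm -M vexpress-a15 -m 1024 -kernel zImage \
    -netdev tap,id=net0,ifname=tap0,script=no,vhost=on \
    -device virtio-net-device,netdev=net0 ...

  # same guest, vhost disabled, so only ioeventfd is exercised
  qemu-system-arm -M vexpress-a15 -m 1024 -kernel zImage \
    -netdev tap,id=net0,ifname=tap0,script=no,vhost=off \
    -device virtio-net-device,netdev=net0 ...

and then the same iperf runs against the guest in both cases.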

I will try to test it on a Hisilicon board; that work is ongoing.

Best regards

Li

>>
>> regards,
>> Nikolay Nikolaev
>> Virtual Open Systems




