Re: kvm-on-arm guest network performance could reach only 60 percent of the native system

I did some of that testing in the past, so this may or may not apply.
I'm not sure what configuration you're running; qemu with virtio-net
over mmio? In that setup the performance properties (TSO, UFO, checksum
offloading, ... i.e. virtio_net_properties[]) don't get applied, so when
the guest's virtio-net driver probes, it doesn't discover these features.

There is a way to check for this; after applying the properties,
performance improved, but not to the level of virtio-net over PCI on
x86_64. That was on a Cortex-A15 at 1.7GHz.
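One quick way to do that check from inside the guest is ethtool. A minimal sketch, assuming the virtio-net interface is eth0 (substitute your own interface name):

```shell
# Inside the guest: list the offloads the virtio-net driver negotiated.
# If TSO/UFO/checksumming show up as "off [fixed]", the host never
# advertised those feature bits (the virtio_net_properties[] case above).
ethtool -k eth0 | grep -E 'segmentation|fragmentation|checksumming'
```

These are diagnostic commands to run against a live guest, so the output depends entirely on your setup.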

- Mario

On 08/12/2014 02:33 AM, Marc Zyngier wrote:
> [adding the KVM/ARM mailing list instead of the "user" one...]
> 
> On 12/08/14 03:36, duqi wrote:
>> Hi  everyone,
>>
>> I have successfully booted a KVM guest on a Cortex-A7 platform (Cubietruck), and I have measured the guest's network performance against the native system.
>>
>> I used the following commands to run the listener and sender processes:
>> (1) nc -lk <port> >/dev/null &
>> (2) dd if=/dev/zero bs=1M count=100 | nc <listener> <port>
>>
>> I found that the guest's network performance could only reach about
>> 60 percent of the native system. For example, if the native send rate
>> is 11.80 MB/s, the KVM guest's send rate is only 6.40~7.0 MB/s; if the
>> native receive rate is 11.80 MB/s, the KVM guest's receive rate is 9.90
>> MB/s. On the other hand, the Xen guest's performance is almost the
>> same as the native system.
> 
> I'm afraid you'll have to be more explicit about your Xen setup if you
> want us to understand the issue. Are you passing the physical device
> directly to the guest? Or are you using the Xen net-{front/back}?
> 
>> I am confused about why the KVM guest's network performance dropped so much.
>> Could you give me some advice?
> 
> Well, think about what this thing is doing for just a second:
> - Write data to memory
> - Write to the virtio doorbell
> - Trap to HYP
> - Return to the host kernel
> - Return to host user space
> - Handle the virtio request
> - Write to the socket (trapping back to host kernel)
> - Talk to the HW to transmit the packet
> - Return to userspace
> - Return to the host kernel
> - Return to HYP
> - Return to the guest
> 
> Given the above, I'd say that 60% of the native speed is pretty good,
> given that you're using a CPU that only has 256kB of L2 cache and a
> rather low clock speed.
> 
> Now, things will improve once we get vhost-net up and running, and even
> better with VFIO (but of course, you'll lose the ability to share the
> device with the latter).
> 
> You could also try to add the mq parameter to your command line
> (something like --network mode=tap,trans=mmio,mq=4) in order to enable
> multi-queue, which could give you better performance as well.
> 
> Thanks,
> 
> 	M.
> 
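As a follow-up to Marc's multi-queue suggestion: once the guest is up, you can verify from inside it how many queues virtio-net actually negotiated. A minimal sketch, again assuming the guest interface is eth0:

```shell
# Inside the guest: show the channel (queue) counts the device
# advertises ("Pre-set maximums") versus what is currently enabled.
ethtool -l eth0

# If more channels are available than enabled, raise the count
# (here to 4, matching mq=4 on the host command line):
ethtool -L eth0 combined 4
```

If `ethtool -l` reports a pre-set maximum of 1, multi-queue was not negotiated and the host-side mq parameter is worth re-checking.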

_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm




