Re: kvm-on-arm guest network performance could reach only 60 percent of the native system

[adding the KVM/ARM mailing list instead of the "user" one...]

On 12/08/14 03:36, duqi wrote:
> Hi everyone,
> 
> I have successfully booted a KVM guest on a Cortex-A7 platform (the Cubietruck), and measured the guest's network performance against the native system.
> 
> I used the commands below to run the listener and sender processes:
> (1) nc -lk <port> >/dev/null &
> (2) dd if=/dev/zero bs=1M count=100 | nc <listener> <port>
> 
> I found that the guest's network performance could only reach about
> 60 percent of the native system's. For example, if the native send rate
> is 11.80 MB/s, the KVM guest send rate is only 6.40~7.0 MB/s; if the
> native receive rate is 11.80 MB/s, the KVM guest receive rate is
> 9.90 MB/s. On the other hand, the Xen guest's performance is almost
> the same as the native system's.

I'm afraid you'll have to be more explicit about your Xen setup if you
want us to understand the issue. Are you passing the physical device
directly to the guest? Or are you using the Xen net-{front/back}?

> I am confused as to why the KVM guest's network performance dropped so much.
> Could you give me some advice?

Well, think of what this thing is doing for just a second:
- Write data to memory
- Write to the virtio doorbell
- Trap to HYP
- Return to the host kernel
- Return to host user space
- Handle the virtio request
- Write to the socket (trapping back to host kernel)
- Talk to the HW to transmit the packet
- Return to userspace
- Return to the host kernel
- Return to HYP
- Return to the guest

Given the above, I'd say that 60% of the native speed is pretty good
for a CPU that only has 256kB of L2 cache and a rather low clock speed.
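
If you want to put a number on that, the kvm tracepoints on the host
give a rough idea of how many exits the benchmark generates. Just a
sketch, assuming your host kernel has the tracepoints compiled in and
this guest is the only thing running:

  # on the host, while the guest runs the nc/dd test
  perf stat -e kvm:kvm_exit -a sleep 10

Every one of those exits is at least a trap to HYP, and the MMIO ones
(the doorbell writes) go all the way out to userspace and back.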

Now, things will improve once we get vhost-net up and running, and even
better with VFIO (but of course, you'll lose the ability to share the
device with the latter).
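
If I remember the kvmtool parameters correctly, vhost is driven from
the same --network option, so once the ARM plumbing is there it should
look something along the lines of (untested sketch):

  --network mode=tap,trans=mmio,vhost=1

The guest side doesn't change at all; only the host end of the virtio
queues moves from userspace into the kernel.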

You could also try to add the mq parameter to your command line
(something like --network mode=tap,trans=mmio,mq=4) in order to enable
multi-queue, which could give you better performance as well.
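
Something like this (kernel/disk paths are obviously placeholders),
with a 4-vCPU guest so the queues actually have somewhere to run:

  lkvm run -c 4 -m 512 -k zImage -d rootfs.img \
      --network mode=tap,trans=mmio,mq=4

and then, in the guest, bring the extra queues online with:

  ethtool -L eth0 combined 4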

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm




