Re: KVM-on-ARM guest network performance could reach only 60 percent of the native system

On 12/08/14 10:33, Marc Zyngier wrote:
> [adding the KVM/ARM mailing list instead of the "user" one...]
> 
> On 12/08/14 03:36, duqi wrote:
>> Hi everyone,
>>
>> I have successfully booted a KVM guest on a Cortex-A7 platform (a Cubietruck), and I have measured the guest's network performance against the native system.
>>
>> I used the commands below to run the listener and sender processes:
>> (1) nc -lk <port> >/dev/null &
>> (2) dd if=/dev/zero bs=1M count=100 | nc <listener> <port>
>>
>> I found that the guest's network performance could only reach about
>> 60 percent of the native system's. For example, if the native send rate
>> is 11.80 MB/s, the KVM-guest send rate is only 6.40~7.0 MB/s; if the
>> native receive rate is 11.80 MB/s, the KVM-guest receive rate is 9.90
>> MB/s. On the other hand, the Xen-guest performance is almost the
>> same as the native system's.
> 
> I'm afraid you'll have to be more explicit about your Xen setup if you
> want us to understand the issue. Are you passing the physical device
> directly to the guest? Or are you using the Xen net-{front/back}?
> 
>> I am confused about why the KVM-guest network performance dropped so much.
>> Could you give me some advice?
> 
> Well, think of what this thing is doing for just a second:
> - Write data to memory
> - Write to the virtio doorbell
> - Trap to HYP
> - Return to the host kernel
> - Return to host user space
> - Handle the virtio request
> - Write to the socket (trapping back to host kernel)
> - Talk to the HW to transmit the packet
> - Return to userspace
> - Return to the host kernel
> - Return to HYP
> - Return to the guest
> 
> Given the above, I'd say that 60% of the native speed is pretty good,
> considering that you're using a CPU with only 256kB of L2 cache and a
> rather low clock speed.
> 
> Now, things will improve once we get vhost-net up and running, and even
> better with VFIO (but of course, you'll lose the ability to share the
> device with the latter).
> 
> You could also try to add the mq parameter to your command line
> (something like --network mode=tap,trans=mmio,mq=4) in order to enable
> multi-queue, which could give you better performance as well.

Just did this myself, on very similar HW (only 100Mbit/s Ethernet, though):

* On the host:
root@cubieboard2:~# dd if=/dev/zero bs=1M count=100 | nc approximate 34567
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 9.60296 s, 10.9 MB/s

* On the guest:
root@muffin-man:~# dd if=/dev/zero bs=1M count=100 | nc approximate 34567
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 9.74799 s, 10.8 MB/s

* Command line
./lkvm run -c2 -m 640 -k zImage-3.15-rc6 --console virtio -d
~root/debian_vexpress_cf.img -n trans=mmio,mode=tap,tapif=kvm0,mq=4 -p
"console=hvc0 root=/dev/vda1"

Given what we do, I'd say "good enough".
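
For the record, the two guest-to-native ratios can be sanity-checked with a quick awk one-liner (taking 6.9 MB/s as the midpoint of the 6.40~7.0 range reported above, which is my reading, against the 10.8/10.9 MB/s figures from my run):

```shell
# Guest-to-native throughput ratios, in percent:
#  - original report (send side): ~6.9 MB/s guest vs 11.80 MB/s native
#  - my run above:                10.8 MB/s guest vs 10.9 MB/s native
awk 'BEGIN { printf "%.0f%% %.0f%%\n", 6.9/11.8*100, 10.8/10.9*100 }'
# → 58% 99%
```

So the original setup sits at roughly 58%, while the run above is within measurement noise of native.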

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
