Re: 8% performance improvement by changing how tap interacts with the kernel stack

On 2014/1/28 17:41, Michael S. Tsirkin wrote:
>>> I think it's okay - IIUC this way we are processing xmit directly
>>> instead of going through softirq.
>>> Was meaning to try this - I'm glad you are looking into this.
>>>
>>> Could you please check latency results?

>> netperf UDP_RR 512
>> test model: VM->host->host
>>
>> before the change: 11108 transactions/s
>> after the change:  11480 transactions/s
>>
>> A 3% gain from this patch.


> Nice.
> What about CPU utilization?
> It's trivially easy to speed up networking by
> burning up a lot of CPU so we must make sure it's
> not doing that.
> And I think we should see some tests with TCP as well, and
> try several message sizes.


Yes, by burning more CPU we could easily get better performance,
so while testing I bound the vhost thread and the NIC interrupt to CPU1.

Before the change, CPU1 idle was 0%-1% during the test;
after the change, it was 2%-3%, i.e. the modified path uses slightly less CPU.

TCP could also gain from this, but its packet rate (pps) is lower than UDP's,
so I think the improvement would not be as obvious.
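
For background on "processing xmit directly instead of going through
softirq": the snippet below is only an illustrative sketch of the two
general ways a tun/tap-style driver can hand a packet to the kernel
stack, not the actual patch; tun_deliver_skb() and the "direct" flag
are made-up names, while netif_receive_skb() and netif_rx_ni() are the
real kernel APIs being contrasted.

/*
 * Illustration only -- not the patch under discussion.
 * tun_deliver_skb() and "direct" are hypothetical names.
 */
#include <linux/netdevice.h>
#include <linux/skbuff.h>

static void tun_deliver_skb(struct sk_buff *skb, bool direct)
{
	if (direct) {
		/*
		 * Run the full receive path right here, in the caller's
		 * context; bottom halves must be disabled around
		 * netif_receive_skb() when calling from process context.
		 * No softirq round trip is involved.
		 */
		local_bh_disable();
		netif_receive_skb(skb);
		local_bh_enable();
	} else {
		/*
		 * Queue the skb on the per-CPU backlog and raise
		 * NET_RX_SOFTIRQ; the stack processes it later in
		 * softirq context.  netif_rx_ni() is the process-context
		 * variant of netif_rx().
		 */
		netif_rx_ni(skb);
	}
}

Skipping the backlog/softirq hop keeps the packet on the submitting CPU
and removes one scheduling step, which is plausibly where the latency
and CPU-idle differences reported above come from.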
