On 11/28/2011 01:23 AM, Michael S. Tsirkin wrote:
On Fri, Nov 25, 2011 at 01:35:52AM -0500, David Miller wrote:
From: Krishna Kumar2 <krkumar2@xxxxxxxxxx>
Date: Fri, 25 Nov 2011 09:39:11 +0530
Jason Wang <jasowang@xxxxxxxxxx> wrote on 11/25/2011 08:51:57 AM:
My description was not clear again :(
I mean the same vhost thread:
vhost thread #0 transmits packets of flow A on processor M
...
vhost thread #0 moves to another processor N and starts to transmit
packets of flow A
Thanks for clarifying. Yes, binding vhosts to CPUs
makes the incoming packet go to the same vhost each
time. BTW, are you doing any binding and/or irqbalance
when you run your tests? I am not running either at
this time, but thought both might be useful.
So are we going with this patch or are we saying that vhost binding
is a requirement?
I think it's a good idea to make sure we understand the problem
root cause well before applying the patch. We still
have a bit of time before 3.2. In particular, why does
the vhost thread bounce between CPUs so much?
Other than this, since we cannot assume the behavior of the underlying
NIC, using rxhash to identify a flow is a more generic approach.
Long term it seems the best way is to expose the preferred mapping
from the guest and forward it to the device.
I was working on this and hope to post it soon.
_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization