On 11/25/2011 11:07 AM, Krishna Kumar2 wrote:
"Michael S. Tsirkin"<mst@xxxxxxxxxx> wrote on 11/24/2011 09:44:31 PM:
As far as I can see, ixgbe binds queues to physical cpus, so let's
consider:

vhost thread transmits packets of flow A on processor M
during packet transmission, the ixgbe driver programs the card to
deliver packets of flow A to queue/cpu M through the flow director
(see ixgbe_atr())
vhost thread then receives packets of flow A from queue M
...
vhost thread transmits packets of flow A on processor N
the ixgbe driver reprograms the flow director to change the delivery
of flow A to queue N (cpu N)
vhost thread then receives packets of flow A from queue N
...

So, for a single flow A, we may get different queue mappings over time.
Using rxhash instead may solve this issue.
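To make the race concrete, here is a minimal userspace sketch of the two
steering policies (my own illustration, not the actual ixgbe or kernel
code; the toy hash, table size and queue count are all assumptions). An
ATR-style filter records, at transmit time, "steer rx for this flow to
the transmitting cpu's queue", while a hash-based policy derives the rx
queue purely from the flow tuple:

#include <stdio.h>
#include <stdint.h>

#define NUM_QUEUES 8

struct flow {
    uint32_t saddr, daddr;
    uint16_t sport, dport;
};

/* Toy stand-in for the rxhash (real kernels use Toeplitz/Jenkins). */
static uint32_t flow_hash(const struct flow *f)
{
    uint32_t h = f->saddr ^ f->daddr;
    h ^= ((uint32_t)f->sport << 16) | f->dport;
    return h * 2654435761u;   /* multiplicative mix */
}

/* ATR-style filter table, programmed from the transmit path. */
static int atr_filter[256];

static void atr_program_on_tx(const struct flow *f, int tx_cpu)
{
    /* ixgbe_atr()-like behaviour: the rx queue follows the tx cpu. */
    atr_filter[flow_hash(f) & 0xff] = tx_cpu % NUM_QUEUES;
}

static int atr_rx_queue(const struct flow *f)
{
    return atr_filter[flow_hash(f) & 0xff];
}

/* Hash-based steering: a pure function of the flow, cpu-independent. */
static int rxhash_rx_queue(const struct flow *f)
{
    return flow_hash(f) % NUM_QUEUES;
}

int main(void)
{
    struct flow a = { 0x0a000001, 0x0a000002, 5001, 80 };

    /* vhost thread transmits flow A while running on cpu M = 2 ... */
    atr_program_on_tx(&a, 2);
    printf("after tx on cpu 2: atr queue=%d rxhash queue=%d\n",
           atr_rx_queue(&a), rxhash_rx_queue(&a));

    /* ... then gets scheduled on cpu N = 5 and transmits again. */
    atr_program_on_tx(&a, 5);
    printf("after tx on cpu 5: atr queue=%d rxhash queue=%d\n",
           atr_rx_queue(&a), rxhash_rx_queue(&a));

    /* The ATR mapping moved with the cpu; the hash mapping did not. */
    return 0;
}

Running it shows the ATR mapping flip from queue 2 to queue 5 when the
transmitting cpu changes, while the rxhash mapping stays put.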
Or better, transmit a single flow from a single vhost thread.
If packets of a single flow get spread over different CPUs,
they will get reordered and things are not going to work well.
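A hedged sketch of that idea (illustrative code, not virtio-net itself;
the names and the toy hash are assumptions): pick the txq, and hence the
serving vhost thread, as a pure function of the flow hash, the way the
guest side of virtio-net derives a txq from the packet hash:

#include <stdio.h>
#include <stdint.h>

#define NUM_TXQS 4

struct flow {
    uint32_t saddr, daddr;
    uint16_t sport, dport;
};

/* Toy flow hash; the guest would use the skb hash instead. */
static uint32_t flow_hash(const struct flow *f)
{
    uint32_t h = f->saddr ^ f->daddr;
    h ^= ((uint32_t)f->sport << 16) | f->dport;
    return h * 2654435761u;
}

/*
 * txq selection as a pure function of the flow: every packet of a
 * flow lands on the same txq, hence the same vhost thread, no matter
 * which cpu the sender is currently running on, so a single flow
 * cannot be reordered by cpu migration.
 */
static int select_txq(const struct flow *f)
{
    return flow_hash(f) % NUM_TXQS;
}

int main(void)
{
    struct flow a = { 0x0a000001, 0x0a000002, 5001, 80 };
    struct flow b = { 0x0a000001, 0x0a000003, 5002, 443 };

    printf("flow A -> txq %d\n", select_txq(&a));
    printf("flow B -> txq %d\n", select_txq(&b));
    return 0;
}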
My testing so far shows that the guest sends on (e.g.) TXQ#2
only, which is handled by vhost#2, and this doesn't change
for the entire duration of the test. The incoming queue keeps
changing for different packets but becomes the same with
this patch. To reiterate, I have not seen the following:
"
vhost thread transmits packets of flow A on processor M
...
vhost thread transmits packets of flow A on processor N
"
Yes, because the guest chose the txq of virtio-net based on the hash.
My description is not clear again :(
I mean the same vhost thread:
vhost thread #0 transmits packets of flow A on processor M
...
vhost thread #0 moves to another processor N and starts to transmit
packets of flow A
- KK