On 12/06/2011 09:15 PM, Stefan Hajnoczi wrote:
On Tue, Dec 6, 2011 at 10:21 AM, Jason Wang <jasowang@xxxxxxxxxx> wrote:
On 12/06/2011 05:18 PM, Stefan Hajnoczi wrote:
On Tue, Dec 6, 2011 at 6:33 AM, Jason Wang <jasowang@xxxxxxxxxx> wrote:
On 12/05/2011 06:55 PM, Stefan Hajnoczi wrote:
On Mon, Dec 5, 2011 at 8:59 AM, Jason Wang <jasowang@xxxxxxxxxx> wrote:
The vcpus are just threads and may not be bound to physical CPUs, so
what is the big picture here? Is the guest even in a position to
set the best queue mappings today?
I'm not sure it could publish the best mapping, but the idea is to make sure
the packets of a flow are handled by the same guest vcpu, and possibly the
same vhost thread, in order to eliminate packet reordering and lock
contention. This assumption does not take into account the bouncing of vhost
or vcpu threads, which would also affect the result.
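To make that concrete, below is a minimal, hypothetical userspace sketch of
the steering idea (it is not code from this series; the hash function, flow
key layout and queue count are illustrative assumptions): the flow's 5-tuple
is hashed once, and the hash deterministically picks a queue, so every packet
of that flow lands on the same queue and hence the same vcpu/vhost thread.

/* Hypothetical sketch of per-flow queue selection (not from this series). */
#include <stdint.h>
#include <stdio.h>

struct flow_key {
    uint32_t saddr, daddr;   /* IPv4 source/destination addresses */
    uint16_t sport, dport;   /* transport ports */
    uint8_t  proto;          /* IPPROTO_TCP, IPPROTO_UDP, ... */
};

/* One FNV-1a round over a 32-bit word, byte by byte. */
static uint32_t fnv1a_word(uint32_t h, uint32_t v)
{
    for (int i = 0; i < 4; i++)
        h = (h ^ ((v >> (8 * i)) & 0xff)) * 16777619u;
    return h;
}

/* Stable hash of the flow's 5-tuple: the same flow always hashes the same. */
static uint32_t flow_hash(const struct flow_key *k)
{
    uint32_t h = 2166136261u;
    h = fnv1a_word(h, k->saddr);
    h = fnv1a_word(h, k->daddr);
    h = fnv1a_word(h, ((uint32_t)k->sport << 16) | k->dport);
    h = fnv1a_word(h, k->proto);
    return h;
}

/* Same flow -> same queue, hence the same vcpu/vhost thread
 * (ignoring the thread bouncing mentioned above). */
static unsigned int select_queue(const struct flow_key *k, unsigned int nqueues)
{
    return flow_hash(k) % nqueues;
}

int main(void)
{
    struct flow_key f = { 0x0a000001, 0x0a000002, 12345, 80, 6 };
    printf("flow steered to queue %u of 4\n", select_queue(&f, 4));
    return 0;
}

Any stable hash would do; the only property that matters for the argument is
that the flow-to-queue mapping is deterministic, so packets of one flow are
never spread across vcpus.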
Okay, this is why I'd like to know what the big picture here is. What
solution are you proposing? How are we going to have everything from
guest application, guest kernel, host threads, and host NIC driver
play along so we get the right steering up the entire stack? I think
there needs to be an answer to that before changing virtio-net to add
any steering mechanism.
Considering the complexity of host NICs, each with their own steering
features, this series makes a first step with minimal effort to let the
guest driver and host tap/macvtap co-operate the way a physical NIC does.
There may be other methods, but performance numbers are also needed to give
the answer.
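As an illustration of the kind of guest/host cooperation meant here, the
sketch below shows a small host-side flow table (the names and layout are
made up for illustration, not the actual tun/macvtap code): when the guest
transmits a flow on a given queue, the host records that choice and steers
later receive packets of the same flow back to that queue.

/* Illustrative sketch of automatic host-side steering (hypothetical names,
 * not the tun/macvtap implementation). */
#include <stdint.h>
#include <stdio.h>

#define FLOW_TABLE_SIZE 256   /* assumed table size */

/* flow hash -> queue the guest last used for that flow; -1 means unknown */
static int flow_table[FLOW_TABLE_SIZE];

/* Transmit path: remember which queue the guest chose for this flow. */
static void flow_update(uint32_t rxhash, unsigned int txq)
{
    flow_table[rxhash % FLOW_TABLE_SIZE] = (int)txq;
}

/* Receive path: steer the packet back to the queue the guest used for this
 * flow, falling back to a plain hash spread if the flow is unknown. */
static unsigned int flow_select_rxq(uint32_t rxhash, unsigned int nqueues)
{
    int q = flow_table[rxhash % FLOW_TABLE_SIZE];
    return (q >= 0 && (unsigned int)q < nqueues) ? (unsigned int)q
                                                 : rxhash % nqueues;
}

int main(void)
{
    for (int i = 0; i < FLOW_TABLE_SIZE; i++)
        flow_table[i] = -1;

    uint32_t hash = 0xdeadbeef;           /* hash of some flow */
    printf("before TX: rxq = %u\n", flow_select_rxq(hash, 4));
    flow_update(hash, 2);                 /* guest sent this flow on queue 2 */
    printf("after  TX: rxq = %u\n", flow_select_rxq(hash, 4));
    return 0;
}

The attraction of such a table is that the guest publishes its preferred
mapping implicitly through its transmit behaviour, with no extra control-path
traffic; whether that actually wins is what the performance numbers have to
show.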
Stefan