On Thu, Oct 14, 2010 at 01:28:58PM +0530, Krishna Kumar2 wrote:
> "Michael S. Tsirkin" <mst@xxxxxxxxxx> wrote on 10/12/2010 10:39:07 PM:
>
> > > Sorry for the delay, I was sick the last couple of days. The results
> > > with your patch are (%'s over original code):
> > >
> > > Code               BW%     CPU%     RemoteCPU
> > > MQ (#txq=16)       31.4%   38.42%   6.41%
> > > MQ+MST (#txq=16)   28.3%   18.9%    -10.77%
> > >
> > > The patch helps CPU utilization but didn't help the single-stream
> > > drop.
> > >
> > > Thanks,
> >
> > What other shared TX/RX locks are there? In your setup, is the same
> > macvtap socket structure used for RX and TX? If yes, this will create
> > cacheline bounces, as sk_wmem_alloc/sk_rmem_alloc share a cache line;
> > there might also be contention on the lock in the sk_sleep waitqueue.
> > Anything else?
>
> The patch does not introduce any new locking (in either vhost or
> virtio-net). The single-stream drop is due to different vhost threads
> handling the RX and TX traffic.
>
> I added a (fuzzy) heuristic to determine whether more than one flow
> is being used on the device; if not, vhost[0] is used for both tx
> and rx (vhost_poll_queue figures this out before waking up the
> suitable vhost thread). Testing shows that single-stream performance
> is as good as with the original code.
...
> This approach works nicely for both single and multiple streams.
> Does this look good?
>
> Thanks,
>
> - KK

Yes, but I guess it depends on the heuristic :)  What's the logic?

-- 
MST
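
[For readers following along: the actual heuristic was not posted in this
message, but below is a minimal, purely illustrative sketch of one way a
"single flow falls back to vhost[0]" decision could be made before waking a
vhost thread. Every name in it (flow_ctx, single_flow, pick_vhost_thread,
the window size) is made up for illustration and is not from the patch.]

/*
 * Illustrative sketch only -- NOT the posted patch.  Tracks whether the
 * device has effectively seen a single flow recently; if so, both tx and
 * rx work are routed to vhost thread 0 so they share one CPU/cache.
 */
#include <stdbool.h>

#define MAX_VHOST_THREADS 16

struct flow_ctx {
	unsigned int last_flow;   /* last tx queue/flow seen               */
	unsigned int switches;    /* how often the active flow changed     */
	unsigned long window;     /* packets counted in the current window */
};

static struct flow_ctx fc;	/* stand-in for per-device state */

/* Fuzzy check: did traffic in the recent window come from one flow only? */
static bool single_flow(unsigned int flow)
{
	if (flow != fc.last_flow) {
		fc.switches++;
		fc.last_flow = flow;
	}
	if (++fc.window >= 1024) {	/* arbitrary decay window */
		bool single = (fc.switches <= 1);

		fc.switches = 0;
		fc.window = 0;
		return single;
	}
	return fc.switches <= 1;
}

/*
 * Pick the vhost thread to wake: thread 0 for both tx and rx while only
 * one flow is active, otherwise the per-queue thread.
 */
static unsigned int pick_vhost_thread(unsigned int flow)
{
	if (single_flow(flow))
		return 0;
	return flow % MAX_VHOST_THREADS;
}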