Re: Network performance with small packets

> "Michael S. Tsirkin" <mst@xxxxxxxxxx> 02/02/2011 03:11 AM
>
> On Tue, Feb 01, 2011 at 01:28:45PM -0800, Shirley Ma wrote:
> > On Tue, 2011-02-01 at 23:21 +0200, Michael S. Tsirkin wrote:
> > > Confused. We compare capacity to skb frags, no?
> > > That's sg I think ...
> >
> > The current guest kernel uses indirect buffers; num_free returns how many
> > descriptors are available, not skb frags. So the comparison is wrong here.
> >
> > Shirley
>
> I see. Good point. In other words, the buffer we complete was
> indirect, but when we add a new one we cannot allocate an indirect
> table, so we consume more descriptors than the completion freed.
> Then we start the queue and the add will fail.
> I guess we need some kind of API to figure out
> whether the buf we completed was indirect?
>
> Another failure mode is when skb_xmit_done
> wakes the queue: it might be too early, there
> might not be space for the next packet in the vq yet.
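
Such an API could be quite small. A hypothetical sketch (nothing like
this exists in virtio_ring.c today; all names below are made up just to
illustrate the idea):

	/* Hypothetical helper: reports whether the buffer most recently
	 * returned by virtqueue_get_buf() used an indirect table, i.e.
	 * freed one ring slot rather than one slot per sg entry. */
	bool virtqueue_buf_was_indirect(struct virtqueue *vq);

	/* free_old_xmit_skbs() could then credit capacity accurately: */
	static unsigned int free_old_xmit_skbs(struct virtnet_info *vi)
	{
		struct sk_buff *skb;
		unsigned int len, capacity = 0;

		while ((skb = virtqueue_get_buf(vi->svq, &len)) != NULL) {
			if (virtqueue_buf_was_indirect(vi->svq))
				capacity += 1;	/* indirect: one slot */
			else
				capacity += 2 + skb_shinfo(skb)->nr_frags;
			dev_kfree_skb_any(skb);
		}
		return capacity;
	}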

I am not sure this is the problem - if it were, shouldn't you
see these messages:
	if (likely(capacity == -ENOMEM)) {
		dev_warn(&dev->dev,
			"TX queue failure: out of memory\n");
	} else {
		dev->stats.tx_fifo_errors++;
		dev_warn(&dev->dev,
			"Unexpected TX queue failure: %d\n",
			capacity);
	}
on the next xmit? I am not seeing these in my testing.
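
For reference, the stop/restart logic around that warning looks roughly
like this (paraphrased from start_xmit() in virtio_net.c of this
vintage; details may differ in your tree):

	capacity = xmit_skb(vi, skb);
	if (unlikely(capacity < 0)) {
		/* ... the warning branch quoted above; the skb is dropped ... */
	}
	virtqueue_kick(vi->svq);
	...
	/* Stop the queue while fewer than 2 + MAX_SKB_FRAGS descriptors
	 * remain: a worst-case direct (non-indirect) skb needs that many
	 * slots.  skb_xmit_done() later wakes the queue, which is where
	 * the "too early" wakeup mentioned above could bite. */
	if (capacity < 2 + MAX_SKB_FRAGS) {
		netif_stop_queue(dev);
		if (unlikely(!virtqueue_enable_cb(vi->svq))) {
			/* More buffers completed meanwhile: reclaim, recheck. */
			capacity += free_old_xmit_skbs(vi);
			if (capacity >= 2 + MAX_SKB_FRAGS) {
				netif_start_queue(dev);
				virtqueue_disable_cb(vi->svq);
			}
		}
	}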

> A solution might be to keep some kind of pool
> around for indirect, we wanted to do it for block anyway ...

Your vhost patch should fix this automatically. Right?
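
For what it's worth, a minimal sketch of such a pool (all names are made
up for illustration; this is not an existing kernel API):

	#define IND_POOL_SIZE	64

	struct indirect_pool {
		struct vring_desc *tables[IND_POOL_SIZE];
		unsigned int top;
		spinlock_t lock;
	};

	/* Take a preallocated indirect table, or NULL if the pool is
	 * empty -- the caller then falls back to kmalloc or to direct
	 * descriptors, as today. */
	static struct vring_desc *ind_pool_get(struct indirect_pool *p)
	{
		struct vring_desc *d = NULL;

		spin_lock(&p->lock);
		if (p->top)
			d = p->tables[--p->top];
		spin_unlock(&p->lock);
		return d;
	}

	/* Return a table to the pool, freeing it if the pool is full. */
	static void ind_pool_put(struct indirect_pool *p, struct vring_desc *d)
	{
		spin_lock(&p->lock);
		if (p->top < IND_POOL_SIZE) {
			p->tables[p->top++] = d;
			spin_unlock(&p->lock);
			return;
		}
		spin_unlock(&p->lock);
		kfree(d);
	}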

Thanks,

- KK


