On Wed, May 25, 2011 at 10:58:26AM +0930, Rusty Russell wrote:
> On Mon, 23 May 2011 14:19:00 +0300, "Michael S. Tsirkin" <mst@xxxxxxxxxx> wrote:
> > On Mon, May 23, 2011 at 11:37:15AM +0930, Rusty Russell wrote:
> > > Can we hit problems with OOM? Sure, but no worse than now...
> > > The problem is that this "virtqueue_get_capacity()" returns the worst
> > > case, not the normal case. So using it is deceptive.
> > >
> >
> > Maybe just document this?
>
> Yes, but also by renaming virtqueue_get_capacity(). Takes it from a 3
> to a 6 on the API hard-to-misuse scale.
>
> How about, virtqueue_min_capacity()? Makes the reader realize something
> weird is going on.

Absolutely. Great idea.

> > I still believe capacity really needs to be decided
> > at the virtqueue level, not in the driver.
> > E.g. with indirect each skb uses a single entry: freeing
> > 1 small skb is always enough to have space for a large one.
> >
> > I do understand how it seems a waste to leave direct space
> > in the ring while we might in practice have space
> > due to indirect. Didn't come up with a nice way to
> > solve this yet - but 'no worse than now :)'
>
> Agreed.
>
> > > > I just wanted to localize the 2+MAX_SKB_FRAGS logic that tries to make
> > > > sure we have enough space in the buffer. Another way to do
> > > > that is with a define :).
> > >
> > > To do this properly, we should really be using the actual number of sg
> > > elements needed, but we'd have to do most of xmit_skb beforehand so we
> > > know how many.
> > >
> > > Cheers,
> > > Rusty.
> >
> > Maybe I'm confused here. The problem isn't the failing
> > add_buf for the given skb IIUC. What we are trying to do here is stop
> > the queue *before xmit_skb fails*. We can't look at the
> > number of fragments in the current skb - the next one can be
> > much larger. That's why we check capacity after xmit_skb,
> > not before it, right?
>
> No, I was confused... More coffee!
>
> Thanks,
> Rusty.
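For context, here is a minimal sketch of the TX-path pattern the thread is discussing, not the actual patch: queue the current skb first, then stop the queue once the worst-case free space drops below 2 + MAX_SKB_FRAGS, so the *next* xmit_skb() cannot fail. virtqueue_min_capacity() is the name proposed above; xmit_skb(), free_old_xmit_skbs(), struct virtnet_info and its svq field are assumed to be virtio_net driver internals of that era, and the error handling around them is illustrative only.

/* Sketch of the "check worst-case capacity after xmit_skb" pattern. */
static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct virtnet_info *vi = netdev_priv(dev);

	/* Reclaim descriptors from already-completed transmissions. */
	free_old_xmit_skbs(vi);

	/* Add this skb to the ring; with indirect descriptors it uses one entry. */
	if (unlikely(xmit_skb(vi, skb) < 0)) {
		/* Should be rare: the queue is stopped early below. */
		dev->stats.tx_dropped++;
		kfree_skb(skb);
		return NETDEV_TX_OK;
	}
	virtqueue_kick(vi->svq);

	/*
	 * The next skb may need up to 2 + MAX_SKB_FRAGS descriptors
	 * (virtio header, linear data, fragments), so compare the
	 * worst-case capacity against that, not the current skb's size.
	 */
	if (virtqueue_min_capacity(vi->svq) < 2 + MAX_SKB_FRAGS) {
		netif_stop_queue(dev);
		if (unlikely(!virtqueue_enable_cb(vi->svq))) {
			/* Completions raced with us: reclaim and recheck. */
			free_old_xmit_skbs(vi);
			if (virtqueue_min_capacity(vi->svq) >= 2 + MAX_SKB_FRAGS) {
				netif_start_queue(dev);
				virtqueue_disable_cb(vi->svq);
			}
		}
	}

	return NETDEV_TX_OK;
}

This mirrors Michael's point above: the check uses the worst case after queuing, because the fragment count of the skb just sent says nothing about how large the next one will be.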