virtualization-bounces@xxxxxxxxxxxxxxxxxxxxxxxxxx wrote:
> On Wed, 2007-07-11 at 12:28 +0200, Christian Borntraeger wrote:
>> On Wednesday, July 4, 2007, Rusty Russell wrote:
>>> +static void try_fill_recv(struct virtnet_info *vi)
>
>> Hmm, so it allocates skbs until oom or until add_buf fails, right?
>
> Yep.
>
>> Do you expect the add_buf call to fail if we have enough buffers? Who
>> defines the amount of buffers we can add via add_buf?
>
> There will be some internal limit on how many buffers the
> virtio implementation supports, but that depends on the
> implementation. It could be a number of buffers or a total number of
> descriptors.

I think one of the key tradeoffs here is simplicity versus flexibility. A QP-style interface is very explicit about the size of queues (requested and actual), the size of queue entries (requested maximum and actual maximum), current depths, etc. My reading of the proposed interface is that it is much, much simpler than all that. This takes flexibility away from the application, which in theory *could* have adapted its behavior to different queue sizes, and gives it to the specific implementation (which can now do things like variable-length work queue entries if it wants to).

The one thing I'd recommend avoiding is starting with a simple interface and then tacking on one bell or whistle at a time. We should either keep it simple, or shift to a QP-style interface and lean on all the research and negotiation over which attributes are needed that has already taken place for RDMA-related networking interfaces. It is also a pain for implementations to have to deal with multiple *detailed* queue interface requirements that all try to accomplish the same thing but insist on doing it differently. If one interface leaves things vague, you have the flexibility to re-use what another interface forced you to do.
But unless there are *specific* consumers who say they want specific bells and whistles, my hunch would be to stick with the simple interface. Which translates as: "if you need to add a buffer, add it, let us worry about it; if you've gone too far we'll tell you."

_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linux-foundation.org/mailman/listinfo/virtualization