On Wed, Jan 08, 2014 at 10:09:47AM -0800, Eric Dumazet wrote:
> On Wed, 2014-01-08 at 19:21 +0200, Michael S. Tsirkin wrote:
>
> > Basically yes, we could start dropping packets immediately
> > once GFP_ATOMIC allocations fail and repost the buffer to host,
> > and hope memory is available by the time we get the next interrupt.
>
> > But we wanted the host to have visibility into the fact that
> > we are out of memory and packets are dropped, so we did not want
> > to repost.
>
> bufferbloat alert :)

I guess you are saying we never need to signal to the host/device that
we are out of memory; it's enough that packets are dropped?
It seemed like a useful thing for the hypervisor to know about on
general principles, even though I don't think kvm uses this info at
this point.

> > If we don't repost, how do we know memory is finally available?
> > We went for a timer-based workqueue thing.
> > What do you suggest?
>
> In normal networking land, when a host A sends frames to host B,
> nothing prevents A from pausing the traffic to B if B is dropping
> packets under stress.
>
> A physical NIC does not use a workqueue to refill its RX queue; it
> uses the following strategy:
>
> 0) Pre-fill the RX ring buffer with N frames. This can use GFP_KERNEL
>    allocations with all the needed (sleep/retry/shout) logic...
> 1) An IRQ is handled.
> 2) Can we allocate a new buffer (GFP_ATOMIC)?
>    If yes, we accept the frame and post the new buffer for the
>    'next frame'.
>    If no, we drop the frame and recycle the memory for the next
>    round.

Exactly, this is what I tried to describe in the part that you
snipped out, but note that this means the queue is always full.

Also, I wonder whether allocating before passing the frame to the
stack might slow us down a tiny bit, e.g. if an application is
polling this socket on another CPU.

Maybe a slightly better strategy is to do the above only when queue
depth is running low: e.g. when the queue is 3/4 empty, try
allocating before giving frames to the net core, and recycle buffers
on error (see the sketches below). Not sure how much of a win this
is.

--
MST
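For concreteness, here is a minimal userspace sketch of the refill
strategy Eric describes: allocate the replacement buffer before
accepting a frame, and on allocation failure drop the frame and recycle
its buffer so the ring never drains. This is not actual driver code;
the ring layout and the names (ring_prefill, rx_one, process_frame) are
made up for illustration, and malloc() stands in for a GFP_ATOMIC
allocation.

#include <stdlib.h>

#define RING_SIZE 256
#define BUF_SIZE  2048

struct ring {
	void *buf[RING_SIZE];	/* buffers currently posted to the device */
	unsigned long dropped;	/* frames dropped for lack of memory */
};

/* Stand-in for netif_receive_skb(): consume and free the frame. */
static void process_frame(void *data)
{
	free(data);
}

/* Step 0: pre-fill the ring.  In a real driver this runs at setup time
 * and can use GFP_KERNEL with whatever sleep/retry logic is needed. */
static int ring_prefill(struct ring *r)
{
	r->dropped = 0;
	for (int i = 0; i < RING_SIZE; i++) {
		r->buf[i] = malloc(BUF_SIZE);
		if (!r->buf[i])
			return -1;
	}
	return 0;
}

/* Steps 1-2: handle one received frame sitting in slot i. */
static void rx_one(struct ring *r, int i)
{
	void *fresh = malloc(BUF_SIZE);	/* the GFP_ATOMIC attempt */

	if (!fresh) {
		/* No memory: drop the frame and recycle its buffer.
		 * The slot stays posted, so the ring never runs empty. */
		r->dropped++;
		return;
	}
	/* Replacement secured: hand the filled buffer to the stack and
	 * post the fresh one for the next frame. */
	process_frame(r->buf[i]);
	r->buf[i] = fresh;
}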
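And a sketch of the watermark variant suggested above, reusing the
definitions from the previous sketch: pass the frame up first while the
ring still has plenty of buffers, and fall back to allocate-before-
accept only once the ring is 3/4 empty. Again, the names and the
watermark choice are illustrative assumptions, not a measured design.

#define LOW_WATERMARK (RING_SIZE / 4)	/* ring 3/4 empty */

static void rx_one_watermark(struct ring *r, int i, unsigned int *posted)
{
	if (*posted > LOW_WATERMARK) {
		/* Common case: plenty of buffers still posted.  Give the
		 * frame to the stack first, then refill; a failed refill
		 * just shrinks the posted count a little, and the empty
		 * slot is left for a later refill pass. */
		process_frame(r->buf[i]);
		r->buf[i] = malloc(BUF_SIZE);
		if (!r->buf[i])
			(*posted)--;
		return;
	}
	/* Ring is running low: switch to allocate-before-accept so it
	 * can never drain completely. */
	rx_one(r, i);
}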