On Tue, May 24, 2011 at 06:20:35PM +0530, Krishna Kumar2 wrote:
> "Michael S. Tsirkin" <mst@xxxxxxxxxx> wrote on 05/24/2011 04:59:39 PM:
>
> > > > > Maybe Rusty means it is a simpler model to free the amount
> > > > > of space that this xmit needs. We will still fail anyway
> > > > > at some time, but it is unlikely, since an earlier iteration
> > > > > freed up at least the space that it was going to use.
> > > >
> > > > Not sure I understand. We can't know space was freed in the previous
> > > > iteration, as the buffers might not have been used by then.
> > >
> > > Yes, the first few iterations may not have freed up space, but
> > > later ones should. The amount of free space should increase
> > > from then on, especially since we try to free double of what
> > > we consume.
> >
> > Hmm. That is only an upper limit on the number of entries in the queue.
> > Assume that the vq size is 4 and we transmit 4 entries without
> > getting anything back in the used ring. The next transmit will fail.
> >
> > So I don't really see why it's unlikely that we reach the packet
> > drop code with your patch.
>
> I was assuming 256 entries :) I will try to get some
> numbers tomorrow to see how often this is true.

That would depend on how fast the hypervisor is. Try doing something to
make the hypervisor slower than the guest. I don't think we need
measurements to realize that, with the host slower than the guest, this
would happen a lot, though.

> > > I am not sure why it was changed, since returning TX_BUSY
> > > seems more efficient IMHO. qdisc_restart() handles requeued
> > > packets much better than a stopped queue, as a significant
> > > part of this code is skipped if gso_skb is present.
> >
> > I think this is the argument:
> > http://www.mail-archive.com/virtualization@xxxxxxxxxxxx
> > foundation.org/msg06364.html
>
> Thanks for digging up that thread! Yes, that one skb would get
> sent first, ahead of possibly higher-priority skbs. However,
> from a performance point of view, the TX_BUSY path skips a lot of
> checks and code for all subsequent packets until the device is
> restarted. I can test performance with both cases and report
> what I find (the requeue code has become very simple and clean,
> from "horribly complex", thanks to Herbert and Dave).

Cc Herbert, and try to convince him :)

> > > (qdisc
> > > will eventually start dropping packets when tx_queue_len is
> >
> > tx_queue_len is a pretty large buffer, so maybe not.
>
> I remember seeing tons of drops (pfifo_fast_enqueue) when
> xmit returns TX_BUSY.
>
> > I think the packet drops from the scheduler queue can also be
> > done intelligently (e.g. with CHOKe), which should
> > work better than dropping a random packet?
>
> I am not sure of that - choke_enqueue checks the current skb against
> a random skb before dropping it, and only during congestion. But for
> my "sample driver xmit", returning TX_BUSY would still allow it
> to be used with CHOKe.
>
> thanks,
>
> - KK
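
[Editor's note: to make the ring-capacity argument above concrete, here is a
toy userspace model -- not kernel code, and not the patch under discussion.
The guest posts one buffer per tick, a slower "host" completes one used
buffer every few ticks, and we count how often the xmit path finds the ring
full. VQ_SIZE, HOST_PERIOD and TICKS are made-up parameters for illustration
only.]

/* Toy model of the ring-capacity argument: guest posts one buffer per
 * tick, host frees used buffers more slowly, count how often xmit sees
 * a full ring.  All names and rates are invented for illustration.
 */
#include <stdio.h>

#define VQ_SIZE      256    /* entries in the tx virtqueue               */
#define HOST_PERIOD  3      /* host frees one used buffer every 3 ticks,
                             * i.e. the host is slower than the guest    */
#define TICKS        100000

int main(void)
{
    int in_flight = 0;      /* buffers posted but not yet in the used ring */
    long xmits = 0, full = 0;

    for (long t = 0; t < TICKS; t++) {
        /* "hypervisor": complete one entry every HOST_PERIOD ticks */
        if (t % HOST_PERIOD == 0 && in_flight > 0)
            in_flight--;

        /* "guest xmit": try to post one buffer per tick */
        xmits++;
        if (in_flight == VQ_SIZE) {
            full++;         /* ring full: stop queue / TX_BUSY / drop here */
            continue;
        }
        in_flight++;
    }

    printf("ring full on %ld of %ld xmit attempts (%.1f%%)\n",
           full, xmits, 100.0 * full / xmits);
    return 0;
}

[With these made-up rates the ring fills after a few hundred ticks, and from
then on roughly two out of three xmit attempts see a full ring -- and the
steady state is the same whether VQ_SIZE is 4 or 256, which is the point
about a host that is slower than the guest.]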