Re: [LARTC] SFQ buckets/extensions

Linux Advanced Routing and Traffic Control

Alexander Atanasov writes:

 > > I don't see why that should be the case.  And I don't recall ever
 > > observing it.  This adaptation time should be practically zero.
 > > There's no work in making the queues equal.
 > 
 > 	When queues get full, SFQ has to drop packets, and it must
 > send less (delay) or drop on the faster flows, which get bigger
 > queues, to slow them down. That's the work I mean. And it really
 > depends on the subflow depth.

If the queue is "full" (currently measured in number of packets
allowed) then a packet is dropped from the longest subqueue.
That does not necessarily equalize the subqueue lengths.  You
can still have queues of wildly differing lengths.
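The drop policy just described can be sketched roughly as follows (a simplified Python model for illustration, not the kernel code; the function name `enqueue` and the `limit` parameter are hypothetical):

```python
from collections import deque

def enqueue(subqueues, flow_id, packet, limit):
    """Add a packet to its flow's subqueue; if the total number of
    queued packets exceeds `limit`, drop one packet from the tail
    of the longest subqueue (the SFQ drop policy described above).
    Returns the dropped packet, or None if nothing was dropped."""
    subqueues.setdefault(flow_id, deque()).append(packet)
    if sum(len(q) for q in subqueues.values()) > limit:
        # Drop from the tail of the longest subqueue only --
        # this does NOT equalize the subqueue lengths.
        longest = max(subqueues, key=lambda f: len(subqueues[f]))
        return subqueues[longest].pop()
    return None
```

Note that after a drop the subqueues can still be very unequal (e.g. lengths 4 and 2 with a limit of 6), which is exactly the point above.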

 > > (Let's use the word queue to mean the whole SFQ and subqueue for the
 > > part sharing a hash index.)
 > > If you have, say, 100 packets in one subqueue and 10 in another they're
 > > already sharing the bandwidth 50-50. 
 > 
 > 	No, that means that the flow with 100 packets is using about
 > 10 times more bandwidth. To be fair both must have length around 50.

Again, no.  It's perfectly possible for one queue to always have
length in range 99-101 and another to always have length 9-11 and
still be serving them both at the same rate.  Just imagine that one
client sends 100 packets at once and the other sends 10 at once and
then they each send one per second and we forward one per second.
Same rate (suppose all packets are same length), different queue
lengths.  In fact SFQ serves them at the same rate no matter how long
they are.  Note - serving in this case means sending, not receiving.
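A toy simulation makes the point: round-robin service gives both subqueues the same send rate no matter how unequal their lengths are (this is an illustrative model, not the SFQ implementation):

```python
from collections import deque

# Flow A starts with a 100-packet backlog, flow B with 10.
a = deque(range(100))
b = deque(range(10))
served = {'a': 0, 'b': 0}

# 10 rounds of round robin: one packet from each non-empty subqueue.
for _ in range(10):
    for name, q in (('a', a), ('b', b)):
        if q:
            q.popleft()
            served[name] += 1

# Both flows were sent at exactly the same rate (10 packets each),
# yet the remaining lengths are 90 and 0 -- never equalized.
print(served, len(a), len(b))
```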

 > 	I agree that the goal is to serve the same bandwidth, but

Not only is this the goal, it's what SFQ does.

 > bandwidth is served with packets which carry data of X bytes up to MTU,
 > so I consider packets as the base unit.

Actually, SFQ serves the same rate in bytes/sec, not in packets/sec.
If one flow uses only 1000byte packets and another only 100 byte
packets, the second will get 10 times as many packets/sec as the first.
It's only for measuring the length of the queue and subqueues that SFQ
counts packets.  It would be more accurate in some sense to measure
this in bytes as well.  But if you're using a lot of packets then you
really ought to use large ones.
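This byte-level fairness behaves like deficit round robin: each subqueue gets the same byte budget per round, so a small-packet flow sends proportionally more packets for the same bytes/sec. A sketch (the 1000-byte quantum and the flow setup are assumptions for illustration, not SFQ's actual parameters):

```python
from collections import deque

QUANTUM = 1000  # byte budget per subqueue per round (assumed value)

flows = {
    'big':   deque([1000] * 50),   # fifty 1000-byte packets
    'small': deque([100] * 500),   # five hundred 100-byte packets
}
deficit = {f: 0 for f in flows}
sent_pkts = {f: 0 for f in flows}
sent_bytes = {f: 0 for f in flows}

for _ in range(20):  # 20 scheduling rounds
    for f, q in flows.items():
        deficit[f] += QUANTUM
        # Send packets while the flow's byte budget covers them.
        while q and q[0] <= deficit[f]:
            size = q.popleft()
            deficit[f] -= size
            sent_pkts[f] += 1
            sent_bytes[f] += size

# Equal bytes for both flows, but the small-packet flow sent
# 10x as many packets.
print(sent_bytes, sent_pkts)
```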

 > (Look at the example: both ftp flows send the same-sized packets
 > with the same data at the same rate.)
 > You have:
 > 	enqueue -> put in the subflow   - len++
 > 	dequeue -> get from the subflow - len--
 > 
 > So when dequeues occur slower than enqueues, the queues fill up - packets
 > are delayed and at some point get dropped. So the problem is: when
 > your queues get full with packets, how to dequeue to be fair to the flows
 > which are identified with the hash. SFQ tries to make equal queues for the
 > flows to do this.

No, it doesn't.  It does something like round robin on the subqueues
which serves them all at the same rate.  That's independent of how
long the subqueues get.  In fact, if you had infinite memory (and
could tolerate infinite delay) you could skip the drop from the tail
of the longest queue.  (You'd have to change the current algorithm
a little bit.  The stuff that supports the tail drop wouldn't work.)

 > > Queue length depends on how many packets each download is willing to
 > > send without an ack.  If one is willing to send 100 and the other is
 > > willing to send 10, then the subqueues will likely be length 100 and
 > > 10, but each will still get the same bandwidth.  Without window
 > 
 > 	That's true if you can send 110 packets and not exceed your
 > bandwidth; trouble comes when you can send 100 - how to deal with
 > the 10 that exceed it.

SFQ is not trying to control incoming bandwidth, just outgoing.
The sending 100 packets above is some other machine, X, sending to the
machine with SFQ, Y.  For Y these packets are incoming.  We don't
control that.  We only control the rate at which Y forwards them.
So X sends 100 packets.  Y gets 100 packets.  If they all go into
our SFQ and that is limited to 10 packets/sec then those packets are
going to be in the queue for a while.  In the mean while, Z sends 10
packets to Y.  Now we have two subqueues.  In the next second, each 
will get to send 5 packets.  So now we have queues of length 95 and 5.
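The arithmetic of that example, spelled out (a trivial model of one second of round-robin service):

```python
qa, qb = 100, 10       # backlogs from X and Z
rate = 10              # Y's forwarding limit, packets/sec
per_flow = rate // 2   # round robin over the 2 active subqueues
qa -= per_flow         # each subqueue sends 5 packets this second
qb -= per_flow
print(qa, qb)          # 95 5 -- equal service, unequal backlogs
```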

 > > scaling the max window size is 64K which is only about 45 packets,
 > > so it's not really normal to have 100 packets in a subqueue.
 > 
 > 	It's normal if you have congested links, it's not good

I don't think this is really related to congested links.
Well behaved (as in TCP) traffic should not be creating big backlogs
no matter how fast or slow or congested the links are.

 > in any way :( But that's what QoS is for: to avoid and handle this.
 > You have 45 packets per flow, but with 3 hash collisions you get above
 > the 128 packet limit and drop. esfq solves this: you can tune the
 > number of subflows and their lengths.

I still don't see what esfq is doing that is supposed to help.
More subqueues does reduce the probability of collisions, but this
seems unlikely to be a problem.  Limiting the size of a subqueue
below the size of the entire queue doesn't do any good as far as
I can see.
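For what it's worth, the collision probability can be estimated with the usual birthday-problem product (taking 1024 hash buckets, the classic SFQ figure, as an assumption here):

```python
def p_collision(n_flows, buckets=1024):
    """Birthday-problem probability that at least two of n_flows
    hash into the same bucket."""
    p_clear = 1.0
    for i in range(n_flows):
        p_clear *= 1 - i / buckets
    return 1 - p_clear

# A handful of flows almost never collide; collisions only become
# likely with on the order of sqrt(buckets) concurrent flows.
print(p_collision(3))    # well under 1%
print(p_collision(100))  # very likely
```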

_______________________________________________
LARTC mailing list / LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
