On Thu, 6 Jun 2002, Don Cohen wrote:
> Alexander Atanasov writes:
> > > I don't see why that should be the case. And I don't recall ever
> > > observing it. This adaptation time should be practically zero.
> > > There's no work in making the queues equal.
> >
> > 	When queues get full sfq has to drop packets, and it must
> > send less (delay) or drop on the faster flows, which get bigger
> > queues, to slow them down. That's the work I mean. And it really
> > depends on the subflow depth.
>
> If the queue is "full" (currently measured in number of packets
> allowed) then a packet is dropped from the longest subqueue.
> That does not necessarily equalize the subqueue lengths. You
> can still have queues of wildly differing lengths.

	In the long term, always dropping from the largest subqueue gives
you equal subqueues.

> > > (Let's use the word queue to mean the whole SFQ and subqueue for the
> > > part sharing a hash index.)
> > > If you have, say, 100 packets in one subqueue and 10 in another they're
> > > already sharing the bandwidth 50-50.
> >
> > 	No, that means that the flow with 100 packets is using about
> > 10 times more bandwidth. To be fair, both must have length around 50.
>
> Again, no. It's perfectly possible for one queue to always have
> length in range 99-101 and another to always have length 9-11 and
> still be serving them both at the same rate. Just imagine that one
> client sends 100 packets at once and the other sends 10 at once and
> then they each send one per second and we forward one per second.
> Same rate (suppose all packets are same length), different queue
> lengths. In fact SFQ serves them at the same rate no matter how long
> they are. Note - serving in this case means sending, not receiving.

	Okay, we said that SFQ drops from the longest subqueue.
We start with:
	q1 - 100
	q2 - 10
We enqueue (receive) 2 packets/sec and dequeue (send) 1 packet/sec
(round robin from q1 and q2).
q1 will grow to the limit first, so SFQ will start to drop there; it
stays the longest queue until q2 gets equal to q1. In time, things
should go like this: after 56 seconds (if the limit is 128 packets) q1
hits the limit and SFQ starts to drop. At that point q2 holds just 38
packets, and it grows until it gets equal to q1; then drops happen
round robin. I agree that the rate is the same, 0.5 packets/sec. But
the initial 100 packets are a burst which SFQ delays (buffers), since
it fits fine within the queue limit. If q1 receives another 100-packet
burst, most of it will get dropped.

> > 	I agree that the goal is to serve the same bandwidth, but
>
> Not only is this the goal, it's what SFQ does.
>
> > bandwidth is served with packets which carry a data of X bytes up to MTU,
> > so i consider packets as the base unit.
>
> Actually, SFQ serves the same rate in bytes/sec, not in packets/sec.
> If one flow uses only 1000byte packets and another only 100 byte
> packets, the second will get 10 times as many packets/sec as the first.
> It's only for measuring the length of the queue and subqueues that SFQ
> counts packets. It would be more accurate in some sense to measure
> this in bytes as well. But if you're using a lot of packets then you
> really ought to use large ones.

	I didn't say that it serves packets/sec; I said that the packets
I speak of are equal in size, so rate = packets/sec = bytes/sec. SFQ
gives a subqueue an allotment of bytes to send, not packets. But when a
subqueue has allotment = 1 and a packet with len = 800 at its head, the
packet is sent anyway and the allotment is refilled, so the subqueue
sends more than it should. Measuring queue length in bytes can make
this even less accurate, since the enqueued packets have different
lengths and you cannot split a packet apart and dequeue in bytes (sigh,
wouldn't it be nice?); counting queues in packets (exactly what they
hold) is better. Maybe doing it like CBQ, with an average packet size,
could give us better results.
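The scenario traced above (q1 = 100, q2 = 10, one packet/sec arriving on
each flow, one packet/sec dequeued round robin, drop-from-longest at a
128-packet limit) can be checked with a few lines of Python. This is a
toy sketch, not the kernel's SFQ code; the one-second step granularity,
the tie-breaking, and the exact drop point are assumptions of the
sketch, chosen to match the arithmetic in the mail:

```python
# Toy model: one loop iteration = one second.
LIMIT = 128
q = {"q1": 100, "q2": 10}        # initial backlogs from the example
sent = {"q1": 0, "q2": 0}        # packets actually forwarded per flow
names = ["q1", "q2"]
at_56 = None

for t in range(1, 301):
    for n in names:                        # each flow receives 1 packet/sec
        q[n] += 1
    while max(q.values()) > LIMIT:         # drop from the longest subqueue
        longest = max(names, key=lambda n: q[n])
        q[longest] -= 1
    out = names[(t - 1) % 2]               # round-robin dequeue, 1 packet/sec
    q[out] -= 1
    sent[out] += 1
    if t == 56:
        at_56 = dict(q)                    # the moment q1 first hits the limit

print(at_56)   # {'q1': 128, 'q2': 38}: matches the mail's arithmetic
print(q)       # by t=300 the backlogs have equalised to within one packet
print(sent)    # both flows were served the same 150 packets in 300 s
```

So both points from the thread show up at once: the service rate is
equal from the very first second, while the backlogs only equalise
slowly, through drops from the longest subqueue.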
[cut]

> > > Queue length depends on how many packets each download is willing to
> > > send without an ack. If one is willing to send 100 and the other is
> > > willing to send 10, then the subqueues will likely be length 100 and
> > > 10, but each will still get the same bandwidth. Without window
> >
> > 	That's true if you can send 110 packets and not exceed your
> > bandwidth; trouble comes when you can send 100: how to deal with
> > the 10 that exceed it.
>
> SFQ is not trying to control incoming bandwidth, just outgoing.
> The sending 100 packets above is some other machine, X, sending to the
> machine with SFQ, Y. For Y these packets are incoming. We don't
> control that. We only control the rate at which Y forwards them.
> So X sends 100 packets. Y gets 100 packets. If they all go into
> our SFQ and that is limited to 10 packets/sec then those packets are
> going to be in the queue for a while. In the mean while, Z sends 10
> packets to Y. Now we have two subqueues. In the next second, each
> will get to send 5 packets. So now we have queues of length 95 and 5.

	We are sending from the queue, not controlling incoming
bandwidth. Yes, see above.

> > > scaling the max window size is 64K which is only about 45 packets,
> > > so it's not really normal to have 100 packets in a subqueue.
> >
> > 	It's normal if you have congested links, it's not good
>
> I don't think this is really related to congested links.
> Well behaved (as in TCP) traffic should not be creating big backlogs
> no matter how fast or slow or congested the links are.

	It all started with the so-called download accelerators, which do
parallel GETs and make TCP behave badly. In the normal case TCP adjusts
fast and does not create backlogs. But when you have to change a flow's
bandwidth, you have to create a backlog to slow it down. It then clears
fast.

> > in anyway :( But that's why qos is to avoid and handle this.
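The 95-and-5 numbers in the quoted example are easy to verify: with a
10 packet/sec output rate shared round robin, each backlogged subqueue
loses 5 packets in the first second regardless of its length. A minimal
sketch (the per-slot alternation is an assumption of the sketch, not
the kernel's scheduling code):

```python
# Round-robin dequeue at 10 packets/sec over two subqueues that start
# at 100 packets (from X) and 10 packets (from Z). Backlog size does
# not affect the per-flow service rate while both stay backlogged.
queues = [100, 10]
for slot in range(10):        # 10 dequeue opportunities in one second
    i = slot % 2              # alternate between the two subqueues
    if queues[i] > 0:
        queues[i] -= 1
print(queues)  # [95, 5]: each flow sent 5 packets despite 100 vs 10 backlogs
```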
> > 	You have 45 packets per one flow, but with 3 hash collisions you
> > get above the 128-packet limit and drop. esfq solves this: you can
> > tune the number of subflows and their lengths.
>
> I still don't see what esfq is doing that is supposed to help.
> More subqueues does reduce the probability of collisions, but this
> seems unlikely to be a problem. Limiting the size of a subqueue
> below the size of the entire queue doesn't do any good as far as
> I can see.

	Collisions are a problem, believe me; that's why there is a
perturbation to reduce them. Controlling the number of subqueues also
makes it easier to arrange link sharing. Limiting may become useful if
made dynamic. People wanted options to tune the hash size, limits, and
depths, and the most wanted (needed) option was to select the hash
type; a SRC/DST port hash is in the TODO now. Searching gives answers
like "Patch SFQ", but that just didn't work for me: I need to have
different SFQs on different classes/interfaces. So that's why there is
an esfq. And I hope it really helps.

--
have fun,
alex

_______________________________________________
LARTC mailing list / LARTC@mailman.ds9a.nl
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
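[Editor's note on the collision point in this exchange: a quick Monte
Carlo estimate shows why more hash buckets help. With ~45 packets per
flow, a 3-way collision (3 x 45 > 128) is what pushes a subqueue past
the limit. The flow and bucket counts below are illustrative only, not
SFQ's or esfq's actual divisor settings:]

```python
import random

def p_three_way(flows, buckets, trials=20000, seed=1):
    """Estimate P(some bucket receives >= 3 flows) under random hashing."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        counts = [0] * buckets
        for _ in range(flows):
            counts[rng.randrange(buckets)] += 1
        if max(counts) >= 3:
            hits += 1
    return hits / trials

# 30 concurrent flows: a 3-way collision is far more likely with few
# buckets than with many, which is the case for a tunable hash divisor.
print(p_three_way(30, 128))    # roughly 0.2
print(p_three_way(30, 1024))   # well under 0.01
```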