Hi Eddie,

Sorry for the delay in responding. What follows is a first cut at a
solution. Any thoughts from others?

If t_ipi is used to schedule transmissions, then the following equation
should be applied each time the application is scheduled:

    t_ipi := max(t_ipi, t_now - RTT/2)

This never lets t_ipi fall more than 1/2 RTT behind the current time. An
application is still allowed to send packets in a small burst after an
idle period, but the size of that burst is limited to RTT/2 worth of
packets.

RTT/2 was chosen because senders can send 2*last_receive_rate in any RTT.

I am sure that this simple choice has disadvantages, such as little
bursts at the ends of idle periods. One could be more conservative and
set e.g.

    t_ipi := max(t_ipi, t_now - t_gran)

But I think RTT/2 might be OK. Implementation experience would be
preferred.

This issue is really an implementation issue. RFC 3448 Section 4.6 is
not exactly normative; it discusses one way to achieve a send rate, not
a required implementation. So in some sense the implementer is free to
choose anything reasonable.

In TFRC, t_ipi is always smaller than RTT, so RTT/2 is an upper bound. I
think it makes sense (from an implementation standpoint) to use one full
t_ipi as the upper bound. This is similar to your solution in that both
values are less than RTT, and both provide a means to stop large `packet
storms'. The reason for choosing t_ipi is that the size of the large
burst depends on the number of full t_ipi intervals that fit into the
time interval that the sender is lagging behind (I can send a detailed
derivation). But, as said, both choices are similar.

Gerrit
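For concreteness, here is a minimal Python sketch of the two clamping
rules under discussion. All names and the microsecond time units are my
own; following RFC 3448 Section 4.6, the clamp is written against a
nominal send time t_nom rather than against t_ipi itself:

```python
# Hypothetical sketch; times are integers in microseconds.

def clamp_rtt_half(t_nom, t_now, rtt):
    # RTT/2 rule: never let the nominal send time fall more than
    # half an RTT behind the current time.
    return max(t_nom, t_now - rtt // 2)

def clamp_one_ipi(t_nom, t_now, t_ipi):
    # One-t_ipi rule: allow at most one full t_ipi of lag.
    return max(t_nom, t_now - t_ipi)

def burst_after_idle(t_nom, t_now, t_ipi):
    # Packets that may be sent back to back once the sender wakes:
    # the number of full t_ipi intervals fitting into the lag.
    return (t_now - t_nom) // t_ipi
```

With, say, rtt = 200_000 us and t_ipi = 20_000 us, a sender that idled
since t_nom = 0 and wakes at t_now = 10_000_000 may burst
100_000 // 20_000 = 5 packets under the RTT/2 rule, but only 1 packet
under the one-t_ipi rule, matching the burst-size argument above.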