- X_inst is always calculated using MSS, as the spec says.
- t_ipi is calculated using whatever the app is using for the packet
size variable "s", as the spec says. This might be MSS (see the
sketch below).
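Concretely, the distinction is something like this, as a minimal C
sketch (the function names are mine, not the spec's, and I'm assuming
the initial case where X_inst = W_init/R, with rates in bytes per
second):

    /* Sketch only: the allowed rate is computed from the MSS-based
     * window, while the packet spacing uses the application's "s". */

    /* Initial allowed rate, X_inst = W_init/R, in bytes per second. */
    double initial_x_inst(double w_init_bytes, double rtt_sec)
    {
        return w_init_bytes / rtt_sec;
    }

    /* Inter-packet interval, t_ipi = s/X_inst, in seconds. */
    double t_ipi(double s_bytes, double x_bytes_per_sec)
    {
        return s_bytes / x_bytes_per_sec;
    }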
Do you mean when X_inst = W_init/R?
OK, I was wrong AND misunderstood you. Let me try harder.
RFC 4342 has this to say about initial sending rates.
Translating this to the packet-based congestion control of CCID 3,
the initial CCID 3 sending rate is allowed to be at least two packets
per RTT, and at most four packets per RTT, depending on the packet
size. The initial rate is only allowed to be three or four packets
per RTT when, in terms of segment size, that translates to at most
4380 bytes per RTT.
The formula you gave, min(4*MSS, max(2*MSS, 4380))/R, is not actually
recommended by RFC 4342. That equation comes from a discussion of what
RFC 3390 contains, and RFC 3390 is about TCP. The normative text above,
which actually defines what CCID 3 senders should do, is consistent
with that formula but does not mandate it, as you point out.
One implementation choice would be, as you recommend, to set W_init to
min(4*s, max(2*s, 4380)).
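As a sketch, with a hypothetical function name (and "s" the packet
size in bytes, as in RFC 3448), that choice is just:

    /* Hypothetical helper: W_init = min(4*s, max(2*s, 4380)) bytes. */
    unsigned int w_init(unsigned int s)
    {
        unsigned int m = (2 * s > 4380) ? 2 * s : 4380;  /* max(2*s, 4380) */
        return (4 * s < m) ? 4 * s : m;                  /* min(4*s, m) */
    }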
Here's how to alter the quoted RFC 4342 paragraph above to clarify
this point.
Therefore, in contrast to [RFC3448], the initial CCID 3 sending rate
is allowed to be at least two packets per RTT, and at most four
packets per RTT, depending on the packet size. The initial rate is
only allowed to be three or four packets per RTT when, in terms of
segment size, that translates to at most 4380 bytes per RTT. This
might be implemented, for example, by setting the initial sending
rate to min(4*s, max(2*s, 4380 bytes)), where "s" as usual is the
packet size in bytes.
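For what it's worth, plugging a few packet sizes into that formula
shows the two-to-four packet range directly: with s = 536,
min(2144, max(1072, 4380)) = 2144 bytes, i.e. four packets per RTT;
with s = 1460, min(5840, max(2920, 4380)) = 4380 bytes, i.e. exactly
three packets; and with s = 4380, min(17520, max(8760, 4380)) = 8760
bytes, i.e. two packets.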
Will send an erratum in once all this has settled down; clearly more
specificity would help here...
Eddie