The intent is, I think, that one follow the TFRC Average Loss Interval
calculation from RFC 3448 section 5.4, setting n to the minimum of 8 and the
number of loss intervals observed so far. Thus, if only one loss interval has
been observed, n = 1 and the calculation devolves to I_mean = I_0 (the first
loss interval).
Eddie
Soo-Hyun Choi wrote:
Hi,
In general, if the number of loss intervals in the history is less than 8,
it would be more appropriate to consider only the available history when
you calculate the loss event rate.
But even if we use 8 loss intervals to compute the loss event rate when
only two of them are meaningful, it should not harm overall performance
much; that is, it should not do badly in a steady-state environment.
If you use only the available (meaningful) history to compute the loss
event rate, the protocol's time to adapt to the steady state would
decrease. Other than that, the performance of the protocol is expected
to remain the same.
Soo-Hyun
P.S.
If you have a very high-speed link and a very large bottleneck queue
with only a small amount of low-speed application traffic, then you
probably want to consider the available history information when you
calculate the loss event rate.