Dear Lars,

Thanks for your reply :)
>> If you hadn't used dummy packets during the initial slow-start, then I
>> guess you would have got a large packet loss rate! :)
>
> Sorry, I don't understand - there were no dummy packets.
Ok, what I meant was that during the initial slow-start the sender has to ramp up to the encoding rate of the application. So if the application sends packets during that ramping period, they are going to get dropped, since the transmit buffer size is only 5. From what I read in Vlad's thesis, he used idle data to reach the appropriate rate before sending the real data. According to his thesis, "the initial connection establishment (for example through SIP messaging) the two parties exchange idle data in order to achieve an appropriate rate for beginning voice transmission" - and I presume this paper was based on Vlad's thesis work :). But if he hadn't used idle data, then there would have been a lot of packet losses, and on a network with large delay the performance would have suffered badly. A rough sketch of what I mean is below.
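To make that concrete, here is a toy sketch (in Python) of the point - this is not Vlad's actual code; the slow-start doubling, the codec rate of 10 packets per RTT and the number of round trips are assumptions I have made up for illustration, and only the 5-packet send buffer comes from our discussion:

SEND_BUFFER = 5          # sender transmit buffer, in packets (from the discussion)
APP_PKTS_PER_RTT = 10    # assumed codec output per RTT
RTTS = 8                 # number of round trips simulated

def slow_start_drops(prime_rtts=0):
    """Count sender-side drops of real voice packets; during the first
    `prime_rtts` round trips the application sends idle data, so drops
    in that period are not counted against the voice stream."""
    cwnd = 1             # packets that may leave the sender per RTT
    backlog = 0          # packets queued in the send buffer
    drops = 0
    for rtt in range(RTTS):
        backlog += APP_PKTS_PER_RTT              # app writes one RTT worth of frames
        sent = min(cwnd, backlog)                # window-limited transmission
        backlog -= sent
        overflow = max(0, backlog - SEND_BUFFER) # excess packets are discarded
        backlog -= overflow
        if rtt >= prime_rtts:                    # only count drops of real voice data
            drops += overflow
        cwnd = min(2 * cwnd, APP_PKTS_PER_RTT)   # slow-start doubling, capped at the codec rate
    return drops

print("drops without idle-data priming:", slow_start_drops(prime_rtts=0))
print("drops with 4 RTTs of idle data :", slow_start_drops(prime_rtts=4))

With these made-up numbers the run without priming loses roughly a quarter of the voice packets in the first few RTTs, whereas four RTTs of idle data are enough to absorb the ramp-up before any real voice data is sent.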
> The configured packet loss rate that is a parameter of the experiment
> causes only random drops at the router. Additional packets may be
> dropped at the sender - independent of that parameter - if the app
> sends faster than the current window/rate. I believe Vlad used a send
> buffer of 5 packets. If that's not in the paper, we definitely need to
> add it. I also believe that send buffer overflow was extremely rare,
> but Vlad would know better.
So I guess the subfactor I(e) hasn't taken the sender-buffer drops into account? That seems like a major factor that needs to be considered.
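Just to spell out what I have in mind: if the sender-buffer drops and the random router drops are independent, then the loss rate the receiver actually sees - and hence the value that should feed I(e) - would be the combination of the two rather than the configured rate alone. A small sketch, with made-up example numbers:

def end_to_end_loss(p_buf, p_net):
    """Probability that a frame is lost either in the send buffer or in
    the network, assuming the two drop processes are independent."""
    return p_buf + (1.0 - p_buf) * p_net

p_net = 0.05   # configured router loss rate (example value)
p_buf = 0.01   # sender-buffer drop rate (example value; you say this was very rare)
print("loss rate seen by the receiver: %.3f" % end_to_end_loss(p_buf, p_net))

If the buffer drops really were extremely rare, the correction is small, but it would be good to state that explicitly.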
> Thanks again for the feedback!
You are welcome :)

--
Regards,
Arjuna
Postdoctoral Researcher
Engineering Research Lab, Department of Engineering,
University of Aberdeen