Re: Some comments on the draft of 3448/TFRC.bis (Feb 2007)

If this is the perspective, then it may be better to take the section
regarding the accumulation of send credits out of the specification and
concentrate on the purposes of congestion control instead. Otherwise it
mixes specification with implementation issues, which will only confuse
people.
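
For concreteness, here is a minimal sketch of the credit rule under
discussion (my own illustration, not text from the draft; the function
name and units are made up):

  /*
   * Send credit grows with the time elapsed since the last scheduler
   * wake-up, but is capped at max(t_ipi, t_gran).  All times are in
   * microseconds.
   */
  static long tfrc_update_credit(long credit, long elapsed_us,
                                 long t_ipi, long t_gran)
  {
          long limit = (t_ipi > t_gran) ? t_ipi : t_gran;

          credit += elapsed_us;       /* time since the last wake-up    */
          if (credit > limit)
                  credit = limit;     /* cap bounds bursts after idling */
          return credit;
  }

A packet can then be sent whenever credit >= t_ipi, each transmission
consuming t_ipi of credit, so at most max(t_ipi, t_gran)/t_ipi packets
go out back-to-back; this is exactly the t_gran/t_ipi burst noted in
the quoted text below.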

Quoting Sally Floyd:
|  > |  Gerrit -
|  > |
|  > |  I have revised draft-ietf-dccp-rfc3448bis-02b.txt
|  > |  ("http://www.icir.org/floyd/papers/draft-ietf-dccp-rfc3448bis-02b.txt")
|  > |  to say the following:
|  > |
|  > |      However, the TFRC sender is not allowed to accumulate
|  > |      `credits' of more than max(t_ipi, t_gran) time units in packet
|  > |      scheduling, so the sender is not allowed to send arbitrary
|  > |      bursts of packets after idle periods.
|  > |
|  > |  If you could read Section 4.7 on "Scheduling of Packet Transmissions"
|  > |  and see what you think, that would be great.
|  
|  > Idle periods are not the only possible cause; timing inaccuracies and
|  > a slow sending rate achieve the same effect over time.
|  > Using max(t_ipi, t_gran) allows large bursts again. On Gigabit networks,
|  > t_ipi = 100 usec (or less) is not unusual. If t_gran = 10 ms, then send
|  > credit builds up until t_gran is reached, which means that the sender
|  > can always send bursts of up to 100 packets or more (t_gran/t_ipi) at once.
|  
|  Yep.  That was the point.  Consider Case 1 and Case 2 below:
|  
|  Case 1: a high-bandwidth flow, no congestion, MSS-sized segments.
|  
|  One problem is what to do when t_gran is 10 ms and t_ipi is 1 ms.
|  Assume that there is no congestion, and the application is using MSS-sized
|  segments.  Does the TFRC sender only get to send two packets every
|  10 ms?  Or does the TFRC sender get to send ten packets every 10 ms,
|  as needed to achieve its TFRC-allowed sending rate of 1000 packets
|  per second?
|  
|  (TCP in this case would send as many packets as allowed by cwnd
|  each time it got a chance to send, with TCP as currently standardized
|  (i.e., not limited by rate-based pacing or maxburst or some such).
|  So the above is no worse than the short-term burstiness of TCP
|  with a CPU-limited sender.)
|  
|  
|  Case 2: a high-bandwidth flow, a transient shortage of CPU cycles
|  at the sender.
|  
|  What if the TFRC sender usually sends one packet each time
|  its turn at the CPU comes up, but there is a transient period
|  when the CPU is very busy with something else, and when
|  the TFRC sender's turn at the CPU comes around again,
|  it has a backlog and would like to send all K backlogged
|  packets.  Does it get to?  Or does it have to pace them out?
|  
|  
|  Of course, there are complications that are not addressed
|  in the language above:
|  
|  Case 3: a high-bandwidth flow, no congestion, small segments.
|  
|  We don't want to encourage a flow to continually send many small
|  packets each time its turn at the CPU comes around, when it
|  could instead send fewer large packets with essentially the
|  same one-way packet delay from the sending app to the
|  receiving app.  My language above doesn't address this.
|  
|  
|  Feedback?
|  
|  (I tend to work in the world of models and simulators, so
|  I could be missing something essential.  I am also cc-ing this
|  to Mark Handley, who I believe was the author of the
|  original text in RFC 3448.)
|  
|  - Sally
|  http://www.icir.org/floyd/
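
For reference, plugging the numbers from the discussion into the
max(t_ipi, t_gran) cap (my arithmetic, not text from the draft):

    burst_max = max(t_ipi, t_gran) / t_ipi

    t_ipi = 100 usec, t_gran = 10 ms  ->  burst_max = 10000/100  = 100 packets
    t_ipi =   1 ms,   t_gran = 10 ms  ->  burst_max = 10000/1000 =  10 packets

i.e. the Gigabit example can burst up to 100 packets at once, and the
numbers in Case 1 work out to ten packets per 10 ms rather than two.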

