Re: DCCP work ideas

Hi Tom, Arjuna,

On the step-function rate adjustment problem, Tom wrote:

Flow control for apps whose sending rates are step functions is an interesting nut. As Arjuna says, TFRC (CCID3) does allow the sending rate cap to grow to twice the current sending rate without demand from the app, and CCID2 likewise allows the window to grow without demand (up to some cap, I believe). So as long as your app's step sizes are less than a factor of 2, it could return to the higher rate if there were some API support to indicate that the higher rate was available.

There are other problems with returning to a higher rate, though. If you've been transmitting at X for a while, sure, CCID3 will allow you to double that, but it has no knowledge of whether that's really possible or not (has capacity become available or are things just well-balanced as they are?). You could return to the higher rate only to be forced immediately back to the lower rate. And the psychological impression of varying quality is worse than consistently low quality.

Yes, exactly - DCCP should not tell the application that more bandwidth is available unless it has real information to that effect; that would just cause the bad oscillations you describe. My idea in a nutshell is:

1. The CC algorithm should NOT increase the rate/window allowance significantly beyond the amount of data actually being sent. That is, rate/window increase should always represent "real" bandwidth availability information, not just an absence of congestion events due to the app not using its allowance.

2. Instead, when the application's standing request exceeds its actual usage for some time, DCCP (transparently to the application) probes at randomly jittered periodic times to see whether the requested bandwidth is actually available. Only if the bandwidth is confirmed to be available in this way does DCCP notify the app.

3. Different approaches to probing are conceivable. The simplest is for DCCP to insert pad data and/or extra packets, growing according to the CC algorithm's additive increase function, until the bandwidth artificially achieved via this padding reaches the app's request (a positive result), or a congestion event occurs (a negative result, cancelling the probe process and turning off the padding until the next probe). This will waste bandwidth, but only briefly once in a while, and its bandwidth consumption will still be far less aggressive over time than, say, a single TCP bulk data transfer. A rough sketch of this probe loop follows the list below.

4. The delay between probes might start small but increase multiplicatively with each unsuccessful probe up to some maximum, so if the channel is consistently limited to less than the app's request, the bandwidth wastage due to probes tapers off over time to some negligible amount.

5. More efficient probe techniques could be considered, e.g. from the extensive literature on bandwidth probing by measuring inter-packet delays, if we ever get enough confidence that some such algorithm actually works reliably and is "safe". But if that doesn't happen, even the basic probing scheme above should "obviously" work, is "obviously" TCP-friendly by construction, and shouldn't be too wasteful given reasonable and adaptive probe periods.
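
To make steps 2-4 concrete, here's a rough C sketch of the probe state machine I have in mind. Everything here is hypothetical and illustrative; none of these names exist in any DCCP implementation:

    /* Sketch of the padding-probe machinery in steps 2-4 above.
     * Rates are in bits per second; all names are made up.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    struct probe_state {
        uint64_t app_request_bps;   /* standing request from the app (step 2) */
        uint64_t actual_usage_bps;  /* measured application sending rate */
        uint64_t pad_bps;           /* current artificial padding rate (step 3) */
        uint64_t pad_step_bps;      /* additive-increase step, applied per RTT */
        unsigned probe_interval_s;  /* base delay between probes (step 4) */
        unsigned max_interval_s;    /* cap on the multiplicative backoff */
        bool     probing;
    };

    /* Step 2: probes fire at randomly jittered periodic times. Jitter
     * the base interval by +/- 25% so probes from many flows don't
     * synchronize.
     */
    static unsigned next_probe_delay(const struct probe_state *ps)
    {
        unsigned j = ps->probe_interval_s / 4;
        return ps->probe_interval_s - j + (unsigned)(rand() % (int)(2 * j + 1));
    }

    /* Steps 3-4: called once per RTT while a probe is active. */
    static void probe_tick(struct probe_state *ps, bool congestion_event)
    {
        if (!ps->probing)
            return;

        if (congestion_event) {
            /* Negative result: stop padding, back off multiplicatively. */
            ps->probing = false;
            ps->pad_bps = 0;
            ps->probe_interval_s *= 2;
            if (ps->probe_interval_s > ps->max_interval_s)
                ps->probe_interval_s = ps->max_interval_s;
            return;
        }

        /* Grow the padding additively, as the CC algorithm would. */
        ps->pad_bps += ps->pad_step_bps;

        if (ps->actual_usage_bps + ps->pad_bps >= ps->app_request_bps) {
            /* Positive result: the requested rate was actually achieved,
             * so stop padding and notify the app (hypothetical upcall). */
            ps->probing = false;
            ps->pad_bps = 0;
            /* notify_app_bandwidth_available(ps->app_request_bps); */
        }
    }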

Cheers,
Bryan

There are other mismatches as well. CCID3 adjusts the allowed rate on a continuous spectrum, which can cause problems for apps that can only make step adjustments, but it also makes those adjustments at moments that look arbitrary to the app. Many media apps can only make rate adjustments at frame boundaries. What do they do with the data that's already encoded from the last frame when CCID3 makes an allowed-rate change? Note that typical frame intervals and typical RTTs are of roughly the same order of magnitude. Would it really hurt to wait until the next frame to change the rate? But of course there's no mechanism in CCID3 to support that; one possible app-side workaround is sketched below.
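
One conceivable app-side shim, purely a sketch (nothing like it exists in CCID3 or any API today): apply rate decreases immediately, so the sender never exceeds the transport's allowance, but hold increases until the next frame boundary:

    struct rate_latch {
        double current_bps;  /* rate the encoder is actually using */
        double pending_bps;  /* latest allowed rate announced by CCID3 */
    };

    /* Called whenever the transport announces a new allowed rate. */
    static void on_allowed_rate_change(struct rate_latch *rl, double allowed_bps)
    {
        rl->pending_bps = allowed_bps;
        if (allowed_bps < rl->current_bps)
            rl->current_bps = allowed_bps;   /* decreases take effect now */
    }

    /* Called by the encoder at each frame boundary; increases apply here. */
    static double rate_for_next_frame(struct rate_latch *rl)
    {
        rl->current_bps = rl->pending_bps;
        return rl->current_bps;
    }

Since frame intervals and RTTs are of similar magnitude, an increase would be deferred by at most roughly one RTT.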

Some of these topics might be better for ICCRG, but the idea as I understand it is for DCCP and ICCRG to work together on these sorts of things.

Tom P.

________________________________________
From: dccp-bounces@xxxxxxxx [mailto:dccp-bounces@xxxxxxxx] On Behalf Of Arjuna Sathiaseelan
Sent: Wednesday, July 29, 2009 12:55 PM
To: 'Bryan Ford'; 'Pasi Sarolahti'; dccp@xxxxxxxx
Cc: gorry@xxxxxxxxxxxxxx
Subject: Re: DCCP work ideas


Dear Bryan,

DCCP's CCIDs do probe for capacity. For example, CCID 3 (which follows TFRC) would allow the sender to send up to twice the receiver rate, or the rate allowed by the throughput equation, whichever is smaller, and hence it's up to the application to decide whether to move back to its original, higher media rate. CCID 2 would grow its cwnd like TCP does, and hence probing occurs here too…
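
(In RFC 3448's terms, that cap is roughly min(X_calc, 2*X_recv); a minimal C sketch of just that cap, omitting the RFC's minimal-rate floor:)

    /* X_calc: rate from the TCP throughput equation; X_recv: rate the
     * receiver reports actually receiving. Sketch of the cap only.
     */
    static double tfrc_allowed_rate(double x_calc, double x_recv)
    {
        double cap = 2.0 * x_recv;
        return (x_calc < cap) ? x_calc : cap;
    }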

The RFC-to-be draft Quick-Start for DCCP allows the use of QS with DCCP, and hence the sender could probe for additional capacity using QS, which in turn could be used by the app to decide whether to use a higher media rate..

So I believe that it's an application/API problem rather than a transport problem..


Correct me if I am wrong ☺


Regards
Arjuna

DCCP's congestion control will not try to probe for bandwidth, and the application will never know when it can move back up to 128Kbps. I propose to solve this by developing an extension to DCCP's congestion control mechanisms and a corresponding API that lets applications maintain a standing "request" for more bandwidth than they're actually using at the moment, and that notifies the application when the full amount of requested bandwidth appears to be available. That should allow media applications to follow DCCP's congestion control decisions without giving up the control they need in order to utilize available bandwidth dynamically. There are several alternative ways to achieve this at the congestion control level, at least one of which might even be reasonably safe and efficient; I'll try to write it up in a follow-on E-mail shortly.
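
To make the API shape concrete, here is a purely hypothetical application-side sketch in C; the DCCP_SOCKOPT_BW_* options below do not exist anywhere and only illustrate the kind of interface I mean:

    #include <stdio.h>
    #include <stdint.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    #ifndef SOL_DCCP
    #define SOL_DCCP 269                  /* Linux socket level for DCCP */
    #endif
    #define DCCP_SOCKOPT_BW_REQUEST   200 /* hypothetical */
    #define DCCP_SOCKOPT_BW_AVAILABLE 201 /* hypothetical */

    int main(void)
    {
        int sk = socket(AF_INET, SOCK_DCCP, IPPROTO_DCCP);
        if (sk < 0) { perror("socket"); return 1; }

        /* Standing request: "tell me when 128Kbps is available again",
         * kept in force even while we send at a lower rate. */
        uint32_t want_bps = 128000;
        setsockopt(sk, SOL_DCCP, DCCP_SOCKOPT_BW_REQUEST,
                   &want_bps, sizeof(want_bps));

        /* Later, e.g. on a poll() notification, ask whether a probe
         * confirmed that the requested bandwidth really is available. */
        uint32_t avail_bps = 0;
        socklen_t len = sizeof(avail_bps);
        getsockopt(sk, SOL_DCCP, DCCP_SOCKOPT_BW_AVAILABLE, &avail_bps, &len);
        if (avail_bps >= want_bps)
            printf("encoder can move back up to %u bps\n", want_bps);
        return 0;
    }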

Thanks,
Bryan


