Re: DCCP work ideas

Hi All,

See inline...

Tom P.

> -----Original Message-----
> From: dccp-bounces@xxxxxxxx [mailto:dccp-bounces@xxxxxxxx] On Behalf Of
> Arjuna Sathiaseelan
> Sent: Thursday, July 30, 2009 2:42 AM
> To: 'Bryan Ford'; Phelan, Tom
> Cc: gorry@xxxxxxxxxxxxxx; dccp@xxxxxxxx; 'Pasi Sarolahti'
> Subject: Re:  DCCP work ideas
> 
> 
> Dear Bryan,
> 
> Some thoughts here... Please see inline.
> 
> > 1. The CC algorithm should NOT increase the rate/window allowance
> > significantly beyond the amount of data actually being sent.  That is,
> > rate/window increase should always represent "real" bandwidth
> > availability information, not just an absence of congestion events due
> > to the app not using its allowance.
> 
> That's the Congestion Window Validation (CWV) idea. I think we need to
> quantify how significantly the rate/window grows when the data is not
> actually sent. CCID-2 only increases the window by 1 segment per RTT
> (when in congestion avoidance) and doubles it each RTT during
> slow-start. However, when the app is not sending enough data
> (app-limited/idle for an RTO), I think CCID-2 would do CWV and limit
> itself. Similarly, CCID-3 (if it follows RFC 5348) would, during an
> app-limited/idle period, be limited by the maximum cached receiver rate
> (which means the rate would not grow) until the app starts sending
> data again. I would like to give an example here:
> 
> Say the app sends at 256 kbps for a period of time, but then it rate
> switches to 64 kbps. CCID-3 (following RFC 5348) would consider this
> switch to be an app-limited case, and would set its maximum cached
> receiver rate to 256 kbps, which means the sender can still send up to
> 256 kbps (under the notion that it has previously sent this much). So
> when the sender wants to move back up from 64 kbps to 256 kbps, there
> is no problem, since CCID-3 would allow you to send (provided there
> was no negative feedback).
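> 
> A rough pseudo-Python sketch of that caching behavior (names are
> illustrative, not taken from any real implementation):
> 
>     # Sketch of RFC 5348-style receive-rate caching during data-limited
>     # periods: keep the largest recent X_recv so the allowed rate does
>     # not collapse to twice the (now low) current receive rate.
>     class TfrcSender:
>         def __init__(self, initial_rate_bps):
>             self.x_recv_set = [initial_rate_bps]  # recent receiver rates
>             self.x_calc = float("inf")  # rate from throughput equation
> 
>         def on_feedback(self, x_recv, data_limited):
>             if data_limited:
>                 # Keep the cached maximum, so an earlier 256 kbps sample
>                 # stays usable while the app is sending only 64 kbps.
>                 self.x_recv_set = [max(self.x_recv_set + [x_recv])]
>             else:
>                 self.x_recv_set = [x_recv]
> 
>         def allowed_rate(self):
>             # X = min(X_calc, 2 * max(X_recv_set))
>             return min(self.x_calc, 2 * max(self.x_recv_set))
> 
> With this, after the switch down to 64 kbps the allowed rate is still
> about 2 * 256 = 512 kbps, so moving back to 256 kbps needs no probing.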
> 
[Tom P.] Well, this isn't a very interesting case -- the app has voluntarily switched to a lower rate.  The interesting case is when the app has been forced to switch to the lower rate because of negative feedback, but now the negative feedback has stopped.  How do you tell the difference between being well-balanced and having new capacity available?

I agree (with Arjuna's statement below) that padding is not an appetizing idea, but is there any other way to tell the difference between the two cases without offering up some sacrificial data?

At the DCCP layer, any sacrificial data would probably have to be padding, but at the app layer sacrificial data might not have to be completely padding.  A layered codec could start transmitting an additional layer.  The receiver could then use that layer if it arrives without loss (maybe waiting a few frames to be sure).

In both of these cases (DCCP padding and layering), it isn't possible to control which data actually gets sacrificed (the padding/extra layer or the base layer).  A way around that is for the transmitter (this could be at the DCCP layer) to add FEC to increase the bit rate but still enable recovery from losses.  Of course FEC recovery adds some delay.  Maybe all of this is too complicated.
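
A very rough Python sketch of the FEC variant (illustrative only;
make_xor_repair() is a hypothetical FEC encoder, not a real API):

    def send_with_probe(sock, media_packets, probe_ratio):
        # probe_ratio = 0.25 -> one repair packet per four media packets.
        n_repair = max(1, int(len(media_packets) * probe_ratio))
        repair = make_xor_repair(media_packets, n_repair)  # hypothetical
        for pkt in media_packets + repair:
            sock.send(pkt)

    def next_probe_ratio(loss_seen, probe_ratio, target_ratio, step=0.05):
        if loss_seen:
            return 0.0  # negative result: back off, stop probing
        # Additive increase toward the extra bandwidth the app wants.
        return min(target_ratio, probe_ratio + step)

If a probe does trigger loss, the repair packets let the receiver
recover the base data, so the sacrificed bits are mostly redundancy.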

> If CCID-3 implemented RFC 3448 (which has now been obsoleted), then you
> would have the problem you were talking about. Then you would need some
> way of faster restarting/probing to get to a higher rate. You could use
> mechanisms such as Faster Restart or Quick-Start to achieve this.
> However, this depends on various factors such as RTT, transmit buffer
> and playout buffer sizes, and the granularity of the media rates. You
> need not worry about streaming, since you have adequate playout/transmit
> buffers to handle this (up to 20 s worth of buffering). What you need to
> worry about is the conversational class of traffic, which has limited
> buffer requirements (perhaps 400 ms of buffering for video conferencing
> apps, or up to 100 ms for VoIP apps). We actually need to classify what
> the current app media rate granularities are. If the app switches to
> twice the previous media rate, CCID-3 would handle it, i.e. CCID-3 would
> allow you to send at twice the previous sending rate.
> 
> More than twice would require a few RTTs. For smaller RTTs the sender
> would be able to grow faster, so I don't see a big problem there. The
> only problem is for larger RTTs (above 50 ms or so). Then you may need
> mechanisms such as Faster Restart/Quick-Start (Faster Restart would
> allow you to achieve 4 times the previous sending rate -- which means if
> I had been sending at 64 kbps, I could jump to 256 kbps -- and FR would
> support it if and only if I had sent at 256 kbps within the last few
> RTTs without any negative feedback).
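> 
> To make the arithmetic concrete, a toy calculation (not implementation
> code):
> 
>     def allowed_after_switch(prev_rate_kbps, faster_restart=False):
>         # Plain RFC 5348: up to twice the previous rate per RTT;
>         # Faster Restart (expired draft) aimed at up to four times.
>         factor = 4 if faster_restart else 2
>         return factor * prev_rate_kbps
> 
>     allowed_after_switch(64)        # 128 -- reaching 256 takes more RTTs
>     allowed_after_switch(64, True)  # 256 -- reachable in one step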
> 
> 
> Some of my arguments may fail if the sender received frequent negative
> feedback (ECN marking/loss) -- that may be the reason why the media rate
> switched from a higher to a lower rate in the first place. Then the
> above mechanisms start behaving conservatively: CCID-3 would be limited
> by the throughput equation and you won't be able to send more than that,
> while CCID-2 would do AIMD and probe to see if it's getting positive
> feedback. But the app is also going to bite the bullet -- if the
> transport is getting frequent negative feedback, the transport may not
> be able to satisfy even the lowest media rate in some cases, which means
> the app packets would end up getting discarded in the transmit buffer.
> So the app has to just wait and see if it can switch to a higher rate.
> The app may determine this by some signaling from the transport, or by
> looking at the rate at which the transmit buffer is being emptied
> (exactly how is an open question).
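> 
> For instance, the app could do something like this (purely illustrative
> pseudo-Python):
> 
>     import time
> 
>     class DrainMonitor:
>         """Estimate how fast the transport drains the transmit buffer."""
>         def __init__(self):
>             self.last_bytes, self.last_time = 0, time.monotonic()
> 
>         def drain_rate_bps(self, total_bytes_sent):
>             now = time.monotonic()
>             elapsed = max(now - self.last_time, 1e-6)
>             rate = 8 * (total_bytes_sent - self.last_bytes) / elapsed
>             self.last_bytes, self.last_time = total_bytes_sent, now
>             return rate
> 
>     def can_switch_up(monitor, total_bytes_sent, next_rate_bps, margin=1.25):
>         # Only step up once the drain rate comfortably exceeds the next rate.
>         return monitor.drain_rate_bps(total_bytes_sent) > margin * next_rate_bps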
> 
> 
> > 3. Different approaches to probing are conceivable.  The simplest way
> > is for DCCP to insert pad data and/or extra packets, growing according
> > to the CC algorithm's additive increase function, until the bandwidth
> > artificially achieved via this padding reaches the app's request (a
> > positive result) or a congestion event occurs (a negative result,
> > cancelling the probe process and turning off the padding until the
> > next probe).  This will waste bandwidth, but only briefly once in a
> > while, and its bandwidth consumption will still be a lot less
> > aggressive over time than, say, a single TCP bulk data transfer.
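> > 
> > In pseudo-Python, that loop might look roughly like this (the
> > congested() and send_padding() hooks are invented stand-ins for CC
> > feedback and pad transmission):
> > 
> >     def probe(current_rate, requested_rate, step_per_rtt,
> >               congested, send_padding):
> >         padded_rate = current_rate
> >         while padded_rate < requested_rate:
> >             if congested():
> >                 return False             # negative result: stop padding
> >             padded_rate += step_per_rtt  # additive increase via padding
> >             send_padding(padded_rate - current_rate)
> >         return True                      # requested bandwidth looks free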
> 
> I think padding is harmful and we should not use it :)
> 
> 
> Regards
> Arjuna
> 
> 
> >
> > > There are other mismatches as well.  CCID3 adjusts the allowed rate
> > > on a continuous spectrum, which can cause problems for apps that can
> > > only make step adjustments, and it also makes those adjustments at
> > > what look to the app like arbitrary moments.  Many media apps can
> > > only make rate adjustments at frame boundaries.  What do they do
> > > with the data that's already encoded from the last frame when CCID3
> > > makes an allowed rate change?  Note that typical frame times and
> > > typical RTTs are rough-order-of-magnitude similar.  Would it really
> > > hurt to wait until the next frame to change the rate?  But of course
> > > there's no mechanism in CCID3 to support that.
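> > >
> > > Something like the following (purely a sketch) is what I mean by
> > > holding rate changes to frame boundaries:
> > >
> > >     class FrameBoundaryRate:
> > >         def __init__(self, initial_rate):
> > >             self.active_rate = initial_rate   # rate the encoder uses now
> > >             self.pending_rate = initial_rate  # latest rate from CCID3
> > >
> > >         def on_ccid3_rate_change(self, new_rate):
> > >             self.pending_rate = new_rate      # note it; don't apply mid-frame
> > >
> > >         def on_frame_boundary(self):
> > >             self.active_rate = self.pending_rate
> > >             return self.active_rate           # encoder re-targets here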
> > >
> > > Some of these topics might be better for ICCRG, but the idea as I
> > > understand it is for DCCP and ICCRG to work together on these sorts
> > > of things.
> > >
> > > Tom P.
> > >
> > > ________________________________________
> > > From: dccp-bounces@xxxxxxxx [mailto:dccp-bounces@xxxxxxxx] On Behalf
> > > Of Arjuna Sathiaseelan
> > > Sent: Wednesday, July 29, 2009 12:55 PM
> > > To: 'Bryan Ford'; 'Pasi Sarolahti'; dccp@xxxxxxxx
> > > Cc: gorry@xxxxxxxxxxxxxx
> > > Subject: Re:  DCCP work ideas
> > >
> > >
> > > Dear Bryan,
> > >
> > >   DCCP's CCIDs do probe for capacity: e.g. CCID-3 (which follows
> > > TFRC) would allow the sender to send up to twice the receiver rate,
> > > or that allowed by the throughput equation (whichever is smaller),
> > > and hence it's up to the application to decide whether to go back
> > > to its original higher media rate. CCID-2 would grow its cwnd like
> > > TCP would, and hence probing occurs here too.
> > >
> > > The RFC-to-be draft Quick-Start for DCCP allows the use of QS with
> > > DCCP, and hence the sender could probe for additional capacity
> > > using QS, which in turn could be used by the app to decide whether
> > > to use a higher media rate.
> > >
> > > So I believe that it's an application/API problem rather than a
> > > transport one.
> > >
> > >
> > > Correct me if I am wrong ☺
> > >
> > >
> > > Regards
> > > Arjuna
> > >
> > > DCCP's congestion control will not try to probe for bandwidth, and
> > > the application will never know when it can move back up to
> > > 128 Kbps.  So one could solve this by developing an extension to
> > > DCCP's congestion control mechanisms and a corresponding API
> > > allowing applications to maintain a standing "request" for more
> > > bandwidth than they're actually using at the moment, and to notify
> > > the application when the full amount of requested bandwidth appears
> > > to be available.  That should allow media applications to follow
> > > DCCP's congestion control decisions without giving up the control
> > > they need in order to utilize available bandwidth dynamically.
> > > There are several alternative ways to achieve this at the
> > > congestion control level, at least one of which might even be
> > > reasonably safe and efficient; I'll try to write it up in a
> > > follow-on E-mail shortly.
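> > >
> > > As a strawman, the API could look roughly like this (all names are
> > > invented):
> > >
> > >     class RequestingSocket:
> > >         """DCCP-like socket with a standing bandwidth request."""
> > >         def __init__(self):
> > >             self.requested_bps = None
> > >             self.on_available = None
> > >
> > >         def request_bandwidth(self, bps, callback):
> > >             # App asks for more than it is currently using.
> > >             self.requested_bps, self.on_available = bps, callback
> > >
> > >         def _cc_rate_update(self, allowed_bps):
> > >             # Called by the congestion controller on allowance changes.
> > >             if (self.requested_bps is not None
> > >                     and allowed_bps >= self.requested_bps):
> > >                 self.on_available(allowed_bps)
> > >                 self.requested_bps = None  # one-shot; app may re-arm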
> > >
> > > Thanks,
> > > Bryan
> > >
> 


