
Re: BQL crap and wireless

On 08/30/2011 05:47 PM, Andrew McGregor wrote:
> On 31/08/2011, at 1:58 AM, Jim Gettys wrote:
>
>> On 08/29/2011 11:42 PM, Adrian Chadd wrote:
>>> On 30 August 2011 11:34, Tom Herbert <therbert@xxxxxxxxxx> wrote:
>>>
>>> C(P) is going to be quite variable - a full frame retransmit of a 4ms
>>> long aggregate frame is SUM(exponential backoff, grab the air,
>>> preamble, header, 4ms, etc. for each pass.)
>>>
>> It's not clear to me that doing heroic measures to compute the cost is
>> going to be worthwhile due to the rate at which the costs can change on
>> wireless; just getting into the rough ballpark may be enough. But
>> buffering algorithms and AQM algorithms are going to need an estimate of
>> the *time* it will take to transmit data, more than # of bytes or packets.
> That's not heroic measures; mac80211 needs all the code to calculate these times anyway, it's just a matter of collecting together some things we already know and calling the right function.
>
>

Fine; if it's easy, accurate is better (presuming the costs get
recalculated when circumstances change). We will also need the amount of
data being transmitted, since it is the rate of transmission (the rate
at which the buffers are draining) that we'll most likely need.
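To make that concrete, here is roughly the shape of the per-transmission
time estimate being discussed; this is a sketch only, not mac80211's
actual code, and every name and constant in it is an assumption of mine
(real 802.11n timing also involves per-MCS preambles, protection frames,
block-ack exchanges, and so on):

/* Rough sketch of estimating the airtime cost of one transmission.
 * All names and constants are illustrative assumptions, not mac80211
 * internals.
 */
#include <stdint.h>

#define DIFS_US            34   /* assumed 802.11n 5 GHz DIFS          */
#define AVG_BACKOFF_US     67   /* assumed mean backoff, CWmin = 15    */
#define PLCP_PREAMBLE_US   40   /* assumed HT preamble + PLCP header   */
#define ACK_OVERHEAD_US    68   /* assumed block-ack exchange overhead */

/* Estimated microseconds to get one aggregate of 'bytes' onto the air
 * at 'rate_kbps', including one contention round.  A retry pays most
 * of this cost again, which is why the estimate has to be refreshed
 * whenever the rate or retry behaviour changes.
 */
static uint32_t estimate_tx_airtime_us(uint32_t bytes, uint32_t rate_kbps)
{
	uint32_t payload_us = ((uint64_t)bytes * 8 * 1000) / rate_kbps;

	return DIFS_US + AVG_BACKOFF_US + PLCP_PREAMBLE_US +
	       payload_us + ACK_OVERHEAD_US;
}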

Here's what I've gleaned from reading "RED in a Different Light", Van
Jacobson's MITRE talk, and several conversations with Kathleen Nichols
and Van: AQM algorithms that can handle variable-bandwidth environments
will need to know the rate at which the buffers empty. That's the
direction they are taking with their experiments on a "RED light" algorithm.
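In practice "knowing the rate at which buffers empty" could look
something like the following; again a sketch under my own assumptions
(a driver reporting bytes and airtime per completed transmission), not
an existing kernel interface:

/* Hypothetical drain-rate tracker: a smoothed bytes-per-second figure,
 * updated whenever the driver reports a completed transmission.
 */
#include <stdint.h>

struct drain_rate {
	uint64_t bytes_per_sec;   /* smoothed estimate */
};

/* Fold one completed transmission into the estimate.  'airtime_us' is
 * the time the transmission actually occupied the air, so the estimate
 * tracks the currently achievable rate rather than the nominal link rate.
 */
static void drain_rate_update(struct drain_rate *dr,
			      uint32_t bytes, uint32_t airtime_us)
{
	uint64_t sample;

	if (!airtime_us)
		return;

	sample = (uint64_t)bytes * 1000000 / airtime_us;

	if (!dr->bytes_per_sec) {
		dr->bytes_per_sec = sample;
		return;
	}

	/* EWMA with weight 1/8: new = old - old/8 + sample/8 */
	dr->bytes_per_sec = dr->bytes_per_sec - dr->bytes_per_sec / 8 +
			    sample / 8;
}

/* How long the currently queued bytes will take to drain at the
 * estimated rate -- the quantity an AQM actually cares about, rather
 * than the byte count itself.
 */
static uint64_t drain_time_us(const struct drain_rate *dr,
			      uint64_t queued_bytes)
{
	if (!dr->bytes_per_sec)
		return 0;
	return queued_bytes * 1000000 / dr->bytes_per_sec;
}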

The fundamental insight as to why classic RED can't work in the wireless
environment is that the instantaneous queue length carries very little
actual information, yet classic RED is tuned using the queue length as
its basic parameter.  Their belief is that algorithms that will work
need to track the running *minimum* of the queue length over time: you
want to keep the buffers small on a longer-term basis, so that they can
still absorb transients (which is their reason for existence) while
keeping the latency typically low.  The additional major challenge we
face, which core routers do not, is the highly variable traffic of mixed
mice and elephants.  Only time will tell what actually works.
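To illustrate the "running minimum over time" idea (and only to
illustrate it; this is not their "RED light" algorithm, just my sketch
of the bookkeeping, with made-up names and a made-up 100 ms window):

/* Record the smallest backlog seen during each fixed interval.  A
 * persistently non-zero minimum means the buffer never fully drains,
 * i.e. a standing queue that AQM should act on; a minimum that keeps
 * returning to zero means the buffer is only absorbing bursts.
 */
#include <stdint.h>

#define MIN_WINDOW_US  100000   /* assumed 100 ms observation interval */

struct queue_min {
	uint32_t running_min;     /* min backlog seen in current window */
	uint32_t last_window_min; /* min from the previous window       */
	uint64_t window_start_us;
};

static void queue_min_sample(struct queue_min *qm,
			     uint32_t backlog_bytes, uint64_t now_us)
{
	if (now_us - qm->window_start_us >= MIN_WINDOW_US) {
		/* Window expired: publish its minimum and start over. */
		qm->last_window_min = qm->running_min;
		qm->running_min = backlog_bytes;
		qm->window_start_us = now_us;
		return;
	}

	if (backlog_bytes < qm->running_min)
		qm->running_min = backlog_bytes;
}

/* The value an AQM would consult: the standing queue, i.e. data that
 * survived an entire window without the buffer ever emptying.
 */
static uint32_t queue_standing_backlog(const struct queue_min *qm)
{
	return qm->last_window_min;
}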

So in an environment in which the rate of transmission is highly
variable, such as wireless, or even possibly modern broadband with
PowerBoost, classic RED or similar algorithms that do not take the
buffer drain rate into account cannot possibly hack it properly.
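As a rough worked example (numbers mine, purely illustrative): 64 KB of
queued data is about a 5 ms backlog when the link is draining at
100 Mbit/s, but roughly half a second when it has dropped to 1 Mbit/s,
so the same instantaneous queue length describes two completely
different latency situations.
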
                        - Jim



