Re: [RFC v2] mac80211: implement eBDP algorithm to fight bufferbloat

On Mon, Feb 21, 2011 at 10:47 AM, John W. Linville
<linville@xxxxxxxxxxxxx> wrote:
> On Fri, Feb 18, 2011 at 07:44:30PM -0800, Nathaniel Smith wrote:
>> On Fri, Feb 18, 2011 at 1:21 PM, John W. Linville
>> <linville@xxxxxxxxxxxxx> wrote:
>> > +       /* grab timestamp info for buffer control estimates */
>> > +       tserv = ktime_sub(ktime_get(), skb->tstamp);
>> [...]
>> > +               ewma_add(&sta->sdata->qdata[q].tserv_ns_avg,
>> > +                        ktime_to_ns(tserv));
>>
>> I think you're still measuring how long it takes one packet to get
>> from the end of the queue to the beginning, rather than measuring how
>> long it takes each packet to go out?
>
> Yes, I am measuring how long the driver+device takes to release each
> skb back to me (using that as a proxy for how long it takes to get
> the fragment to the next hop).  Actually, FWIW I'm only measuring
> that time for those skb's that result in a tx status report.
>
> I tried to see how your measurement would be useful, but I just don't
> see how the number of frames ahead of me in the queue is relevant to
> the measured link latency?  I mean, I realize that having more packets
> ahead of me in the queue is likely to increase the latency for this
> frame, but I don't understand why I should use that information to
> discount the measured latency...?

It depends on which latency you want to measure. The way I reasoned
was: suppose that at some given time, the card is able to transmit 1
fragment every T nanoseconds. Then it can transmit n fragments in n*T
nanoseconds, so if we want the queue depth to be 2 ms worth of
fragments, we have
  n * T = 2 * NSEC_PER_MSEC
  n = 2 * NSEC_PER_MSEC / T

Which is the calculation that you're doing:

+                       sta->sdata->qdata[q].max_enqueued =
+                               max_t(int, 2, 2 * NSEC_PER_MSEC / tserv_ns_avg);
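
To put made-up numbers on that calculation: if the EWMA converges to
tserv_ns_avg = 200,000 ns (200 us per fragment), then

  max_enqueued = 2 * NSEC_PER_MSEC / 200,000 = 10

so the queue would be capped at 10 frames.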

But for this calculation to make sense, we need T to be the time it
takes the card to transmit 1 fragment. In your patch, you're not
measuring that. You're measuring the total time between when a packet
is enqueued and when it is transmitted; if there were K packets in the
queue ahead of it, then this is the time to send *all* of them --
you're measuring (K+1)*T. That's why in my patch, I recorded the
current size of the queue when each packet is enqueued, so I could
compute T = total_time / (K+1).
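
In code, the bookkeeping I mean is roughly the following sketch (the
queued_ahead and n_enqueued names are invented here for illustration,
not the actual field names):

  /* enqueue path: stamp the skb and remember the backlog K */
  skb->tstamp = ktime_get();
  info->queued_ahead = qdata->n_enqueued;         /* this is K */

  /* tx status path: divide the backlog out of the total time */
  total_ns = ktime_to_ns(ktime_sub(ktime_get(), skb->tstamp));
  t_ns = total_ns / (info->queued_ahead + 1);     /* T = total/(K+1) */
  ewma_add(&qdata->tserv_ns_avg, t_ns);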

Under saturation conditions, K+1 will always equal max_enqueued, so I
guess in your algorithm, at the steady state we have

  max_enqueued = K+1 = 2 * NSEC_PER_MSEC / ((K+1) * T)
  (K+1)^2 = 2 * NSEC_PER_MSEC / T
  K+1 = sqrt(2 * NSEC_PER_MSEC / T)

So I think under saturation, you converge to setting the queue to the
square root of the appropriate size?
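
For example (made-up numbers again): if T is really 20 us per
fragment, the appropriate limit is 2 ms / 20 us = 100 frames, but the
fixed point above gives

  K+1 = sqrt(2,000,000 / 20,000) = sqrt(100) = 10

so the queue ends up an order of magnitude too small.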

-- Nathaniel

