Re: [RFC] mac80211: add AQL support for broadcast packets

Felix Fietkau <nbd@xxxxxxxx> writes:

> On 12.02.24 11:56, Toke Høiland-Jørgensen wrote:
>> Johannes Berg <johannes@xxxxxxxxxxxxxxxx> writes:
>> 
>>> On Sat, 2024-02-10 at 17:18 +0100, Felix Fietkau wrote:
>>>> 
>>>> > > +++ b/include/net/cfg80211.h
>>>> > > @@ -3385,6 +3385,7 @@ enum wiphy_params_flags {
>>>> > >  /* The per TXQ device queue limit in airtime */
>>>> > >  #define IEEE80211_DEFAULT_AQL_TXQ_LIMIT_L	5000
>>>> > >  #define IEEE80211_DEFAULT_AQL_TXQ_LIMIT_H	12000
>>>> > > +#define IEEE80211_DEFAULT_AQL_TXQ_LIMIT_BC	50000
>>>> > 
>>>> > How did you arrive at the 50 ms figure for the limit on broadcast
>>>> > traffic? Seems like quite a lot? Did you experiment with different
>>>> > values?
>>>> 
>>>> Whenever a client is connected and in powersave mode, all multicast 
>>>> packets are buffered and sent after the beacon. Because of that I 
>>>> decided to use half of a default beacon interval.
>>>
>>> That makes some sense, I guess.
>> 
>> This implies that we will allow enough data to be queued up in the
>> hardware to spend half the next beacon interval just sending that
>> broadcast data? Isn't that a bit much if the goal is to prevent
>> broadcast from killing the network? What effect did you measure of this
>> patch? :)
>
> I didn't do any real measurements with this patch yet. How much 
> broadcast data is actually sent after the beacon is still up to the 
> driver/hardware, so depending on that, the limit might even be less than 
> 50 ms. I also wanted to be conservative in limiting buffering in order to 
> avoid potential regressions. While 50 ms may seem like much, I believe it 
> is still a significant improvement over the current state, which is 
> unlimited.
>
>> Also, as soon as something is actually transmitted, the kernel will
>> start pushing more data into the HW from the queue in the host. So the
>> HW queue limit shouldn't be set as "this is the maximum that should be
>> transmitted in one go", but rather "this is the minimum time we need for
>> the software stack to catch up and refill the queue before it runs
>> empty". So from that perspective 50ms also seems a bit high?
>
> When broadcast buffering is enabled, the driver/hardware typically 
> prepares the set of packets to be transmitted before the beacon is sent. 
> Any packet not ready by then will be sent in the next round.
> I added the 50 ms limit based on that assumption.

Ah, so even if this is being done in software it's happening in the
driver, so post-TXQ dequeue? OK, in that case I guess it makes sense;
would love to see some numbers, of course, but I guess the debugfs
additions in this patch will make it possible to actually monitor the
queue lengths seen in the wild :)

>>> It does have me wondering though if we should also consider multicast
>>> for airtime fairness in some way?
>> 
>> Yeah, that would make sense. The virtual time-based scheduler that we
>> ended up reverting actually included airtime accounting for the
>> multicast queue as well. I don't recall if there was any problem with
>> that particular part of the change, or if it's just incidental that we
>> got rid of it as part of the revert. But it may be worth revisiting and
>> adding a similar mechanism to the round-robin scheduler...
>
> The round-robin scheduler already has some consideration of multicast - 
> it always puts the multicast queues last in the active_txqs list.

Ah, right. Hmm, not quite clear to me how that works out in terms of
fairness, but it should at least prevent the MC queue from blocking
everything else...

-Toke
