
Re: [ath5k-devel] ath5k: Weird Retransmission Behaviour

On Mon, Dec 6, 2010 at 10:36 AM, Nick Kossifidis <mickflemm@xxxxxxxxx> wrote:
> 2010/12/6 Bruno Randolf <br1@xxxxxxxxxxx>:
>> On Mon December 6 2010 15:30:00 Jonathan Guerin wrote:
>>> Hi,
>>>
>>>
>>> I've been doing some investigation into the behaviour of contention
>>> windows and retransmissions.
>>>
>>> Firstly, I'll just describe the test scenario and setup that I have. I
>>> have 3 Via x86 nodes with Atheros AR5001X+ cards. They are tethered to
>>> each other via coaxial cables, into splitters. They have 20dB of fixed
>>> attenuation applied to each antenna output, plus a programmable
>>> variable attenuator on each link. One node acts as a sender, one as a
>>> receiver, and one simply runs a monitor-mode interface to capture
>>> packet traces. All 3 are running kernel version 2.6.37-rc2. The sender
>>> and receiver are configured as IBSS stations and are tuned to 5.18
>>> GHz.
>>>
>>> Here's a really dodgy ASCII diagram of the setup:
>>>
>>> S-----[variable attenuator]-----R
>>>
>>>
>>>
>>> +------------M-------------------------+
>>>
>>> where S is the Sender node, R is the Receiver node and M is the
>>> Monitoring capture node.
>>>
>>>
>>> Secondly, I have written a program which parses a pcap file captured
>>> on the Monitoring station. It looks for 'chains' of frames with the
>>> same sequence number, where the first frame has the Retry bit set to
>>> false in the header and all subsequent frames have it set to true. On
>>> any deviation from this pattern, the program drops the current chain
>>> without including it in its stats, and looks for the next chain
>>> matching these requirements. For each transmission number (i.e. all
>>> frames which were the first, second, third etc. transmission of a
>>> unique sequence number) it averages the transmission times, where the
>>> transmission time of a frame is the time between the end of that
>>> frame and the end of the previous one. The last transmission number
>>> in each chain is counted as the 'final' transmission.
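>>>
>>> To make that concrete, here is a minimal C sketch of the
>>> chain-tracking core (the radiotap/802.11 header parsing is elided;
>>> struct frame, get_seq(), is_retry() and end_time() are hypothetical
>>> stand-ins for the real accessors in my program):
>>>
>>> struct frame;
>>>
>>> /* Running stats per transmission number (1-based). */
>>> struct tx_stats {
>>>         double sum_us;  /* summed inter-frame times */
>>>         long count;     /* frames seen at this tx number */
>>>         long final;     /* chains which ended at this tx number */
>>> };
>>> static struct tx_stats stats[64];
>>>
>>> /* Commit a completed chain of n transmissions to the stats. */
>>> static void commit(const double *dur, int n)
>>> {
>>>         for (int i = 0; i < n; i++) {
>>>                 stats[i + 1].sum_us += dur[i];
>>>                 stats[i + 1].count++;
>>>         }
>>>         if (n > 0)
>>>                 stats[n].final++;       /* chain ended here */
>>> }
>>>
>>> void on_frame(const struct frame *f, const struct frame *prev)
>>> {
>>>         static double dur[64];
>>>         static int seq = -1, n;
>>>
>>>         if (!is_retry(f)) {             /* Retry bit false: new chain */
>>>                 commit(dur, n);         /* previous chain completed */
>>>                 seq = get_seq(f);
>>>                 n = 0;
>>>         } else if (seq < 0 || get_seq(f) != seq) {
>>>                 seq = -1;               /* deviation: drop this chain */
>>>                 n = 0;
>>>                 return;
>>>         }
>>>         if (n < 64)
>>>                 dur[n++] = end_time(f) - end_time(prev);
>>> }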
>>>
>>> Finally, the link is loaded using a saturated UDP flow, with the
>>> data rate fixed to either 54M or 36M, as specified per result set.
>>> The output is attached below.
>>>
>>> The output describes the fixed link data rate, the variable
>>> attenuator's value, the delivery ratio, and the number of transmitted
>>> packets/s. I've added a discussion per result set. Each line gives
>>> the transmission number, the average transmission time for that
>>> number, the total number of transmissions seen at that number, the
>>> number of chains which ended at that number (i.e. their final
>>> transmission - equivalent to the retransmission value from the
>>> Radiotap header + 1), and the average expected transmission time for
>>> that transmission number across all chains. The expected time is
>>> calculated using the airtime calculations from the 802.11a standard,
>>> assuming receipt of an ACK frame (28 us of airtime) after a SIFS
>>> (16 us). If the transmission did not receive an ACK, a normal ACK
>>> timeout is 50 us, but ath5k appears to have this set to 25 us, so the
>>> value shouldn't be too far off what to expect either way.
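>>>
>>> For reference, the expected values reduce to the following
>>> calculation (a sketch; it assumes a ~1534-byte MPDU, i.e. the
>>> 1470-byte iperf payload plus UDP/IP/LLC/MAC/FCS overhead, and an
>>> ACK sent at 24M):
>>>
>>> /* 802.11a OFDM airtime in us: 16 us preamble + 4 us SIGNAL +
>>>  * 4 us per OFDM symbol; ndbps = data bits per symbol
>>>  * (216 at 54M, 144 at 36M, 96 at 24M). */
>>> static int airtime_us(int bytes, int ndbps)
>>> {
>>>         int bits = 16 + 8 * bytes + 6;  /* SERVICE + PSDU + tail */
>>>         return 20 + 4 * ((bits + ndbps - 1) / ndbps);
>>> }
>>>
>>> /* Expected time for transmission number txno (1-based):
>>>  * DIFS + mean backoff + data + SIFS + ACK, CW doubling per retry. */
>>> static double expected_us(int txno, int bytes, int ndbps)
>>> {
>>>         int cw = 15;                            /* aCWmin */
>>>
>>>         for (int i = 1; i < txno && cw < 1023; i++)
>>>                 cw = cw * 2 + 1;                /* up to aCWmax */
>>>         return 34 + (cw / 2.0) * 9              /* DIFS + mean backoff */
>>>                 + airtime_us(bytes, ndbps)      /* data frame */
>>>                 + 16 + airtime_us(14, 96);      /* SIFS + ACK at 24M */
>>> }
>>>
>>> e.g. expected_us(1, 1534, 144) = 509.5, which lines up with the 509
>>> shown for the first transmission at 36M below.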
>>>
>>> The header of each result gives the rate it was fixed at, as well as
>>> the variable attenuation being added. The link also has a fixed 40dB
>>> of attenuation, both to protect the cards and to give the variable
>>> attenuator the range necessary to control link quality.
>>>
>>> ==> iperf_33M_rate_36M_att_1dB.pcap.txt <== (good link, 100% delivery)
>>> Average time per TX No:
>>> TXNo  Avg           No     Final  ExpectedAvg
>>> 1     477.604980    10463  10462  509
>>> Overall average: 477.604980
>>>
>>> [Discussion:] Nothing, appears normal.
>>>
>>>
>>> ==> iperf_33M_rate_36M_att_18dB.pcap.txt <== (lossy link, but still
>>> 100% delivery)
>>> Average time per TX No:
>>> TXNo  Avg           No     Final  ExpectedAvg
>>> 1     476.966766    9808   8138   509
>>> 2     550.320496    1663   1403   581
>>> 3     697.552917    255    218    725
>>> 4     1028.756714   37     30     1013
>>> 5     1603.428589   7      7      1589
>>> Overall average: 494.514618
>>>
>>> [Discussion:] Nothing, appears normal. Contention window appears to
>>> double normally.
>>>
>>> ==> iperf_33M_rate_36M_att_19dB.pcap.txt <== (lossy link, but still
>>> 100% delivery)
>>> Average time per TX No:
>>> TXNo  Avg           No     Final  ExpectedAvg
>>> 1     477.510437    14893  8653   509
>>> 2     546.149048    6205   3624   581
>>> 3     692.270203    2561   1552   725
>>> 4     980.565857    1002   596    1013
>>> 5     1542.079956   400    252    1589
>>> 6     2758.693848   147    89     2741
>>> 7     4971.500000   56     32     5045
>>> 8     4689.043457   23     15     5045
>>> 9     4487.856934   7      3      5045
>>> 10    442.250000    4      3      5045
>>> 11    488.000000    1      1      5045
>>> Overall average: 580.976807
>>>
>>> [Discussion:] The contention window appears to double until a
>>> plateau from 7 through 9. Weirdly, it appears to drop again from 10,
>>> but there are too few frames to draw a conclusion.
>>>
>>> ==> iperf_33M_rate_36M_att_21dB.pcap.txt <== (lossy link, < 1% delivery)
>>> TXNo  Avg           No     Final  ExpectedAvg
>>> 1     485.390198    1940   3      509
>>> 2     479.113434    1922   2      581
>>> 3     479.681824    1914   0      725
>>> 4     485.083038    1903   1      1013
>>> 5     492.088135    1895   4      1589
>>> 6     508.322510    1876   1      2741
>>> 7     524.697876    1870   1      5045
>>> 8     543.054382    1857   0      5045
>>> 9     522.970703    1842   0      5045
>>> 10    478.204132    1837   0      5045
>>> 11    476.520782    1828   0      5045
>>> 12    477.531342    1818   0      5045
>>> 13    476.743652    1810   0      5045
>>> 14    478.936554    1797   0      5045
>>> 15    480.699097    1788   0      5045
>>> 16    482.734314    1784   0      5045
>>> 17    491.608459    1775   0      5045
>>> 18    497.458984    1767   1      5045
>>> 19    495.067932    1752   7      5045
>>> 20    478.102417    1738   295    5045
>>> 21    475.128845    1436   1402   5045
>>> 22    492.692322    26     0      5045
>>> 23    471.576935    26     0      5045
>>> 24    466.884613    26     0      5045
>>> 25    476.269226    26     0      5045
>>> 26    462.192322    26     0      5045
>>> 27    480.961548    26     1      5045
>>> 28    463.600006    25     24     5045
>>> Overall average: 491.068359
>>>
>>> [Discussion:] Contention does not appear to increase, and the number
>>> of transmissions per frame is very large. This behaviour is
>>> replicated in the 54M scenario when the link is extremely lossy.
>>>
>>> ==> iperf_33M_rate_54M_att_1dB.pcap.txt <== (good link, 2400 packets/s)
>>> Average time per TX No:
>>> TXNo  Avg           No     Final  ExpectedAvg
>>> 1     365.551849    23957  23935  393
>>> 2     409.571442    21     21     465
>>> Overall average: 365.590424
>>>
>>> [Discussion:] Appears relatively normal.
>>>
>>> ==> iperf_33M_rate_54M_att_10dB.pcap.txt <== (lossy link, but still
>>> 100% delivery, 1500 packets/s)
>>> Average time per TX No:
>>> TXNo  Avg           No     Final  ExpectedAvg
>>> 1     364.501190    10134  5915   393
>>> 2     434.138000    4196   2461   465
>>> 3     579.482300    1721   1036   609
>>> 4     837.005859    682    397    897
>>> 5     1365.279175   283    155    1473
>>> 6     2572.007812   128    81     2625
>>> 7     4905.195801   46     27     4929
>>> 8     4985.947266   19     12     4929
>>> 9     4627.285645   7      4      4929
>>> 10    366.000000    3      1      4929
>>> 11    335.500000    2      2      4929
>>> Overall average: 473.477020
>>>
>>> [Discussion:] Appears fine until transmission 10, which appears to
>>> drop the contention window back to an equivalent first-transmission
>>> value, but there are not enough frames at this point to draw a
>>> conclusion.
>>>
>>> ==> iperf_33M_rate_54M_att_11dB.pcap.txt <== (lossy link, but still
>>> 100% delivery, 680 packets/s)
>>> Average time per TX No:
>>> TXNo  Avg           No     Final  ExpectedAvg
>>> 1     362.082825    2149   539    393
>>> 2     434.672485    1606   368    465
>>> 3     582.795288    1231   307    609
>>> 4     820.347107    919    237    897
>>> 5     1424.753296   673    194    1473
>>> 6     2626.403320   466    143    2625
>>> 7     4734.233887   308    83     4929
>>> 8     4830.244141   217    65     4929
>>> 9     4449.702637   148    33     4929
>>> 10    360.114044    114    36     4929
>>> 11    366.000000    78     20     4929
>>> 12    460.655182    58     20     4929
>>> 13    544.184204    38     9      4929
>>> 14    893.965515    29     7      4929
>>> 15    1361.409058   22     8      4929
>>> 16    2675.285645   14     2      4929
>>> 17    4239.500000   12     5      4929
>>> 18    3198.142822   7      2      4929
>>> 19    5111.799805   5      3      4929
>>> 20    1403.000000   2      1      4929
>>> Overall average: 1063.129883
>>>
>>> [Discussion:] Everything appears fine until, once again,
>>> transmission 10, when the contention window appears to 'restart' -
>>> it climbs steadily until 17. After this point, there are not enough
>>> frames to draw any conclusions.
>>>
>>> ==> iperf_33M_rate_54M_att_12dB.pcap.txt <== (lossy link, 6% delivery,
>>> 400 packets/s)
>>> Average time per TX No:
>>> TXNo  Avg           No     Final  ExpectedAvg
>>> 1     360.460724    4482   14     393
>>> 2     366.068481    4453   16     465
>>> 3     360.871735    4413   13     609
>>> 4     361.535553    4386   18     897
>>> 5     367.526062    4357   60     1473
>>> 6     360.003967    4283   3839   2625
>>> 7     361.778046    419    416    4929
>>> Overall average: 362.732910
>>>
>>> [Discussion:] This exhibits the same problem as the extremely lossy
>>> 36M link - the contention window does not appear to rise. Even at
>>> transmission 6, where there are enough frames to draw a solid
>>> conclusion, the average transmission time (360) is way below the
>>> expected average (2625).
>>> ==> END OF OUTPUT <==
>>>
>>> The question here is: why does ath5k/mac80211 send out so many
>>> transmissions, and why does it vary so much based on link quality?
>>> Additionally, why does it appear to 'reset' the contention window
>>> after 9 retransmissions of a frame?
>>>
>>> Cheers,
>>>
>>> Jonathan
>>
>> Hi Jonathan!
>>
>> This is a very interesting setup and test. I guess nobody has looked
>> so closely yet... I think this is not necessarily ath5k-related, but
>> may be a bug in mac80211 or minstrel - not sure yet, of course...
>>
>> It's normal that the CW is reset once the retry limits are reached;
>> this is what the standard says:
>>
>> "The CW shall be reset to aCWmin after every successful attempt to transmit an
>> MPDU or MMPDU, when SLRC reaches dot11LongRetryLimit, or when SSRC reaches
>> dot11ShortRetryLimit." (802.11-2007 p261)
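>>
>> In other words, the CW evolution per transmission attempt looks
>> roughly like this (a minimal sketch; aCWmin 15 and aCWmax 1023 for
>> 802.11a):
>>
>> static int cw = 15;                     /* aCWmin */
>>
>> static void on_tx_fail(int retry_count, int retry_limit)
>> {
>>         if (retry_count >= retry_limit)
>>                 cw = 15;                /* SSRC/SLRC hit limit: reset */
>>         else if (cw < 1023)             /* aCWmax */
>>                 cw = cw * 2 + 1;        /* 15, 31, 63, ..., 1023 */
>> }
>>
>> static void on_tx_success(void)
>> {
>>         cw = 15;        /* reset after every successful MPDU/MMPDU */
>> }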
>>
>> But it seems weird that there are so many retransmissions. The
>> default maximum number of retransmissions should be 7 for short
>> frames and 4 for long frames (dot11[Short|Long]RetryLimit), and this
>> is what is set as the default in mac80211
>> (local->hw.conf.short_frame_max_tx_count). It seems we are getting
>> many retransmissions from minstrel, so I added some debug prints:
>>
>
> When ath5k doesn't get retry limits from above, it uses the following
> defaults on the DCU. For now I don't think we use
> local->hw.conf.short_frame_max_tx_count for that, so the default is
> ah_limit_tx_retries (AR5K_INIT_TX_RETRY), but that seems wrong and we
> should fix it...
>
> /* Tx retry limits */
> #define AR5K_INIT_SH_RETRY                      10
> #define AR5K_INIT_LG_RETRY                      AR5K_INIT_SH_RETRY
> /* For station mode */
> #define AR5K_INIT_SSH_RETRY                     32
> #define AR5K_INIT_SLG_RETRY                     AR5K_INIT_SSH_RETRY
> #define AR5K_INIT_TX_RETRY                      10
>
>> *** txdesc tries 3
>> *** mrr 0 tries 3 rate 11
>> *** mrr 1 tries 3 rate 11
>> *** mrr 2 tries 3 rate 11
>>
>> This seems to be the normal case and that would already result in 12
>> transmissions.
>>
>> Another thing that strikes me here is: why use multi-rate retry if
>> the rate is all the same? (Ignore the actual value of the rate; this
>> is the HW rate code.)
>>
>> Other examples:
>>
>> *** txdesc tries 2
>> *** mrr 0 tries 9 rate 12
>> *** mrr 1 tries 2 rate 13
>> *** mrr 2 tries 3 rate 11
>>
>> = 16 transmissions in sum.
>>
>> *** txdesc tries 9
>> *** mrr 0 tries 3 rate 11
>> *** mrr 1 tries 9 rate 8
>> *** mrr 2 tries 3 rate 11
>>
>> = 24 transmissions in sum. Again, rate[1] and rate[3] are the same,
>> so why bother setting it up twice?
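>>
>> Just to spell out the arithmetic: the descriptor carries up to four
>> rate/tries pairs which the hardware walks in order, so the worst
>> case is the sum over all of them (sketch):
>>
>> struct rc { int rate; int tries; };
>>
>> static int total_transmissions(const struct rc r[4])
>> {
>>         int sum = 0;
>>
>>         for (int i = 0; i < 4; i++)
>>                 sum += r[i].tries;      /* e.g. 9 + 3 + 9 + 3 = 24 */
>>         return sum;
>> }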
>>
>> bruno
>> _______________________________________________
>> ath5k-devel mailing list
>> ath5k-devel@xxxxxxxxxxxxxxx
>> https://lists.ath5k.org/mailman/listinfo/ath5k-devel
>>
>
> Also on base.c
>
> 2408         /* set up multi-rate retry capabilities */
> 2409         if (sc->ah->ah_version == AR5K_AR5212) {
> 2410                 hw->max_rates = 4;
> 2411                 hw->max_rate_tries = 11;
> 2412         }
>
>
>
> --
> GPG ID: 0xD21DB2DB
> As you read this post global entropy rises. Have Fun ;-)
> Nick
>

You mean something like the attached patch?

- Sedat -

Attachment: ath5k-Set-AR5K_INIT_TX_RETRY-and-max_rate_tries-to-3.patch
Description: plain/text

