Re: [Make-wifi-fast] debugging TCP stalls on high-speed wifi

I’m currently adding ACK thinning to Linux’s GRO code. Quite a simple addition given the way that code works.
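The core decision behind that kind of thinning is roughly the following. This is only a minimal user-space sketch of the idea, not the actual GRO change: the struct layout, field names and checks are made up for illustration. An older pure ACK can be dropped when a newer ACK for the same flow acknowledges at least as much data and the older one carries no information of its own (no payload, no SACK blocks, no ECN/URG bits, no window change).

/*
 * Sketch only: decide whether an older pure ACK is superseded by a
 * newer one. Fields and thresholds are illustrative, not kernel code.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct pure_ack {
	uint32_t ack_seq;     /* cumulative ACK number */
	uint16_t window;      /* advertised receive window */
	bool has_sack;        /* carries SACK option */
	bool has_payload;     /* not a "pure" ACK if it carries data */
	bool ece_or_urg;      /* ECN echo / urgent: keep these */
};

/* sequence numbers wrap, so compare with signed arithmetic */
static bool ack_after_or_equal(uint32_t a, uint32_t b)
{
	return (int32_t)(a - b) >= 0;
}

static bool can_drop_older(const struct pure_ack *older,
			   const struct pure_ack *newer)
{
	if (older->has_payload || older->has_sack || older->ece_or_urg)
		return false;           /* older ACK carries real info */
	if (newer->window != older->window)
		return false;           /* window changed: keep both (conservative) */
	return ack_after_or_equal(newer->ack_seq, older->ack_seq);
}

int main(void)
{
	struct pure_ack a = { .ack_seq = 1000, .window = 512 };
	struct pure_ack b = { .ack_seq = 4000, .window = 512 };

	printf("drop older ack: %s\n", can_drop_older(&a, &b) ? "yes" : "no");
	return 0;
}

In GRO terms the same test would run while a NAPI batch is being coalesced, so only the newest superseding ACK of a flow survives the batch.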

Simon


> On Dec 12, 2019, at 3:42 PM, Dave Taht <dave.taht@xxxxxxxxx> wrote:
> 
> On Thu, Dec 12, 2019 at 1:12 PM Johannes Berg <johannes@xxxxxxxxxxxxxxxx> wrote:
>> 
>> Hi Eric,
>> 
>> Thanks for looking :)
>> 
>>>> I'm not sure how to do headers-only, but I guess -s100 will work.
>>>> 
>>>> https://johannes.sipsolutions.net/files/he-tcp.pcap.xz
>>>> 
>>> 
>>> Lack of GRO on receiver is probably what is killing performance,
>>> both for receiver (generating gazillions of acks) and sender
>>> (to process all these acks)
>> Yes, I'm aware of this, to some extent. And I'm not saying we should see
>> even close to 1800 Mbps like we have with UDP...
>> 
>> Mind you, the biggest thing that kills performance with many ACKs isn't
>> the load on the system - the sender system is only moderately loaded at
>> ~20-25% of a single core with TSO, and around double that without TSO.
>> The thing that kills performance is eating up all the medium time with
>> small non-aggregated packets, due to the half-duplex nature of WiFi.
>> I know you know, but in case somebody else is reading along :-)
> 
> I'm paying attention, but I pay attention faster if you cc make-wifi-fast.
> 
> If you captured the air you'd probably see the sender winning the
> election for airtime 2 or more times in a row; it's random and often
> dependent on a variety of factors.
> 
> Most WiFi is *not* "half" duplex in the sense of ping-ponging between
> send and receive.
> 
>> 
>> But unless you somehow think processing the (many) ACKs on the sender
>> will cause it to stop transmitting, or something like that, I don't
>> think I should be seeing what I described earlier: we sometimes (have
>> to?) reclaim the entire transmit queue before TCP starts pushing data
>> again. That's less than 2MB split across at least two TCP streams; I
>> don't see why we should have to get to 0 (which takes about 7ms) before
>> more packets come in from TCP.
> 
> Perhaps having a budget for ack processing within a 1ms window?
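(A minimal sketch of one way to read that suggestion; the 1 ms figure is taken from the line above, and process_one_ack() is a hypothetical placeholder, not an existing kernel function:)

#include <stdio.h>
#include <time.h>

#define ACK_BUDGET_NS (1 * 1000 * 1000)		/* 1 ms budget per poll */

static long long now_ns(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

static void process_one_ack(int i)
{
	(void)i;	/* placeholder: real ACK handling would go here */
}

int main(void)
{
	int total_acks = 100000, done = 0;
	long long start = now_ns();

	while (done < total_acks) {
		process_one_ack(done++);
		if (now_ns() - start > ACK_BUDGET_NS)
			break;	/* budget spent; the rest would be deferred */
	}
	printf("processed %d of %d acks within the budget\n", done, total_acks);
	return 0;
}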
> 
>> Or put another way - if I free, say, 400kB worth of SKBs, what could be
>> the reason we don't see more packets sent out of the TCP stack within
>> the next few ms or so? I guess I have to correlate this somehow with the ACKs
>> so I know how much data is outstanding for ACKs. (*)
> 
> yes.
> 
> It would be interesting to repeat this test in ht20 mode, and/or using
> 
> flent --socket-stats --step-size=.04 --te=upload_streams=2 -t
> whatever_variant_of_test tcp_nup
> 
> That will capture some of the tcp stats for you.
> 
>> 
>> The sk_pacing_shift is set to 7, btw, which should give us 8ms of
>> outstanding data. For now in this setup that's enough(**), and indeed
>> bumping the limit up (setting sk_pacing_shift to say 5) doesn't change
>> anything. So I think this part we actually solved - I get basically the
>> same performance and behaviour with two streams (needed due to GBit LAN
>> on the other side) as with 20 streams.
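(For reference, the arithmetic behind those numbers: the TCP small-queues limit scales as pacing_rate >> sk_pacing_shift, so shift 7 allows roughly 1/128 s ≈ 8 ms worth of data below the stack, and shift 5 roughly 31 ms. A rough illustration, using the ~1800 Mbit/s UDP figure above only as an assumed pacing rate:)

#include <stdio.h>

int main(void)
{
	/* assumed pacing rate: the ~1800 Mbit/s UDP number above, in bytes/s */
	unsigned long long rate = 1800ULL * 1000 * 1000 / 8;

	for (int shift = 5; shift <= 8; shift++) {
		unsigned long long limit = rate >> shift;	/* queued bytes allowed */
		printf("sk_pacing_shift %d: ~%llu kB queued (~%.1f ms at that rate)\n",
		       shift, limit / 1000, 1000.0 / (1 << shift));
	}
	return 0;
}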
>> 
>> 
>>> I had a plan to enable ACK compression, as I did for SACK
>>> in commit
>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=5d9f4262b7ea41ca9981cc790e37cca6e37c789e
>>> 
>>> But I have not done it yet.
>>> It is a pity because this would tremendously help wifi I am sure.
>> 
>> Nice :-)
>> 
>> But that is something the *receiver* would have to do.
> 
> Well, it is certainly feasible to thin ACKs in the driver as we did in
> cake. More general, but more CPU intensive. I'm happily just awaiting
> Eric's work instead.
> 
> One thing Comcast inadvertently does to most flows is remark them CS1,
> which tosses big data into the BK queue and ACKs into the BE queue. It
> actually helps sometimes.
> 
>> 
>> The dirty secret here is that we're getting close to 1700 Mbps TCP with
>> Windows in place of Linux in the setup, with the same receiver on the
>> other end (which is actually a single Linux machine with two GBit
>> network connections to the AP). So if we had this I'm sure it'd increase
>> performance, but it still wouldn't explain why we're so much slower than
>> Windows :-)
>> 
>> Now, I'm certainly not saying that TCP behaviour is the only reason for
>> the difference; we already found an issue, for example, where due to a
>> small Windows driver bug some packet extension was always used, and the
>> AP is also buggy in that it needs the extension but didn't request it
>> ... so the two bugs cancelled each other out and things worked well, but
>> our Linux driver believed the AP ... :) Certainly there can be more
>> things like that still; I just started on the TCP side and ran into the
>> queueing behaviour that I cannot explain.
>> 
>> 
>> In any case, I'll try to dig deeper into the TCP stack to understand the
>> reason for this transmit behaviour.
>> 
>> Thanks,
>> johannes
>> 
>> 
>> (*) Hmm. Now I have another idea. Maybe we have some kind of problem
>> with the medium access configuration, and we transmit all this data
>> without the AP having a chance to send back all the ACKs? Too bad I
>> can't put an air sniffer into the setup - it's a conducted (cabled) setup.
> 
> see above
>> 
>> 
>> (**) As another aside to this, the next generation HW after this will
>> have 256 frames in a block-ack, so that means instead of up to 64 (we
>> only use 63 for internal reasons) frames aggregated together we'll be
>> able to aggregate 256 (or maybe, again, only 255?).
> 
> My fervent wish is for future standards to somehow let us mark every
> frame we can as not needing a retransmit - I've lost track of what ax
> can do there - and for block-ack retries to give up far sooner.
> 
> You can safely drop all but the last three ACKs in a flow, and the
> txop itself provides a suitable clock.
> 
> And, ya know, releasing packets out of order doesn't hurt as much as
> it used to, with RACK.
>> Each one of those
>> frames may be an A-MSDU with ~11k content though (only 8k in the setup I
>> have here right now), which means we can get a LOT of data into a single
>> PPDU ...
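(Rough arithmetic on those figures, nothing more: 256 frames x ~8 KB is about 2 MB covered by a single block-ack window in the current setup, and 256 x ~11 KB is roughly 2.8 MB at the full A-MSDU size.)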
> 
> Just wearing my usual hat, I would prefer to optimize for service
> time, not bandwidth, in the future, using smaller txops with more
> data in them rather than the biggest txops possible.
> 
> If you constrain your max txop to 2ms in this test, you will see TCP
> in slow start ramp up faster, and the AP scale to way more devices,
> with way less jitter and fewer retries. Most flows never get out of
> slow start.
> 
>> ... we'll probably have to bump the sk_pacing_shift to be able to
>> fill that with a single TCP stream, though since we run all our
>> performance numbers with many streams, maybe we should just leave it :)
> 
> Please. Optimizing for single flow performance is an academic's game.
> 
>> 
>> 
> 
> 
> -- 
> Make Music, Not War
> 
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-435-0729
> _______________________________________________
> Make-wifi-fast mailing list
> Make-wifi-fast@xxxxxxxxxxxxxxxxxxxxx
> https://lists.bufferbloat.net/listinfo/make-wifi-fast



