On Thu, Sep 6, 2018 at 3:18 AM Toke Høiland-Jørgensen <toke@xxxxxxx> wrote:
>
> Grant Grundler <grundler@xxxxxxxxxx> writes:
>
> >> And, well, Grant's data is from a single test in a noisy
> >> environment where the time series graph shows that throughput is all over
> >> the place for the duration of the test; so it's hard to draw solid
> >> conclusions from (for instance, for the 5-stream test, the average
> >> throughput for 6 is 331 and 379 Mbps for the two repetitions, and for 7
> >> it's 326 and 371 Mbps). Unfortunately I don't have the same hardware
> >> used in this test, so I can't go verify it myself; so the only thing I
> >> can do is grumble about it here... :)
> >
> > It's a fair complaint and I agree with it. My counter argument is that
> > the opposite is true too: most ideal benchmarks don't measure what most
> > users see. While the data wgong provided is noisier than I would
> > like, my overall "confidence" in the "conclusion" I offered is still
> > positive.
>
> Right. I guess I would just prefer a slightly more comprehensive
> evaluation to base a 4x increase in buffer size on...

Kalle, is this why you didn't accept this patch? Are there other reasons?

Toke, what else would you like to see evaluated? When "benchmarking"
technologies, I generally want to see three things measured: throughput,
latency, and CPU utilization. I think we've covered those three reasonably.

What does a "4x increase in memory" mean here? Wen, how much more memory
does this cause ath10k to use?

If a "4x increase in memory" means using 1MB instead of 256KB, I'm not
going to worry about that on a system with 2GB-16GB of RAM if it doubles
WiFi throughput for a given workload. I expect routers with 128-256MB of
RAM would make the same tradeoff, assuming they don't have another
RAM-demanding workload.

cheers,
grant
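
P.S. For concreteness, the back-of-the-envelope math I have in mind looks
something like the sketch below. The ring size and per-buffer size are
made-up example values, not the actual ath10k numbers (which is exactly
what I'm asking Wen to confirm); the point is just that a 4x bump on a
~256KB footprint lands around 1MB.

    /* Rough sketch of the buffer-memory math above. The ring size and
     * per-buffer size are assumed example values, NOT taken from the
     * ath10k driver. */
    #include <stdio.h>

    int main(void)
    {
            unsigned int num_bufs = 128;   /* assumed ring entries */
            unsigned int buf_size = 2048;  /* assumed bytes per buffer */

            unsigned int base = num_bufs * buf_size;  /* 256 KB */
            unsigned int scaled = 4 * base;           /* 1 MB */

            printf("baseline: %u KB, 4x: %u KB\n",
                   base / 1024, scaled / 1024);
            return 0;
    }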