On 03/19/2013 01:51 PM, David Henningsson wrote:
> I spent some of yesterday and today investigating our different buffer
> modes to see how they behave in practice, i.e. when PulseAudio asks for
> more data and what the resulting latency will be.
>
> Our buffer modes are
> * "adjust latency mode" / PA_STREAM_ADJUST_LATENCY,
> * "traditional mode" / PA_STREAM_NOFLAGS,
> * "early requests mode" / PA_STREAM_EARLY_REQUESTS.
>
> All tests were done on my onboard sound card, which has a maximum hardware
> buffer of 64 kB; this translates to 371 ms in the chosen sample format.
>
> When reading this, allow a margin of 1-2 ms for various system latencies,
> including the turn-around time of the "Stream started" notification, and
> also remember the default process_msec of 20 ms.

I've now done the same for recording. Here are the results. The same caveats
apply (the latency being < 0 ms is because the stream actually starts before
I get the STREAM_READY state callback).

First test - high latency, i.e. everything at -1:

PA_STREAM_ADJUST_LATENCY:
356.97: Reading 351.84 ms, latency 350.76 to -1.07 ms
707.58: Reading 19.68 ms, latency 349.54 to 329.85 ms
708.57: Reading 332.15 ms, latency 330.85 to -1.31 ms
1059.49: Reading 39.37 ms, latency 349.61 to 310.25 ms
1060.32: Reading 312.45 ms, latency 311.08 to -1.36 ms

PA_STREAM_NOFLAGS:
6.63: Stream started
356.90: Reading 351.81 ms, latency 350.27 to -1.54 ms
707.44: Reading 19.70 ms, latency 349.00 to 329.29 ms
708.41: Reading 332.11 ms, latency 330.26 to -1.84 ms
1059.33: Reading 39.41 ms, latency 349.07 to 309.66 ms
1060.20: Reading 312.43 ms, latency 310.53 to -1.89 ms

PA_STREAM_EARLY_REQUESTS:
7.48: Stream started
356.96: Reading 351.81 ms, latency 349.48 to -2.33 ms
707.51: Reading 19.70 ms, latency 348.22 to 328.52 ms
708.49: Reading 332.11 ms, latency 329.50 to -2.61 ms
1059.44: Reading 39.41 ms, latency 348.34 to 308.93 ms
1060.31: Reading 312.45 ms, latency 309.80 to -2.65 ms

All modes behave the same: every 371 - 20 ms = 351 ms we get a new block of
data. The same weird split as seen with playback is seen here too.

Second test - medium latency, fragsize at 200 ms:

PA_STREAM_ADJUST_LATENCY:
9.46: Stream started
84.59: Reading 80.32 ms, latency 75.13 to -5.19 ms
164.68: Reading 80.34 ms, latency 74.90 to -5.44 ms
245.00: Reading 80.32 ms, latency 74.88 to -5.44 ms
325.29: Reading 80.29 ms, latency 74.85 to -5.44 ms

PA_STREAM_NOFLAGS:
8.84: Stream started
356.98: Reading 200.00 ms, latency 348.14 to 148.14 ms
357.21: Reading 151.81 ms, latency 148.37 to -3.44 ms
707.54: Reading 19.70 ms, latency 346.89 to 327.18 ms
708.54: Reading 200.00 ms, latency 328.18 to 128.18 ms
708.69: Reading 132.13 ms, latency 128.33 to -3.80 ms

PA_STREAM_EARLY_REQUESTS:
7.46: Stream started
184.95: Reading 180.32 ms, latency 177.49 to -2.82 ms
365.04: Reading 180.34 ms, latency 177.26 to -3.08 ms
544.88: Reading 10.86 ms, latency 176.76 to 165.90 ms
545.25: Reading 169.43 ms, latency 166.27 to -3.16 ms
725.63: Reading 180.29 ms, latency 177.22 to -3.07 ms

Here the results are a bit surprising. With PA_STREAM_ADJUST_LATENCY we get a
new block every 200/2 - 20 ms, with PA_STREAM_NOFLAGS we get a new block every
371 - 20 ms, and with PA_STREAM_EARLY_REQUESTS we get a new block every
200 - 20 ms.

For PA_STREAM_ADJUST_LATENCY there is some idea of spending half of the time
in the hw buffer and half in the client buffer, but IMO the reasoning is a bit
inaccurate: the data is sent to the client as soon as a block is available.
Does PulseAudio expect the client to be sporadic in when it checks for new
packets?

PA_STREAM_NOFLAGS does not adjust the latency, so you just get a chunk of
blocks of 200 ms each every 351 ms. One can wonder: what's the point of
supplying a fragsize at all in this case?

PA_STREAM_EARLY_REQUESTS seems to be the least surprising one, with
200 - 20 ms per fragment.
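For reference, here is roughly how fragsize and the mode flag fit together on
the client side when connecting a record stream. This is not the actual test
program, just a minimal sketch of the libpulse API; the stream name, the
callback names and the lack of error handling are for illustration only:

#include <stdio.h>
#include <pulse/pulseaudio.h>

static const pa_sample_spec ss = {
    .format = PA_SAMPLE_S16LE,
    .rate = 44100,
    .channels = 2
};

/* Read callback: consume whatever PulseAudio hands us and log the size of
   the block together with the latency the stream currently reports. */
static void read_cb(pa_stream *s, size_t nbytes, void *userdata) {
    const void *data;
    size_t len;
    pa_usec_t latency;
    int negative = 0;

    if (pa_stream_peek(s, &data, &len) < 0 || len == 0)
        return;
    if (pa_stream_get_latency(s, &latency, &negative) == 0)
        printf("Reading %.2f ms, latency %s%.2f ms\n",
               pa_bytes_to_usec(len, &ss) / 1000.0,
               negative ? "-" : "", latency / 1000.0);
    pa_stream_drop(s);
}

/* Connect a record stream with fragsize = 200 ms in one of the three
   buffer modes; everything else is left at -1, i.e. server default. */
static pa_stream *start_recording(pa_context *ctx, pa_stream_flags_t mode) {
    pa_buffer_attr attr;
    pa_stream *s;

    attr.maxlength = (uint32_t) -1;
    attr.tlength   = (uint32_t) -1;
    attr.prebuf    = (uint32_t) -1;
    attr.minreq    = (uint32_t) -1;
    attr.fragsize  = pa_usec_to_bytes(200 * PA_USEC_PER_MSEC, &ss);

    s = pa_stream_new(ctx, "latency test", &ss, NULL);
    pa_stream_set_read_callback(s, read_cb, NULL);

    /* mode is PA_STREAM_ADJUST_LATENCY, PA_STREAM_NOFLAGS or
       PA_STREAM_EARLY_REQUESTS -- the three cases compared above. */
    pa_stream_connect_record(s, NULL, &attr, mode);
    return s;
}

For the first test all five fields are simply (uint32_t) -1, and for the
third test the fragsize line uses 10 * PA_USEC_PER_MSEC instead of 200.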
Third test - low latency, fragsize at 10 ms. I don't think it makes sense to
set maxlength, as there is no auto-adjustment of fragsize like there is of
tlength.

PA_STREAM_ADJUST_LATENCY:
8.56: Stream started
8.94: Reading 3.04 ms, latency 0.38 to -2.66 ms
9.15: Reading 1.47 ms, latency -2.45 to -3.92 ms
10.96: Reading 2.79 ms, latency -2.12 to -4.91 ms
13.74: Reading 2.79 ms, latency -2.12 to -4.91 ms

PA_STREAM_NOFLAGS:
8.18: Stream started
357.00: Reading 10.00 ms, latency 348.82 to 338.82 ms
357.19: Reading 10.00 ms, latency 339.01 to 329.01 ms
357.31: Reading 10.00 ms, latency 329.13 to 319.13 ms
357.41: Reading 10.00 ms, latency 319.23 to 309.23 ms
357.52: Reading 10.00 ms, latency 309.34 to 299.34 ms

PA_STREAM_EARLY_REQUESTS:
8.41: Stream started
8.77: Reading 2.52 ms, latency 0.36 to -2.16 ms
11.58: Reading 5.31 ms, latency 0.65 to -4.65 ms
16.82: Reading 5.26 ms, latency 0.59 to -4.67 ms
22.05: Reading 5.26 ms, latency 0.56 to -4.70 ms
27.34: Reading 5.24 ms, latency 0.58 to -4.66 ms

Results here are similar to those of the medium latency scenario:
PA_STREAM_ADJUST_LATENCY still provides much smaller packets than one would
expect, PA_STREAM_NOFLAGS seems almost ridiculous, spewing out 35 packets of
10 ms each at the same time, and PA_STREAM_EARLY_REQUESTS this time provides
5 ms packets.

Before looking more deeply into why things are the way they are, what is your
opinion about these numbers? I'm hesitant to change something that people
might depend on, but it would be good if we had a mode that actually sent a
fragsize ms packet every fragsize ms. I would spontaneously say that the
PA_STREAM_NOFLAGS mode could/should be modified to do this.

-- 
David Henningsson, Canonical Ltd.
https://launchpad.net/~diwic