On Wed, 2015-02-25 at 19:43 +0100, Georg Chini wrote:
> Now takes the difference of the timestamps when source and sink latency
> were captured into account.
>
> For now, for logging purposes only.
>
> Even though the timestamp difference is correctly taken into account for
> the case when the nominal and actual rates match, care is taken to
> minimize this difference. For this purpose, the order of snapshotting
> the sink and the source is swapped, because the sink snapshot usually
> takes longer than the source snapshot.
>
> Note: for both webcam -> HDA and bluetooth a2dp -> HDA scenarios with
> 200 ms latency, the old and the new method give results that agree within
> 0.5 ms. So, while theoretically an improvement, on my hardware, the patch
> is effectively a no-op for such latencies.
> There are however situations where the delay between sink and source
> snapshot becomes significant.
> ---
>  src/modules/module-loopback.c | 36 +++++++++++++++++++++++++-----------
>  1 file changed, 25 insertions(+), 11 deletions(-)

Sorry for the long review delay. I hope you still remember enough of
your thinking process so that you can answer my questions.

> diff --git a/src/modules/module-loopback.c b/src/modules/module-loopback.c
> index 1b34657..3532728 100644
> --- a/src/modules/module-loopback.c
> +++ b/src/modules/module-loopback.c
> @@ -173,38 +173,52 @@ static void teardown(struct userdata *u) {
>  static void adjust_rates(struct userdata *u) {
>      size_t buffer, fs;
>      uint32_t old_rate, base_rate, new_rate;
> -    pa_usec_t buffer_latency;
> +    pa_usec_t final_latency, current_buffer_latency, current_latency, corrected_latency;
> +    int32_t latency_difference;
> +    pa_usec_t snapshot_delay;
>
>      pa_assert(u);
>      pa_assert_ctl_context();
>
> -    pa_asyncmsgq_send(u->source_output->source->asyncmsgq, PA_MSGOBJECT(u->source_output), SOURCE_OUTPUT_MESSAGE_LATENCY_SNAPSHOT, NULL, 0, NULL);
>      pa_asyncmsgq_send(u->sink_input->sink->asyncmsgq, PA_MSGOBJECT(u->sink_input), SINK_INPUT_MESSAGE_LATENCY_SNAPSHOT, NULL, 0, NULL);
> +    pa_asyncmsgq_send(u->source_output->source->asyncmsgq, PA_MSGOBJECT(u->source_output), SOURCE_OUTPUT_MESSAGE_LATENCY_SNAPSHOT, NULL, 0, NULL);
>
> -    buffer =
> -        u->latency_snapshot.sink_input_buffer +
> -        u->latency_snapshot.source_output_buffer;
> +    /* Rates and latencies*/
> +    old_rate = u->sink_input->sample_spec.rate;
> +    base_rate = u->source_output->sample_spec.rate;
>
> +    buffer = u->latency_snapshot.sink_input_buffer + u->latency_snapshot.source_output_buffer;
>      if (u->latency_snapshot.recv_counter <= u->latency_snapshot.send_counter)
>          buffer += (size_t) (u->latency_snapshot.send_counter - u->latency_snapshot.recv_counter);
>      else
>          buffer = PA_CLIP_SUB(buffer, (size_t) (u->latency_snapshot.recv_counter - u->latency_snapshot.send_counter));
>
> -    buffer_latency = pa_bytes_to_usec(buffer, &u->sink_input->sample_spec);
> +    current_buffer_latency = pa_bytes_to_usec(buffer, &u->sink_input->sample_spec);
> +    snapshot_delay = u->latency_snapshot.source_timestamp - u->latency_snapshot.sink_timestamp;
> +    current_latency = u->latency_snapshot.sink_latency + current_buffer_latency + base_rate * u->latency_snapshot.source_latency / old_rate - snapshot_delay;
>
> -    pa_log_debug("Loopback overall latency is %0.2f ms + %0.2f ms + %0.2f ms = %0.2f ms",
> +    final_latency = u->latency;
> +
> +    /* Latency and latency difference at base rate */
> +    corrected_latency = u->latency_snapshot.source_latency + (u->latency_snapshot.sink_latency + current_buffer_latency) * old_rate / base_rate - snapshot_delay;

I think there should be a comment explaining why this "correct latency"
is more correct than the incorrect latency... and what is the incorrect
latency? Is it current_latency?

> +    latency_difference = (int32_t)(corrected_latency - final_latency);
> +
> +    pa_log_debug("Loopback overall latency is %0.2f ms + %0.2f ms + %0.2f ms = %0.2f ms (at the base rate: %0.2f ms, old estimate: %0.2f ms)",

The log message could be made easier to understand. I propose the
following format:

Loopback latency:
  source: %0.2f ms
  buffers: %0.2f ms
  sink: %0.2f ms
  sum: %0.2f ms
  sum (adjusted for measurement delay): %0.2f ms
  sum (at the base rate): %0.2f ms
  sum (old estimate): %0.2f ms

Just one sum value might be enough, though.

Why is the latency calculated separately at the base rate? In theory
the sum should be independent of which rate domain is chosen, right?
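To make that question concrete to myself, I did a toy calculation based
on how I read the patch. All numbers here are made up, and I may well
be misreading the code:

#include <stdio.h>

int main(void) {
    double sink = 100.0, buf = 50.0, source = 50.0; /* latencies in ms, made up */
    double old_rate = 44100.0, base_rate = 44000.0; /* Hz, made up */

    /* Source latency converted into the sink input's rate domain,
     * as current_latency does above (snapshot_delay left out). */
    double current = sink + buf + base_rate * source / old_rate;

    /* Sink and buffer latency converted into the base rate domain,
     * as corrected_latency does above. */
    double corrected = source + (sink + buf) * old_rate / base_rate;

    printf("%.2f ms vs %.2f ms\n", current, corrected); /* 199.89 vs 200.34 */
    return 0;
}

If I'm reading the code right, the two sums differ by exactly the
factor old_rate / base_rate (when snapshot_delay is zero), i.e. they
are the same quantity expressed in different rate domains, so the sum
is not domain-independent after all while the rates differ. If that's
intentional, a code comment spelling it out would help.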
Does the old estimate mean the estimate that the code did before your
patches? If so, that seems like something that you only use during
development of these patches, and shouldn't be included in the final
code.

>                  (double) u->latency_snapshot.sink_latency / PA_USEC_PER_MSEC,
> -                (double) buffer_latency / PA_USEC_PER_MSEC,
> +                (double) current_buffer_latency / PA_USEC_PER_MSEC,
>                  (double) u->latency_snapshot.source_latency / PA_USEC_PER_MSEC,
> -                ((double) u->latency_snapshot.sink_latency + buffer_latency + u->latency_snapshot.source_latency) / PA_USEC_PER_MSEC);
> +                (double) current_latency / PA_USEC_PER_MSEC,
> +                (double) corrected_latency / PA_USEC_PER_MSEC,
> +                ((double) u->latency_snapshot.sink_latency + current_buffer_latency + u->latency_snapshot.source_latency) / PA_USEC_PER_MSEC);
> +    pa_log_debug("Latency difference: %0.2f ms, rate difference: %i Hz",

This message could be improved too. If the log reader is not familiar
with the code, it's not obvious what differences these are.

> +                (double) latency_difference / PA_USEC_PER_MSEC,
> +                (int32_t)(old_rate - base_rate));
>
>      pa_log_debug("Should buffer %zu bytes, buffered at minimum %zu bytes",
>                  u->latency_snapshot.max_request*2,
>                  u->latency_snapshot.min_memblockq_length);
>
>      fs = pa_frame_size(&u->sink_input->sample_spec);
> -    old_rate = u->sink_input->sample_spec.rate;
> -    base_rate = u->source_output->sample_spec.rate;
>
>      if (u->latency_snapshot.min_memblockq_length < u->latency_snapshot.max_request*2)
>          new_rate = base_rate - (((u->latency_snapshot.max_request*2 - u->latency_snapshot.min_memblockq_length) / fs) *PA_USEC_PER_SEC)/u->adjust_time;
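A side note on this last hunk: it took me a moment to decode the
new_rate formula, so here is my reading of it with made-up numbers
(hypothetical values, not from any real log):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t base_rate = 44100;              /* Hz, made up */
    size_t fs = 4;                           /* bytes per frame, e.g. S16 stereo */
    size_t target = 3528;                    /* max_request * 2, bytes, made up */
    size_t buffered = 1764;                  /* min_memblockq_length, bytes, made up */
    uint64_t adjust_time = 10 * 1000000ULL;  /* usec, i.e. a 10 s adjust interval */

    /* The queue is (3528 - 1764) / 4 = 441 frames short of the target;
     * spreading that deficit over the 10 s interval lowers the rate by
     * 441 * 1000000 / 10000000 = 44 Hz (integer division). */
    uint32_t new_rate = base_rate - (uint32_t) ((((target - buffered) / fs) * 1000000ULL) / adjust_time);

    printf("%u Hz\n", (unsigned) new_rate);  /* 44056 */
    return 0;
}

If that's right, this is a proportional controller that lowers the rate
just enough to regrow the queue to 2 * max_request within one adjust
interval. Correct me if I misread the units.

-- 
Tanu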