[PATCH v9] loopback: Initialize latency at startup and during source/sink changes

On Mon, 2017-02-27 at 13:54 +0100, Georg Chini wrote:
>          case SINK_INPUT_MESSAGE_POST:
>  
> -            pa_sink_input_assert_io_context(u->sink_input);
> +            pa_memblockq_push_align(u->memblockq, chunk);
> +
> +            /* If push has not been called yet, latency adjustments in sink_input_pop_cb()
> +             * are enabled. Disable them on first push and correct the memblockq. If pop
> +             * has not been called yet, wait until the pop_cb() requests the adjustment */
> +            if (u->output_thread_info.pop_called && (!u->output_thread_info.push_called || u->output_thread_info.pop_adjust)) {
> +                pa_usec_t time_delta;
> +
> +                /* This is the source latency at the time push was called */
> +                time_delta = PA_PTR_TO_UINT(data);
> +                /* Add the time between push and post */
> +                time_delta += pa_rtclock_now() - offset;
> +                /* Add the sink latency */
> +                time_delta += pa_sink_get_latency_within_thread(u->sink_input->sink);
> +
> +                /* The source latency report includes the audio in the chunk,
> +                 * but since we already pushed the chunk to the memblockq, we need
> +                 * to subtract the chunk size from the source latency so that it
> +                 * won't be counted towards both the memblockq latency and the
> +                 * source latency.
> +                 * If the source has overrun, assume that the maximum it should have pushed is
> +                 * one full source latency. It may still be possible that the next push also
> +                 * contains too much data, then the resulting latency will be wrong. */
> +                if (pa_bytes_to_usec(chunk->length, &u->sink_input->sample_spec) > u->output_thread_info.effective_source_latency)
> +                    time_delta = PA_CLIP_SUB(time_delta, u->output_thread_info.effective_source_latency);
> +                else
> +                    time_delta = PA_CLIP_SUB(time_delta, pa_bytes_to_usec(chunk->length, &u->sink_input->sample_spec));

Using effective_source_latency is starting to make some sense to me,
but I think the comment is not easy to understand. Are you ok with me
modifying the comment like this:

@@ -701,9 +701,18 @@ static int sink_input_process_msg_cb(pa_msgobject *obj, int code, void *data, in
                  * to subtract the chunk size from the source latency so that it
                  * won't be counted towards both the memblockq latency and the
                  * source latency.
-                 * If the source has overrun, assume that the maximum it should have pushed is
-                 * one full source latency. It may still be possible that the next push also
-                 * contains too much data, then the resulting latency will be wrong. */
+                 *
+                 * Sometimes the alsa source reports way too low latency (might
+                 * be a bug in the alsa source code). This seems to happen when
+                 * there's an overrun. As an attempt to detect overruns, we
+                 * check if the chunk size is larger than the configured source
+                 * latency. If so, we assume that the source should have pushed
+                 * a chunk whose size equals the configured latency, so we
+                 * modify time_delta only by that amount, which makes
+                 * memblockq_adjust() drop more data than it would otherwise.
+                 * This seems to work quite well, but it's possible that the
+                 * next push also contains too much data, and in that case the
+                 * resulting latency will be wrong. */
                 if (pa_bytes_to_usec(chunk->length, &u->sink_input->sample_spec) > u->output_thread_info.effective_source_latency)
                     time_delta = PA_CLIP_SUB(time_delta, u->output_thread_info.effective_source_latency);
                 else

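For reference, here is a minimal, self-contained sketch of the time_delta
bookkeeping the comment describes (plain C, not actual module-loopback code;
all names and example numbers below are made up for illustration):

/* Simplified illustration of the latency bookkeeping discussed above.
 * Not PulseAudio code; all identifiers here are hypothetical. */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t usec_t;

/* Clamped subtraction, analogous to PA_CLIP_SUB: never goes below zero. */
static usec_t clip_sub(usec_t a, usec_t b) {
    return a > b ? a - b : 0;
}

/* Sum the latency components and subtract the chunk that was already pushed
 * to the queue. If the chunk is larger than the configured source latency
 * (suspected overrun), subtract only the configured latency, so the later
 * queue adjustment drops the surplus data. */
static usec_t adjusted_time_delta(usec_t source_latency_at_push,
                                  usec_t push_to_post_delay,
                                  usec_t sink_latency,
                                  usec_t chunk_usec,
                                  usec_t effective_source_latency) {
    usec_t time_delta = source_latency_at_push + push_to_post_delay + sink_latency;

    if (chunk_usec > effective_source_latency)
        time_delta = clip_sub(time_delta, effective_source_latency);
    else
        time_delta = clip_sub(time_delta, chunk_usec);

    return time_delta;
}

int main(void) {
    /* Example: 25 ms source latency at push, 1 ms between push and post,
     * 20 ms sink latency, but a 40 ms chunk arrived (overrun) with a
     * configured source latency of 25 ms. */
    usec_t delta = adjusted_time_delta(25000, 1000, 20000, 40000, 25000);
    printf("adjusted time_delta: %llu us\n", (unsigned long long) delta);
    return 0;
}

The point of the clamp is that when a suspected overrun makes the chunk longer
than the configured source latency, less is subtracted from time_delta, so the
subsequent memblockq adjustment ends up dropping the excess data.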
-- 
Tanu

https://www.patreon.com/tanuk

