On Wed, 29 Mar 2017, at 12:35 AM, Georg Chini wrote:
> On 28.03.2017 12:02, Arun Raghavan wrote:
> >
> > On Mon, 27 Mar 2017, at 01:57 AM, Georg Chini wrote:
> >> On 20.03.2017 08:25, Arun Raghavan wrote:
> >>> On Thu, 9 Mar 2017, at 09:53 PM, Georg Chini wrote:
> >>>> On 09.03.2017 17:14, Georg Chini wrote:
> >>>>> On 09.03.2017 16:33, Arun Raghavan wrote:
> >>>>>> On Thu, 9 Mar 2017, at 03:27 PM, Georg Chini wrote:
> >>>>>>> On 09.03.2017 05:37, Arun Raghavan wrote:
> >>>>>>>> We don't always know whether the in-flight memory chunks will be
> >>>>>>>> rendered or skipped (if the source is not in RUNNING). This can cause
> >>>>>>>> us to have an erroneous estimate of drift, particularly when the
> >>>>>>>> canceller starts.
> >>>>>>>>
> >>>>>>>> To avoid this, we explicitly flush out the send and receive sides of
> >>>>>>>> the message queue of audio chunks going from the sink to the source
> >>>>>>>> before trying to perform a resync.
> >>>>>>>> ---
> >>>>>>>>  src/modules/echo-cancel/module-echo-cancel.c | 7 ++++++-
> >>>>>>>>  1 file changed, 6 insertions(+), 1 deletion(-)
> >>>>>>>>
> >>>>>>>> diff --git a/src/modules/echo-cancel/module-echo-cancel.c b/src/modules/echo-cancel/module-echo-cancel.c
> >>>>>>>> index dfd05b6..ed75e0c 100644
> >>>>>>>> --- a/src/modules/echo-cancel/module-echo-cancel.c
> >>>>>>>> +++ b/src/modules/echo-cancel/module-echo-cancel.c
> >>>>>>>> @@ -683,8 +683,13 @@ static void do_resync(struct userdata *u) {
> >>>>>>>>      pa_log("Doing resync");
> >>>>>>>>
> >>>>>>>>      /* update our snapshot */
> >>>>>>>> -    source_output_snapshot_within_thread(u, &latency_snapshot);
> >>>>>>>> +    /* 1. Get sink input latency snapshot, might cause buffers to be sent to source thread */
> >>>>>>>>      pa_asyncmsgq_send(u->sink_input->sink->asyncmsgq, PA_MSGOBJECT(u->sink_input), SINK_INPUT_MESSAGE_LATENCY_SNAPSHOT, &latency_snapshot, 0, NULL);
> >>>>>>>> +    /* 2. Pick up any in-flight buffers (and discard if needed) */
> >>>>>>>> +    while (pa_asyncmsgq_process_one(u->asyncmsgq))
> >>>>>>>> +        ;
> >>>>>>>> +    /* 3. Now get the source output latency snapshot */
> >>>>>>>> +    source_output_snapshot_within_thread(u, &latency_snapshot);
> >>>>>>>>
> >>>>>>>>      /* calculate drift between capture and playback */
> >>>>>>>>      diff_time = calc_diff(u, &latency_snapshot);
> >>>>>>> While taking a look at the patch I noticed something else in
> >>>>>>> do_resync(). You are doing:
> >>>>>>>
> >>>>>>> source_output_snapshot_within_thread(u, &latency_snapshot);
> >>>>>>> pa_asyncmsgq_send(u->sink_input->sink->asyncmsgq, PA_MSGOBJECT(u->sink_input), SINK_INPUT_MESSAGE_LATENCY_SNAPSHOT, &latency_snapshot, 0, NULL);
> >>>>>>>
> >>>>>>> from the source thread. I tried something similar in module-loopback
> >>>>>>> and found that you should not send a message to the sink thread from
> >>>>>>> there.
> >>>>>>>
> >>>>>>> At least for bluetooth it looks like input and output are handled in
> >>>>>>> the same thread, so the pa_asyncmsgq_send() will hang. I tested it
> >>>>>>> with my headset and it does indeed hang.
> >>>>>> Interesting. Do you mean for HSP, or HFP, or ...?
> >>>>>>
> >>>>>> -- Arun
> >>>>> HSP, native backend. As I said, I had the same problem in
> >>>>> module-loopback.
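To make the failure mode concrete: pa_asyncmsgq_send() is a blocking round
trip -- it posts a message and then waits until the receiving thread has
dispatched it. When the sink's queue is serviced by the very thread doing
the send, as in the bluetooth case above, nothing is left to dispatch the
message and the call never returns. Below is a minimal standalone analogy
of that deadlock in plain pthreads; it illustrates the semantics only and
is not PulseAudio code.

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    static bool pending = false; /* a message is queued */
    static bool done = false;    /* the message was processed */

    /* Blocking send, modelled on the semantics of pa_asyncmsgq_send():
     * post the message, then wait for the receiver to finish with it. */
    static void send_and_wait(void) {
        pthread_mutex_lock(&lock);
        pending = true;
        pthread_cond_broadcast(&cond);
        while (!done)
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
    }

    /* The receiver's run loop, normally executed by the other thread. */
    static void *run_loop(void *arg) {
        (void) arg;
        pthread_mutex_lock(&lock);
        while (!pending)
            pthread_cond_wait(&cond, &lock);
        printf("processing message\n");
        done = true;
        pthread_cond_broadcast(&cond);
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void) {
        pthread_t receiver;

        /* Sink and source on separate IO threads: the round trip works. */
        pthread_create(&receiver, NULL, run_loop, NULL);
        send_and_wait();
        pthread_join(receiver, NULL);

        /* Sink and source sharing one IO thread (the bluetooth case):
         * calling send_and_wait() with no separate receiver blocks in
         * pthread_cond_wait() forever -- the hang described above. */
        /* send_and_wait(); */
        return 0;
    }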
> >>>> This was the command I issued:
> >>>>
> >>>> pacmd load-module module-echo-cancel source_name=echo_cancel_source
> >>>> sink_name=echo_cancel_sink aec_method=webrtc use_volume_sharing=false
> >>>> adjust_time=2 sink_master=bluez_sink.0C_E0_E4_31_23_2D.headset_head_unit
> >>>> source_master=bluez_source.0C_E0_E4_31_23_2D.headset_head_unit
> >>>>
> >>>> After that, you had to kill pulseaudio with kill -9.
> >>> So you're right, of course. However, the problem itself predates the
> >>> patch, so I'd like to decouple the review of this from fixing the issue
> >>> of sink and source being on the same thread.
> >>>
> >>> I'm not sure what we should be doing there. Maybe a check to compare the
> >>> asyncmsgq of the sink input with pa_thread_mq_get(), and if they're the
> >>> same, call an _in_thread() version of the sink latency snapshot.
> >>>
> >>> -- Arun
> >> Hi,
> >>
> >> sorry for the late reply, I have been extremely busy the last two weeks.
> >> Can't you move the while (pa_asyncmsgq_process_one(u->asyncmsgq)) loop
> >> into source_output_snapshot_within_thread() and call do_resync() from
> >> the main thread?
> > This might not be reliable enough -- you'll notice request_resync gets
> > set as soon as there is a sink underrun, and we want to make sure it's
> > processed before the next run of the canceller.
> >
> > -- Arun
> OK, since you want to decouple the bug from the review,
> go ahead and apply your patch.

Thanks.

> Comparing sink->asyncmsgq to source->asyncmsgq and using the
> in_thread version of the snapshot if they are equal might be a
> workaround for the bug.

I'll play around with this, and also try to see if there's something better.

> I took a look at what you are doing to synchronize the streams and
> I don't really understand it. You are doing different adjustments in
> several places, and I believe this is not a good idea. Resampling should
> be exact enough if you apply the logic I am using in module-loopback.
> Then you would only need some special handling for the underrun case,
> and even that would not be necessary if it is acceptable for the streams
> to be out of sync for a few seconds.
>
> But maybe I just don't understand the code well enough.

So the resync code does two things:

1. Try to make sure we are able to correlate playback samples with the
   corresponding capture samples (so the echo canceller ideally gets a
   playback sample and the corresponding capture sample at the same
   position)

2. Provide a poor man's drift compensation where needed (cancellation
   between a laptop speaker and a USB mic, for example)

For the latter, doing something more sophisticated would indeed be nice,
and maybe that translates to more robust correlation too. This is not used
much right now (and can be offloaded to the echo canceller completely in
the webrtc case), so it's not a priority imo.

Cheers,
Arun
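For reference, the workaround discussed above -- comparing the sink's
asyncmsgq with the source's and taking an in-thread snapshot when they are
the same queue -- might look roughly like the sketch below. This is
illustrative only: sink_input_latency_snapshot_in_thread() is a
hypothetical helper that would still have to be written, the snapshot
struct name is assumed (the patch only shows the local variable), and the
rest of do_resync() is elided.

    static void do_resync(struct userdata *u) {
        struct snapshot latency_snapshot; /* type name assumed */

        pa_log("Doing resync");

        if (u->sink_input->sink->asyncmsgq == u->source_output->source->asyncmsgq) {
            /* Sink and source share one IO thread (e.g. bluetooth HSP):
             * a blocking send to our own queue would never return, so take
             * the sink-side snapshot directly with a hypothetical
             * _in_thread() variant. */
            sink_input_latency_snapshot_in_thread(u, &latency_snapshot);
        } else {
            /* 1. Get sink input latency snapshot; this may cause buffers
             *    to be sent over to the source thread */
            pa_asyncmsgq_send(u->sink_input->sink->asyncmsgq,
                              PA_MSGOBJECT(u->sink_input),
                              SINK_INPUT_MESSAGE_LATENCY_SNAPSHOT,
                              &latency_snapshot, 0, NULL);
        }

        /* 2. Pick up any in-flight buffers (and discard them if needed) */
        while (pa_asyncmsgq_process_one(u->asyncmsgq))
            ;

        /* 3. Now get the source output latency snapshot */
        source_output_snapshot_within_thread(u, &latency_snapshot);

        /* ... calculate drift and resync as in the existing code ... */
    }

Arun's variant of the check, comparing the sink input's asyncmsgq against
pa_thread_mq_get(), would detect the same shared-thread condition from the
point of view of the thread that is currently running.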