Thanks for this detailed set of explanations.

> So, basically: if you configure a latency then you should get
> something in the area of what you asked for, but we cannot make
> guarantees. And the connection between latency and block sizes is even
> fuzzier.

D'oh. Looks like if I interface with a timing-critical protocol I'd
need to hide this inside a module...

>> 2. I assumed that the memblockq routines (push, peek and drop) are
>> thread-safe, is this a valid assumption?
>
> Nope. You may not assume that.
>
> Only very few functions in PA are thread-safe. This has various
> reasons: speed, simplicity, fear of deadlock hell, but most
> importantly that we try to minimize locking. The goal is to do things
> entirely lock-free.
>
> To fix this I'd suggest allocating a pa_asyncmsgq object for sending
> over the memblocks from the source thread to the sink thread. You can
> send arbitrary data with that, including memchunks. It's thread-safe
> (and lock-free). Then, on the receiver side push the data into a
> pa_memblockq for flexible buffering.

OK, makes sense.

>> 3. For now the source and sink are synchronous but if they are not,
>> how can I enable a sample-rate converter to correct for clock drifts?
>> I see some code for SRC in both the input and output IO threads,
>> however I don't understand how the tracking would be done.
>
> module-combine handles this already. It probably would make sense to
> copy the basic logic here: in the main thread simply measure the
> latency of the sink and source every now and then, and then update the
> sampling rate of the sink input with pa_sink_input_set_rate().
>
> (This is actually quite hard to get right, and module-combine doesn't
> entirely get it right. The problem is getting a somewhat atomic
> snapshot of both latencies, since in the time between asking the two
> latencies another memblock might have been sent over.)

Hmm, I didn't realize your definition of latency is different from my
intuitive one. I thought in terms of samples, but when I checked the
code in alsa-sink.c, I saw that the latency is really the delta between
the wall clock and the audio clock, plus the delayed samples. What this
means is that if there's a drift between the wall clock and the audio
clock, the reported latency will gradually increase or decrease. Is
this correct?

I guess when you subtract both latencies you get rid of the wall-clock
component, which is fine in this case. And yes, we would need to
low-pass filter the deviation to focus only on the long-term evolution.
The clocks shouldn't be more than 1% apart anyway on most systems.
(I've put a rough sketch of the adjustment loop I have in mind in a PS
at the end.)

> In the rewind callback you simply must rewind the read pointer in
> the memblockq. It is called whenever we need to rewrite the hardware
> playback buffer. I.e. let's say we have 2s of buffer. Now a new stream
> is added to the mix. We need to remix the whole 2s we already
> wrote. Then we rewind each stream and ask for the data again and write
> it to the buffer.
>
> If you use a memblockq all you need to do is basically forward this
> call to pa_memblockq_rewind(), which does the heavy lifting for you.

This part is unclear. What you are saying is that pa_memblockq_rewind()
is basically the opposite of drop(): it just moves the read pointer
back. But then when does the data actually get marked as used by the
sink, and when can these memory blocks be reclaimed/reused?
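To make sure I'm reading you right, here is roughly how I picture the
pop and rewind callbacks using the memblockq (the struct and field
names are my own, and I haven't double-checked the exact PA signatures,
so treat this as a sketch of my understanding only):

/* pop callback: the sink asks for the next nbytes of audio */
static int sink_input_pop_cb(pa_sink_input *i, size_t nbytes, pa_memchunk *chunk) {
    struct userdata *u = i->userdata;

    /* hand out the next chunk from the queue */
    if (pa_memblockq_peek(u->memblockq, chunk) < 0)
        return -1; /* nothing buffered yet */

    /* advance the read pointer; as I understand it the blocks are only
       really released once they fall out of the maxrewind window given
       at pa_memblockq_new() time, so a later rewind can still reach them */
    pa_memblockq_drop(u->memblockq, chunk->length);
    return 0;
}

/* rewind callback: the sink will rewrite the last nbytes of its buffer,
   so just move the read pointer back and let pop deliver the data again */
static void sink_input_process_rewind_cb(pa_sink_input *i, size_t nbytes) {
    struct userdata *u = i->userdata;

    pa_memblockq_rewind(u->memblockq, nbytes);
}

Is that the intended usage, i.e. the queue itself keeps the blocks alive
for as long as a rewind could still need them?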
> Whether you need to implement a state-changed cb depends. module-sine
> uses it to trigger a rewind when the stream is created because it has
> PCM data ready right away. So it listens for the
> PA_SINK_INPUT_INIT->PA_SINK_INPUT_RUNNING state change and requests
> the rewind right away. In other modules however PCM data might not be
> readily available, i.e. because it needs to be received first from a
> client. In that case you probably don't want to rewind right away on
> that state change but instead wait until you actually got enough PCM
> data and only then request the rewind. Your case is the latter I
> guess.

OK, makes sense.

> Hope this helps!

Yes it did! Thanks.

- Pierre
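PS: to double-check my understanding of the drift tracking, here is the
kind of adjustment loop I have in mind, loosely modelled on what you
describe for module-combine. All struct fields and the helper name are
made up, the sign convention still needs to be worked out, and I haven't
verified the exact pa_sink_input_set_rate() signature:

/* called every u->adjust_time usec from the main thread, with a
   (roughly atomic) snapshot of both latencies */
static void adjust_rate(struct userdata *u, pa_usec_t sink_latency, pa_usec_t source_latency) {
    uint32_t base_rate = u->sink_input->sample_spec.rate;
    int64_t error, r;

    /* difference of the two latencies: the wall-clock term cancels out,
       what remains reflects the drift between the two audio clocks */
    error = (int64_t) sink_latency - (int64_t) source_latency;

    /* low-pass filter so we follow the long-term drift, not the jitter */
    u->filtered_error = (7 * u->filtered_error + error) / 8;

    /* nudge the resampling rate proportionally to the filtered error,
       clamped to +/- 1% since the clocks shouldn't deviate more than that */
    r = (int64_t) base_rate + (int64_t) base_rate * u->filtered_error / (int64_t) u->adjust_time;
    if (r > (int64_t) (base_rate + base_rate / 100))
        r = base_rate + base_rate / 100;
    if (r < (int64_t) (base_rate - base_rate / 100))
        r = base_rate - base_rate / 100;

    pa_sink_input_set_rate(u->sink_input, (uint32_t) r);
}

Does that look like the right shape, or am I missing something about how
the latency snapshot should be taken?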