> I fully agree, except that I don't understand what's wrong with
> GET_LATENCY. AFAIK, GET_LATENCY returns the time that it will take for
> the next sample that enters the system to get out of the speakers. How
> would GET_TIME differ from GET_LATENCY, and why would it be better?

GET_LATENCY finds out how many bytes are currently buffered, and once
you subtract this from the number of bytes sent to PulseAudio you get
the number of bytes played. Dividing by the frame size gives you the
sample count, and dividing that by the sampling frequency gives you the
audio time.

A GET_TIME message would fetch the sample count straight from the sink,
no matter how much was buffered, and no matter what type of data you
handle you can convert it to a time value by dividing by the sampling
frequency.

If there's no mixing, the two messages are equivalent for PCM data.

Cheers,
-Pierre
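
P.S. For concreteness, here is a rough sketch of the arithmetic under
discussion. The variable names and numbers are made up for illustration;
nothing below is a real PulseAudio call, and the buffered/played values
stand in for what GET_LATENCY and GET_TIME would report:

    /* Illustrative sketch only: assumes 16-bit stereo PCM at 44.1 kHz.
     * bytes_buffered plays the role of a GET_LATENCY reply, and
     * frames_played the role of a GET_TIME reply. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint64_t bytes_sent     = 1764000; /* written to the server so far */
        uint64_t bytes_buffered = 176400;  /* still queued, per GET_LATENCY */
        uint64_t frame_size     = 2 * 2;   /* 16-bit samples, 2 channels */
        uint64_t rate           = 44100;   /* sampling frequency in Hz */

        /* GET_LATENCY route: played bytes -> frames -> seconds */
        uint64_t bytes_played = bytes_sent - bytes_buffered;
        double latency_time = (double)bytes_played / (frame_size * rate);

        /* GET_TIME route: the sink hands back a sample count directly,
         * which becomes a time once divided by the sampling frequency */
        uint64_t frames_played = bytes_played / frame_size; /* stand-in */
        double sink_time = (double)frames_played / rate;

        printf("via GET_LATENCY: %.3f s, via GET_TIME: %.3f s\n",
               latency_time, sink_time);
        return 0;
    }

For plain PCM with no mixing the two results agree, as noted above; they
would diverge once the sink's output is no longer a byte-for-byte image
of what the client sent.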