On Wed, 2010-09-22 at 23:20 -0500, pl bossart wrote:
> > Annoyingly, no I don't have list rights... I asked a while ago, but
> > no reply :(
>
> Oh well, I will rework the patch and repost it in a couple of days...

I'm a bit confused by your discussion. It seems that you're talking
about patch 2/2, which got stuck in the moderation queue. But it got
out of the queue at about 10 AM (UTC) yesterday, and I have it in my
mailbox - don't you have it?

I plan to review it today, but I'm not familiar with the BT modules,
so it may be that I can only complain about trivial things like
formatting etc.

> > It was my understanding that Lennart wanted to have some way to
> > extract timing information from compressed codecs etc. to allow for
> > wakeup times to be calculated properly. I'm not sure if the usec
> > conversions need some kind of supplement for compressed formats? I
> > suspect, however, that if timing information is to be extracted
> > successfully from these formats, we'd need to know which format it
> > actually is.
> >
> > Your suggestion seems reasonable, but I'm not sure it can be used
> > without API breakage (e.g. the extra subtype information?). I've
> > not really looked too closely, so this may not be an issue at all.
>
> There's no real way you can extract timing information just by
> looking at the data. You either need to parse the frames (what I did
> for the BT work) or let the hardware report the number of samples it
> decoded and rendered. In both cases, you could find out what the
> average bit rate is and have an approximate idea of the relationship
> between the number of bytes passed to PulseAudio and the duration.
> It would be a bad idea, though, to rely on this approximated bit
> rate to infer timing. The client should get the audio time as
> reported by this sample count, not through inversion of an
> approximation that will only be correct for constant bit rates.
> Instead of basing all time reports on GET_LATENCY messages, we
> should really have a new GET_TIME message.

I fully agree, except that I don't understand what's wrong with
GET_LATENCY. AFAIK, GET_LATENCY returns the time that it will take
for the next sample that enters the system to get out of the
speakers. How would GET_TIME differ from GET_LATENCY, and why would
it be better?

-- 
Tanu Kaskinen
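
To make the bit-rate argument above concrete, here is a minimal,
self-contained C sketch (the frame sizes, sample rate, and fixed
frame length are invented for illustration; this is not PulseAudio
code or a real codec parser). It compares the exact playback time of
a short variable-bit-rate stream, obtained by summing per-frame
sample counts as a frame parser would, against the estimate you get
by inverting the stream's average bit rate:

  /* Sketch: why inverting an average bit rate mis-estimates stream
   * time for VBR data. Frame sizes below are invented; a real
   * parser would read them from the bitstream. */
  #include <stdio.h>
  #include <stdint.h>

  #define SAMPLE_RATE   44100  /* Hz */
  #define FRAME_SAMPLES 1152   /* decoded samples per frame */

  int main(void) {
      /* Hypothetical VBR stream: bytes per frame vary widely. */
      static const unsigned frame_bytes[] = {
          208, 208, 626, 626, 626, 208, 208, 417, 417, 208
      };
      const unsigned n = sizeof(frame_bytes) / sizeof(frame_bytes[0]);

      uint64_t total_bytes = 0;
      for (unsigned i = 0; i < n; i++)
          total_bytes += frame_bytes[i];

      /* Exact duration of the whole stream, from sample counts. */
      double total_usec = (double) n * FRAME_SAMPLES * 1000000.0
                          / SAMPLE_RATE;

      /* Average bit rate over the whole stream, in bits/second. */
      double avg_bitrate = total_bytes * 8.0 * 1000000.0 / total_usec;

      /* Time after the first 5 frames, computed both ways. */
      uint64_t bytes_5 = 0;
      for (unsigned i = 0; i < 5; i++)
          bytes_5 += frame_bytes[i];

      double exact_5  = 5.0 * FRAME_SAMPLES * 1000000.0 / SAMPLE_RATE;
      double approx_5 = bytes_5 * 8.0 * 1000000.0 / avg_bitrate;

      printf("exact:  %.0f us\n", exact_5);   /* ~130612 us */
      printf("approx: %.0f us\n", approx_5);  /* ~169088 us */
      return 0;
  }

With these invented numbers, the byte-count-over-average-bitrate
estimate is roughly 30% high after five frames, and the two only
agree when every frame has the same size (constant bit rate). That is
the class of error a sample-count-based time report would avoid.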