Re: Improving status timestamp accuracy

Hi Pierre,

Thanks for your continued engagement on this thread.

On 01/08/16 22:56, Pierre-Louis Bossart wrote:
> On 7/20/16 1:59 AM, Alan Young wrote:
>>
>> Yes, that could be true - there could be some jitter - but I think it
>> will still give more accurate results. Note that the adjustment to the
>> reported audio_tstamp will only occur in the AUDIO_TSTAMP_TYPE_DEFAULT
>> case, and only when the platform has not updated the (hw_ptr) position
>> outside of the interrupt callback, independent of whether the BATCH
>> flag is used.
>>
>> There is actually an argument for being less restrictive. Hardware
>> platform updates to position, where they happen outside of an
>> interrupt, may (and generally will) be less accurate than the update
>> mechanism that I propose, because such position updates are mostly
>> restricted to DMA residue granularity, which is usually relatively
>> coarse.
>
> I am not hot on changing a default behavior and ending up with some
> platforms getting worse results and some getting better.

I am not sure that any platforms would get worse results (notwithstanding the jitter point above). Some would get better results.

> It'd really be better if you used a new timestamp (I added
> LINK_ESTIMATED_ATIME, which isn't used by anyone and could be
> reclaimed) and modified the delay estimation in your own driver rather
> than in the core.


Well, I'm not looking at a single driver here. I am looking at several that use large parts of the common ASoC framework in various ways.

I'll look at LINK_ESTIMATED_ATIME and see whether I can adopt it. I'm not sure how much it will help with the delay calculation, but I suspect that the right answer could be deduced.
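For reference, the hook for this would be the get_time_info callback in snd_pcm_ops. A minimal sketch of a driver claiming the estimated link timestamp (purely illustrative; my_estimate_link_time() is a hypothetical helper, not anything in the tree):

#include <sound/pcm.h>

/* hypothetical helper: derive link time from elapsed wall-clock time */
static struct timespec my_estimate_link_time(struct snd_pcm_substream *substream);

static int my_get_time_info(struct snd_pcm_substream *substream,
			    struct timespec *system_ts,
			    struct timespec *audio_ts,
			    struct snd_pcm_audio_tstamp_config *config,
			    struct snd_pcm_audio_tstamp_report *report)
{
	if (config->type_requested !=
	    SNDRV_PCM_AUDIO_TSTAMP_TYPE_LINK_ESTIMATED) {
		/* not handled here; valid == 0 lets the core fall back */
		report->valid = 0;
		return 0;
	}

	getnstimeofday(system_ts);
	*audio_ts = my_estimate_link_time(substream);	/* hypothetical */
	report->actual_type = SNDRV_PCM_AUDIO_TSTAMP_TYPE_LINK_ESTIMATED;
	report->valid = 1;
	return 0;
}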

>> Note: For my application, I only actually care about the changes
>> implemented using update_delay(). The refinement to
>> update_audio_tstamp() just seemed to me to be part of the same issue.
>> If the update_audio_tstamp() change is considered too controversial
>> then I'd be happy to drop it.
>
> If you change the delay by default then it changes the audio timestamp
> as well; I'm not sure how you can isolate the two parts.


It only changes the audio timestamp if the user requests that the delay be included in it.
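To be precise about the mechanism (paraphrasing the default-timestamp path in the PCM core from memory, so not a verbatim excerpt): the delay is folded into the audio timestamp only when report_delay is set in snd_pcm_audio_tstamp_config.

#include <sound/pcm.h>
#include <linux/math64.h>

static void default_audio_tstamp(struct snd_pcm_substream *substream,
				 struct timespec *audio_tstamp)
{
	struct snd_pcm_runtime *runtime = substream->runtime;
	u64 audio_frames, audio_nsecs;

	audio_frames = runtime->hw_ptr_wrap + runtime->status->hw_ptr;
	if (runtime->audio_tstamp_config.report_delay) {
		/* only entered when the user asked for the delay */
		if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
			audio_frames -= runtime->delay;	/* not yet audible */
		else
			audio_frames += runtime->delay;	/* already captured */
	}
	audio_nsecs = div_u64(audio_frames * 1000000000LL, runtime->rate);
	*audio_tstamp = ns_to_timespec(audio_nsecs);
}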


Stepping back for a moment, the delay calculation essentially consists of two parts:

1. How much data is still in the ring buffer.
2. How much data has been removed from the ring buffer but not yet
   played out.

In many respects it is artificial to separate these, but that is what the API does.
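In core terms the split looks roughly like this for playback (the gist, not a verbatim excerpt; the function name is mine):

#include <sound/pcm.h>

static snd_pcm_sframes_t playback_delay(struct snd_pcm_runtime *runtime)
{
	return snd_pcm_playback_hw_avail(runtime)	/* 1: still in the ring buffer */
	       + runtime->delay;			/* 2: removed but not yet played */
}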

In some cases the first factor is dominant, because DMA is consuming the buffer and one has, at best, only coarse-grained data about the position at any moment. It is unlikely ever to be sample-accurate and, on most platforms, is much poorer.

In some cases the second factor is dominant, because some data has been taken from the ring buffer and is then sitting in some other buffer of significant size. USB is a good example and, in that case, one can see that the generic driver does indeed use an elapsed-time calculation to generate (estimate) the delay report.
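The shape of such an elapsed-time estimate is simple. A hypothetical sketch (names and resync details are mine; this is not the USB driver's actual code):

#include <sound/pcm.h>
#include <linux/math64.h>

static snd_pcm_sframes_t estimated_in_flight(u64 frames_submitted,
					     u64 start_ns, u64 now_ns,
					     unsigned int rate)
{
	/* frames that have had time to play out since the stream (or the
	 * last resync point) started; overflow is ignored in this sketch */
	u64 played = div_u64((now_ns - start_ns) * rate, 1000000000);

	if (played >= frames_submitted)
		return 0;	/* everything should have drained by now */
	return frames_submitted - played;
}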

The more I think about it, the more it seems to me that a time-based estimate of the position (hw_ptr), taken outside of an interrupt callback, will always be more accurate than the value returned by substream->ops->pointer(). Perhaps the result of that call should simply be ignored outside an interrupt callback; the call need not even be made, so as not to pollute the estimate with changed delay data.
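Concretely, the estimate I have in mind is nothing more than an extrapolation from the last interrupt (a sketch only; the names are mine):

#include <sound/pcm.h>
#include <linux/math64.h>

static snd_pcm_uframes_t estimate_hw_ptr(snd_pcm_uframes_t hw_ptr_at_irq,
					 u64 irq_ns, u64 now_ns,
					 unsigned int rate,
					 snd_pcm_uframes_t boundary)
{
	/* frames elapsed at the nominal rate since the last period
	 * interrupt; clock drift accumulates until the next interrupt
	 * resyncs the base values */
	u64 elapsed = div_u64((now_ns - irq_ns) * rate, 1000000000);

	return (hw_ptr_at_irq + elapsed) % boundary;
}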

Alan.

_______________________________________________
Alsa-devel mailing list
Alsa-devel@xxxxxxxxxxxxxxxx
http://mailman.alsa-project.org/mailman/listinfo/alsa-devel


