Re: [ANNOUNCE] 2019 Linux Audio miniconference

On 10/25/19 4:45 AM, Vinod Koul wrote:
On 24-10-19, 09:31, Pierre-Louis Bossart wrote:
On 10/24/19 9:20 AM, Patrick Lai wrote:
On 10/22/19 11:59 AM, Mark Brown wrote:
Hi,


As with previous years let's pull together an agenda through a mailing
list discussion - if people could reply to this mail with any topics
they'd like to discuss we can take it from there.  Of course if we can
sort things out more quickly via the mailing list that's even better!

1. Gapless playback handling between two playbacks with different formats

did you mean compressed formats?

Yes, this is for compressed formats. So we do not allow users to set
parameters for gapless playback, as the assumption is that we are within
an album where subsequent tracks share the same parameters.

But in the case of some codecs like WMA (and FLAC, IIRC) the next track
can use a different format, which warrants a change in params.

The question is whether this should be allowed, and if so, we need to
propose an API change for it.

Gapless playback between compressed tracks needs to take the filter history/delay into account. The encoding/decoding process results in zero-valued samples at the beginning of the file, and likewise you need to flush the output if you want all the samples. IIRC there is an ID3 tag that tells you how many samples need to be skipped when you decode a new track, but I don't think this would work if the preceding track has a different format and hence a different filter delay/history.
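To make the delay/padding accounting concrete, here is a minimal sketch. The function name and signature are mine, not an existing API; the numbers a real decoder would plug in come from a tag such as the iTunSMPB comment carried in an ID3 header:

```c
#include <assert.h>

/* Illustrative sketch (hypothetical helper, not an existing API): given
 * the encoder delay and padding advertised by an ID3/iTunSMPB-style tag,
 * compute how many decoded samples actually belong to the track. The
 * `delay` samples at the start and `padding` samples at the end are
 * filter warm-up/flush artifacts and must be dropped for gapless
 * playback to be truly seamless. */
static long valid_samples(long decoded, long delay, long padding)
{
	long valid = decoded - delay - padding;
	return valid > 0 ? valid : 0;
}
```

For example, a track that decodes to 3456 raw samples with a 1105-sample delay and 287 samples of padding contains 2064 real samples.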

E.g. if you assume the preceding track has a 256-sample filter delay, and you switch to a track where the ID3 tag tells you to remove the first 1024 samples, clearly something will go wrong. You'd need extra work in your firmware to have the two decoders co-exist for some time while you flush the previous history buffer and stitch it with the new samples. This leads to extra issues in terms of DSP resources.
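The stitching step itself is simple in principle; it's keeping both decoders alive that costs resources. A hypothetical sketch (names and buffers are mine, not an ALSA or firmware API) of what the splice would do:

```c
#include <stddef.h>

/* Hypothetical stitching helper, illustration only: append the flushed
 * tail of the previous decoder, then the head of the next track with
 * its encoder-delay samples skipped. This is the overlap a DSP would
 * have to manage while both decoders briefly coexist. */
static size_t stitch(const short *flush_tail, size_t flush_len,
		     const short *next, size_t next_len, size_t skip,
		     short *out)
{
	size_t n = 0;

	/* Real samples still buried in the old decoder's filter state. */
	for (size_t i = 0; i < flush_len; i++)
		out[n++] = flush_tail[i];

	/* New track, minus its encoder-delay warm-up samples. */
	for (size_t i = skip; i < next_len; i++)
		out[n++] = next[i];

	return n;
}
```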

And now that I think of it, I don't think we have a means to instantiate a second decoder on the same stream while it's running; you'd likely need two streams and some fancy stitching in your firmware.

And if you want this to work in Android, you'd also need extra streams in the HAL, and likely some changes in the media parts.
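To make the proposed API change concrete, here is a purely hypothetical sketch of what extended gapless metadata might look like. Today's compress offload metadata (SNDRV_COMPRESS_ENCODER_DELAY / SNDRV_COMPRESS_ENCODER_PADDING, if I remember the names right) only carries delay and padding and assumes the codec parameters stay fixed across tracks; every name below is made up for illustration:

```c
/* Hypothetical extension of the gapless metadata, illustration only:
 * alongside the existing encoder delay/padding, the next track could
 * carry the codec parameters that may change between album tracks
 * (the WMA/FLAC case discussed above). */
struct compr_next_track_params {
	unsigned int encoder_delay;   /* samples to skip at track start */
	unsigned int encoder_padding; /* samples to drop at track end */
	unsigned int codec_id;        /* may differ from current track */
	unsigned int sample_rate;     /* Hz, may differ between tracks */
	unsigned int channels;
};
```

Whether the kernel should accept such a change mid-stream, or require a second stream as noted above, is exactly the open question.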


2. Passing timestamp along with buffer for both playback and capture
3. PCM device volume control
4. Unified audio graph building across multiple subsystems

Thanks
Patrick


_______________________________________________
Alsa-devel mailing list
Alsa-devel@xxxxxxxxxxxxxxxx
https://mailman.alsa-project.org/mailman/listinfo/alsa-devel

