On Tue, Feb 26, 2019 at 03:45:19PM +0000, Russell King - ARM Linux admin wrote:

> Adding more WS signals makes the bus deviate from the I2S standard,
> thereby making it impossible to connect a set of standard DACs to
> such a source, whereas adding more I2S data lines, you just connect
> each DAC to each I2S data line and common up the bit clock and WS
> signals across all.
>
> In other words, the TDA998x approach is really the only sane way
> forward.

Right.  You do also see some things doing this by stuffing all the left
samples together in the left clock cycle and all the right samples
together in the right clock cycle, but that's usually more programmable
hardware that also supports DSP modes, and it's just falling out of the
implementation with little effort rather than someone sitting down and
thinking this is a good idea AFAICT.

> Now, as far as transmitter support, I believe TI Davinci SoCs use
> this - my Onkyo TX-NR609 AV receiver uses a DA830 SoC as a DSP to
> do the surround decode, which feeds multi-channel audio out to a
> set of DACs over a parallel I2S bus.  The "mcasp" audio driver
> has multiple serialisers to cope with this - see
> Documentation/devicetree/bindings/sound/davinci-mcasp-audio.txt.

Samsung LSI have multi-channel I2S with multiple data lines in their v5
and later controllers as well.  I can't think of any other examples
upstream off the top of my head.
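
For a concrete picture of the multiple-serialiser arrangement, a rough
device tree fragment along the lines of the davinci-mcasp binding above
might look something like this (untested sketch; the address, register
size, event thresholds and serialiser count are made up for illustration,
and the interrupt/DMA properties are omitted - see the binding doc for
the authoritative example):

	mcasp0: mcasp@1d00000 {
		compatible = "ti,da830-mcasp-audio";
		reg = <0x01d00000 0x2000>;	/* illustrative address/size */
		op-mode = <0>;			/* MCASP_IIS_MODE, i.e. I2S rather than DIT */
		tdm-slots = <2>;		/* stereo on each data line */
		/* Four serialisers driven as TX: 8 channels carried over
		 * four data lines sharing one bit clock and one WS. */
		serial-dir = <
			1 1 1 1			/* 0: INACTIVE, 1: TX, 2: RX */
			0 0 0 0
			0 0 0 0
		>;
		tx-num-evt = <32>;
		rx-num-evt = <32>;
	};

Each serialiser marked TX drives its own data pin, so every DAC on the
board still sees plain stereo I2S, which is exactly the property Russell
is pointing at.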