Writing documentation for simple device tree ASoC cards

Hello!

I've spent way too much time digging through emails, documentation and
kernel source code to form a mental model that lets me understand and
use simple-audio-card, audio-graph-card and audio-graph-card2.

There's really no introductory documentation geared towards people who
are writing device trees intending to glue together multiple audio
components into a single sound card. I've decided to try and take up
this task and provide some basic documentation for it.

Before starting I would like to quickly go over some notes in case I'm
actually just incredibly wrong. I'd rather find that out now than later.

As a general ASoC refresher:

- ALSA provides PCM interfaces and controls to a sound card
- ASoC introduces the concept of components
- Components are what you instantiate with the device tree
- Components can be asked to set a clock or create a PCM stream
- Components have one or more DAIs (digital audio interfaces)
- DAIs describe the capabilities of a stream
- Components have DAPM widgets for power management
- These can be related to physical pins on the board or DAI streams
- CPU components are ones that ALSA can use directly to process data
- Codec components provide information about DAIs or drive streams

Most of this is already explained in the documentation.
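
To make "components are what you instantiate with the device tree"
concrete, here is a minimal sketch of a codec component. The
compatible string, label and address are placeholders rather than a
real binding:

    /* Hypothetical codec on I2C; "vendor,example-codec" is made up */
    &i2c1 {
        example_codec: audio-codec@1a {
            compatible = "vendor,example-codec";
            reg = <0x1a>;
            /* lets a card reference this component's DAI with */
            /* sound-dai = <&example_codec> */
            #sound-dai-cells = <0>;
        };
    };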

The generic sound cards you can create using simple-audio-card and
audio-graph-card are machine drivers. Their job is to ensure that,
when a DAI is used, all the related components, clocking and power
management are set up and ready for use.

This is done by (a rough device tree sketch follows this list):

- Linking multiple DAIs together
- Connecting DAPM widgets together to power on components
- Calculating and setting the shared clock (usually MCLK)
- Setting DAI properties (like TDM slots and width)
- Adding widgets for jack detection
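
As a rough, hypothetical simple-audio-card example of those jobs (the
&sai1 and &example_codec phandles, the "HPOUT" widget name and the
GPIO are placeholders for whatever the board and codec actually
provide):

    sound {
        compatible = "simple-audio-card";
        simple-audio-card,name = "Example Card";

        /* DAI link format; codec provides bit/frame clocks */
        simple-audio-card,format = "i2s";
        simple-audio-card,bitclock-master = <&link0_codec>;
        simple-audio-card,frame-master = <&link0_codec>;

        /* mclk = 256 * fs */
        simple-audio-card,mclk-fs = <256>;

        /* board-level DAPM widgets and routes to codec pins */
        simple-audio-card,widgets =
            "Headphone", "Headphone Jack";
        simple-audio-card,routing =
            "Headphone Jack", "HPOUT";

        /* jack detection via GPIO */
        simple-audio-card,hp-det-gpio = <&gpio3 4 GPIO_ACTIVE_LOW>;

        simple-audio-card,cpu {
            sound-dai = <&sai1>;
            /* DAI properties such as TDM slots and width */
            dai-tdm-slot-num = <2>;
            dai-tdm-slot-width = <32>;
        };

        link0_codec: simple-audio-card,codec {
            sound-dai = <&example_codec>;
        };
    };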

DAI linking isn't explained in the documentation but was a core part
of my confusion when trying to use these cards: I didn't understand
whether it represented some relationship between physical pins, PCM
channels, etc.
My fuzzy model of a DAI link is that it exists to group the DAIs (and
thus components) necessary to achieve a desired data flow, where that
flow is left up to the components themselves to implement.
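
If that model is right, it matches how audio-graph-card spells a link:
the card node just points at a CPU port, and the link itself is the
endpoint connection between the CPU and codec DAIs. Another
hypothetical sketch, with placeholder labels:

    sound {
        compatible = "audio-graph-card";
        label = "Example Card";
        /* one DAI link per connected CPU port listed here */
        dais = <&cpu_port>;
    };

    &cpu_dai_controller {
        cpu_port: port {
            cpu_ep: endpoint {
                remote-endpoint = <&codec_ep>;
                /* link-level properties live on the endpoints */
                dai-format = "i2s";
            };
        };
    };

    &example_codec {
        port {
            codec_ep: endpoint {
                remote-endpoint = <&cpu_ep>;
            };
        };
    };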

There are a lot of examples of actually using the simple family of
cards, but I imagine documenting the above would help people
troubleshoot or describe their problems better.

Other minor notes:

- sound-name-prefix is very useful and fits in somewhere here (a short
  example follows this list)
- audio-graph-card doesn't support GPIO jacks from what I can see
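
On the sound-name-prefix point: it's a generic property you can put on
a component's node, and ASoC prefixes that component's controls and
DAPM widget names with it, which keeps routing entries unambiguous
when a card contains two identical codecs. Hypothetical labels again:

    &example_codec_left {
        /* widgets/controls become e.g. "Left HPOUT" */
        sound-name-prefix = "Left";
    };

    &example_codec_right {
        sound-name-prefix = "Right";
    };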

I've also noticed that with audio-graph-card you can't use a DAI
multiple times in different cards. This makes sense, but is it due to
the way graphs and endpoints work or due to the underlying sound
system?

This problem and jacks seem to point towards maybe allowing specification
of DAPM widgets in the device tree, and maybe kcontrols for selecting
which codec to use? I'm not sure.

It would also be useful to understand the scope of these simple
cards. Complex audio needs like mine seem to be outside their scope at
the moment and require writing a custom card driver.
Perhaps it's worth taking that as an opportunity to write a driver
using the simple card framework as an example?

John.