On Fri, Jan 05, 2024 at 10:24:18AM +0530, Sameer Pujar wrote:
> 
> On 04-01-2024 22:52, Mark Brown wrote:
> > On Thu, Jan 04, 2024 at 06:07:22PM +0100, Thierry Reding wrote:
> > > On Tue, Dec 26, 2023 at 09:58:02PM +0530, Sameer Pujar wrote:
> > > >               /-----> codec1 endpoint
> > > >              /
> > > > CPU endpoint \
> > > >               \-----> codec2 endpoint
> > > Can you describe the use-case? Is there a need to switch between
> > > codec1 and codec2 endpoints or do they receive the same data in
> > > parallel all the time?
> > > Could this perhaps be described by adding multiple CPU ports with
> > > one endpoint each?
> > Don't know about the specific use case that Sameer is looking at,
> > but to me this looks like a surround sound setup where multiple
> > stereo (or mono) DACs are wired in parallel, either with a TDM setup
> > or with multiple data lines. There are multiple CODECs all taking
> > input from a single host controller.
> 
> Yes, it is a TDM use case where the same clock and data lines are
> shared with multiple CODECs. Each CODEC is expected to pick up data
> based on the allotted TDM slot.
> 
> It is possible to create multiple dummy CPU endpoints and use these in
> the DT binding for each CODEC. I am not sure if this is the best way
> right now. There are a few things to note here with dummy endpoints.
> First, it leads to a bit of duplication of endpoint DAIs and DAI links.
> Please note that the host controller pins are actually shared with the
> external CODECs, so shouldn't DT provide a way to represent this
> connection? Second, ASoC provides a way to represent multiple CODECs on
> a single DAI link in the driver, and my concern is to understand
> whether the present binding can be extended to represent this scenario.
> Third, one user wanted to connect 6 CODECs, which is the maximum
> request I have seen so far. I can expose additional dummy CPU DAIs
> keeping this maximum in mind, but I am not sure whether users would
> like to extend it further. The concern I have is: how can we make this
> easily extendable and simpler to use?
> 
> With custom DT bindings it may be simpler to resolve this, but Tegra
> audio presently relies on the standard graph remote-endpoint binding,
> so I guess diverging from this may not be preferable?

This seems like a legitimate use-case for the graph bindings, but
perhaps one that nobody has run into yet. It might be worth looking into
extending the bindings to account for this.

I think there are two pieces to this. On one hand we have DTC, which
complains about such a setup and which I think is what you were seeing.
It's a bit tricky to update because it checks for bidirectionality of
the endpoints, which is trivial to do with 1:1 but more complicated with
1:N relationships. I've done some prototyping, but I'm not sure if my
test DT is exactly what you need. Can you send a snippet of what your DT
looks like to test the DTC changes against?
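To illustrate what I mean by a 1:N connection, here is a minimal,
made-up sketch (the node and label names are invented, only the
endpoint wiring matters):

  i2s: i2s-controller {
      port {
          cpu_ep: endpoint {
              /* single CPU endpoint fanning out to two CODECs */
              remote-endpoint = <&codec1_ep>, <&codec2_ep>;
          };
      };
  };

  codec1: audio-codec-1 {
      port {
          codec1_ep: endpoint {
              remote-endpoint = <&cpu_ep>;
          };
      };
  };

  codec2: audio-codec-2 {
      port {
          codec2_ep: endpoint {
              remote-endpoint = <&cpu_ep>;
          };
      };
  };

With something like this, the bidirectionality check in DTC would have
to accept that codec1_ep and codec2_ep both point back at the single
cpu_ep rather than requiring a strict 1:1 pairing.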
The other part is the DT schema, which currently restricts the
remote-endpoint property to a single phandle. We would want phandle-array
in this case, with an updated description. Something like this:

--- >8 ---
diff --git a/dtschema/schemas/graph.yaml b/dtschema/schemas/graph.yaml
index bca450514640..1459b88b9b77 100644
--- a/dtschema/schemas/graph.yaml
+++ b/dtschema/schemas/graph.yaml
@@ -42,8 +42,9 @@ $defs:
 
     remote-endpoint:
       description: |
-        phandle to an 'endpoint' subnode of a remote device node.
-      $ref: /schemas/types.yaml#/definitions/phandle
+        A list of phandles to 'endpoint' subnodes of one or more remote
+        device nodes.
+      $ref: /schemas/types.yaml#/definitions/phandle-array
 
   port-base:
     type: object
--- >8 ---

Thierry