Hi!

On Fri, Aug 24, 2018 at 05:51:37PM +0300, Georgi Djakov wrote:
> Hi Maxime,
>
> On 08/20/2018 06:32 PM, Maxime Ripard wrote:
> > Hi Georgi,
> >
> > On Tue, Aug 07, 2018 at 05:54:38PM +0300, Georgi Djakov wrote:
> >>> There is also a patch series from Maxime Ripard that's addressing
> >>> the same general area. See "dt-bindings: Add a dma-parent
> >>> property". We don't need multiple ways to address describing the
> >>> device to memory paths, so you all had better work out a common
> >>> solution.
> >>
> >> Looks like this fits exactly into the interconnect API concept. I
> >> see MBUS as interconnect provider and display/camera as consumers,
> >> that report their bandwidth needs. I am also planning to add
> >> support for priority.
> >
> > Thanks for working on this. After looking at your serie, the one
> > thing I'm a bit uncertain about (and the most important one to us)
> > is how we would be able to tell through which interconnect the DMA
> > are done.
> >
> > This is important to us since our topology is actually quite simple
> > as you've seen, but the RAM is not mapped on that bus and on the
> > CPU's, so we need to apply an offset to each buffer being DMA'd.
>
> Ok, i see - your problem is not about bandwidth scaling but about
> using different memory ranges by the driver to access the same
> location.

Well, it turns out that the problem we are bitten by at the moment is
the memory range one, but the controller it goes through also provides
bandwidth scaling, priorities and so on, so it's not too far off.

> So this is not really the same and your problem is different. Also
> the interconnect bindings are describing a path and endpoints.
> However i am open to any ideas.

It's describing a path and endpoints, but it can describe multiple of
them for the same device, right? If so, we'd need to provide additional
information to distinguish which path is used for DMA.
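
To make the question a bit more concrete, here is a rough devicetree
sketch, not taken from either patch series; the node names, compatibles,
port IDs and the "dma-mem" path name are only illustrative. It shows
both sides of the problem: the address translation on the bus the DMA
goes through, and a consumer with more than one path through the same
provider, where some extra piece of information is needed to say which
path is the DMA one.

    mbus: dram-controller@1c62000 {
            compatible = "allwinner,sun8i-h3-mbus";
            reg = <0x01c62000 0x1000>;
            #interconnect-cells = <1>;
            /* RAM is not mapped at the same address on this bus as
             * on the CPUs, so child DMA addresses need a translation.
             * The offset and size here are just an example.
             */
            dma-ranges = <0x00000000 0x40000000 0xc0000000>;
    };

    csi: camera@1cb0000 {
            compatible = "allwinner,sun8i-h3-csi";
            reg = <0x01cb0000 0x1000>;
            /* Two paths through the same provider; naming one of
             * them (e.g. "dma-mem") is the kind of additional
             * information that would let the OS tell which path the
             * device's DMA actually goes through.
             */
            interconnects = <&mbus 1>, <&mbus 2>;
            interconnect-names = "dma-mem", "ctrl";
    };

Whether the DMA path should be identified by a well-known path name, by
a dedicated property like dma-parent, or by something else entirely is
exactly the open question above.

Maxime

--
Maxime Ripard, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com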