Re: [RFC PATCH] [media]: of: move graph helpers from drivers/media/v4l2-core to drivers/of

Hi Russell and Tomi,

On Wednesday 12 March 2014 12:47:09 Tomi Valkeinen wrote:
> On 12/03/14 12:25, Russell King - ARM Linux wrote:
> > On Mon, Mar 10, 2014 at 02:52:53PM +0100, Laurent Pinchart wrote:
> >> In theory unidirectional links in DT are indeed enough. However, let's
> >> not forget the following.
> >> 
> >> - There's no such thing as a single start point for a graph. Sure, in some
> >> simple cases the graph will have a single start point, but that's not a
> >> generic rule. For instance the camera graphs
> >> http://ideasonboard.org/media/omap3isp.ps and
> >> http://ideasonboard.org/media/eyecam.ps have two camera sensors, and thus
> >> two starting points from a data flow point of view.
> > 
> > I think we need to stop thinking of the graph as being linked in terms
> > of data flow - that's really not useful.
> > 
> > Consider a display subsystem.  The CRTC is the primary interface for
> > the CPU - this is the "most interesting" interface, it's the interface
> > which provides access to the picture to be displayed for the CPU.  Other
> > interfaces are secondary to that purpose - reading the I2C DDC bus for
> > the display information is all secondary to the primary purpose of
> > displaying a picture.
> > 
> > For a capture subsystem, the primary interface for the CPU is the frame
> > grabber (whether it be an already encoded frame or not.)  The sensor
> > devices are all secondary to that.
> > 
> > So, the primary software interface in each case is where the data for
> > the primary purpose is transferred.  This is the point at which these
> > graphs should commence since this is where we would normally start
> > enumeration of the secondary interfaces.
> > 
> > V4L2 even provides interfaces for this: you open the capture device,
> > which then allows you to enumerate the capture device's inputs, and
> > this in turn allows you to enumerate their properties.  You don't open
> > a particular sensor and work back up the tree.
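
For reference, the enumeration flow Russell describes looks roughly like this 
from userspace (a minimal sketch; the device path and the absence of error 
reporting are for illustration only):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	struct v4l2_input input;
	int fd = open("/dev/video0", O_RDWR);	/* the capture device */

	if (fd < 0)
		return 1;

	memset(&input, 0, sizeof(input));
	/* enumerate the capture device's inputs, not the sensors */
	while (ioctl(fd, VIDIOC_ENUMINPUT, &input) == 0) {
		printf("input %u: %s\n", input.index, (char *)input.name);
		input.index++;
	}
	return 0;
}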

Please note that this partly changed a couple of years ago with the 
introduction of the media controller framework. Userspace now opens a logical 
media device that describes the topology of the hardware, and then accesses 
individual components directly, from sensor to DMA engine.
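
The entry point is then the media device node rather than a video node; 
enumerating the topology looks roughly like this (again a sketch, with 
/dev/media0 as an assumed path):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

int main(void)
{
	struct media_entity_desc desc;
	int fd = open("/dev/media0", O_RDWR);	/* logical media device */

	if (fd < 0)
		return 1;

	memset(&desc, 0, sizeof(desc));
	desc.id = MEDIA_ENT_ID_FLAG_NEXT;
	/* walk all entities: sensors, ISP blocks, DMA engines, ... */
	while (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &desc) == 0) {
		printf("entity %u: %s\n", desc.id, desc.name);
		desc.id |= MEDIA_ENT_ID_FLAG_NEXT;
	}
	return 0;
}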

> We do it the other way around in OMAP DSS. It's the displays the user is
> interested in, so we enumerate the displays, and if the user wants to
> enable a display, we then follow the links from the display towards the
> SoC, configuring and enabling the components on the way.

The logical view of a device from the CPU's perspective evolves over time, 
as APIs are refactored or created to support new hardware that comes with 
new paradigms and additional complexity. The hardware data flow direction, 
however, doesn't change. Concluding that DT should model only the data flow 
direction is tempting, but probably too hasty: even if DT is supposed to 
model the hardware, it ends up modeling a logical view of that hardware, 
and is thus not as fixed as one might believe.

In the particular case of display devices, I believe that using the data 
flow direction for links (assuming we can't use bidirectional links in DT) 
is a good model. It would allow parsing the whole graph at a reasonable 
cost (still higher than with bidirectional links) while making it clear how 
to represent links. Let's not forget that with more complex devices not all 
components can be referenced directly from the CPU-side display controller.
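
To make the cost argument concrete, walking the graph outward from the 
display controller could look roughly as follows with the helpers this RFC 
moves to drivers/of (a sketch only; it assumes the links follow the data 
flow direction and that the graph is acyclic):

#include <linux/of.h>
#include <linux/of_graph.h>

static void walk_graph(struct device_node *local)
{
	struct device_node *ep = NULL;

	/* iterate over every endpoint below this device's port nodes */
	while ((ep = of_graph_get_next_endpoint(local, ep))) {
		/* follow the remote-endpoint phandle in the data flow
		 * direction */
		struct device_node *remote =
			of_graph_get_remote_port_parent(ep);

		if (!remote)
			continue;

		pr_info("%s -> %s\n", local->full_name, remote->full_name);

		/* recurse to reach components that aren't referenced
		 * directly from the CPU-side controller */
		walk_graph(remote);
		of_node_put(remote);
	}
}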

> I don't have a strong opinion on the direction, I think both have their
> pros. In any case, that's more of a driver model thing, and I'm fine
> with linking in the DT outwards from the SoC (presuming the
> double-linking is not ok, which I still like best).
> 
> > I believe trying to do this according to the flow of data is just wrong.
> > You should always describe things from the primary device for the CPU
> > towards the peripheral devices and never the opposite direction.
> 
> In that case there's possibly the issue I mentioned in another email in
> this thread: an encoder can be used in both a display and a capture
> pipeline. Describing the links outwards from CPU means that sometimes
> the encoder's input port is pointed at, and sometimes the encoder's
> output port is pointed at.
> 
> That's possibly ok, but I think Grant was of the opinion that things
> should be explicitly described in the binding documentation: either a
> device's port must contain a 'remote-endpoint' property, or it must not,
> but no "sometimes". But maybe I took his words too literally.
> 
> Then there's also the audio example Philipp mentioned, where there is no
> clear "outward from Soc" direction for the link, as the link was
> bi-directional and between two non-SoC devices.

Even if the link were unidirectional, the "outward from SoC" direction 
isn't always defined for links between non-SoC devices.

-- 
Regards,

Laurent Pinchart
