Re: [PATCH V7 11/12] Documentation: bridge: Add documentation for ps8622 DT properties

On Mon, Sep 22, 2014 at 05:42:41PM +0300, Tomi Valkeinen wrote:
> On 22/09/14 11:26, Thierry Reding wrote:
> > On Fri, Sep 19, 2014 at 05:28:37PM +0300, Tomi Valkeinen wrote:
> >> On 19/09/14 16:59, Ajay kumar wrote:
> >>
> >>> I am not really able to understand what's stopping us from using this
> >>> bridge on a board with "complex" display connections. To use the ps8622
> >>> driver, one needs to "attach" it to the DRM framework. For this, the
> >>> DRM driver
> >>
> >> Remember that when we talk about DT bindings, there's no such thing as
> >> DRM. We talk about hardware. The same bindings need to work on any
> >> operating system.
> >>
> >>> would need the DT node for ps8622 bridge. For which I use a phandle.
> >>
> >> A complex one could be for example a case where you have two different
> >> panels connected to ps8622, and you can switch between the two panels
> >> with, say, a gpio. How do you present that with a simple phandle?
> > 
> > How do you represent that with a graph? Whether you describe it using a
> > graph or a simple phandle you'll need additional nodes somewhere in
> > between. Perhaps something like this:
> > 
> > 	mux: ... {
> > 		compatible = "gpio-mux-bridge";
> > 
> > 		gpio = <&gpio 42 GPIO_ACTIVE_HIGH>;
> > 
> > 		panel@0 {
> > 			panel = <&panel0>;
> > 		};
> > 
> > 		panel@1 {
> > 			panel = <&panel1>;
> > 		};
> > 	};
> > 
> > 	ps8622: ... {
> > 		bridge = <&mux>;
> > 	};
> > 
> > 	panel0: ... {
> > 		...
> > 	};
> > 
> > 	panel1: ... {
> > 		...
> > 	};
> 
> Yes, it's true we need a mux there. But we still have the complication
> that for panel0 we may need different ps8622 settings than for panel1.

Yes, and that's why the bridge should be querying the panel for the
information it needs to determine the settings.
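
For example, the two panel nodes from the mux sketch above could carry
the panel-specific bits themselves (the compatible strings and the
supply/backlight phandles below are placeholders, not real bindings),
and the ps8622 driver would then get whatever it needs from the
respective panel driver at runtime instead of duplicating per-panel
settings in its own node:

	panel0: panel {
		compatible = "vendor,panel-a";	/* placeholder */
		power-supply = <&panel_a_supply>;
		backlight = <&backlight0>;
	};

	panel1: panel {
		compatible = "vendor,panel-b";	/* placeholder */
		power-supply = <&panel_b_supply>;
		backlight = <&backlight1>;
	};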

> If that's the case, then I think we'd need to have two output endpoints
> in ps8622, both going to the mux, and each having the settings for the
> respective panel.

But we'd be lying in DT. It no longer describes the hardware properly.
The device only has a single input and a single output with no means to
mux anything. Hence the device tree shouldn't be faking multiple inputs
or outputs.

> >>> If some XYZ platform wishes to pick the DT node via a different method,
> >>> they are always welcome to do it. Just because I am not specifying a
> >>> video port/endpoint in the DT binding example, would that mean the platform
> >>> cannot make use of ports in future? If that is the case, I can add something
> >>
> >> All the platforms share the same bindings for ps8622. If you now specify
> >> that ps8622 bindings use a simple phandle, then anyone who uses ps8622
> >> should support that.
> >>
> >> Of course the bindings can be extended in the future. In that case the
> >> drivers need to support both the old and the new bindings, which is
> >> always a hassle.
> >>
> >> Generally speaking, I sense that we have different views of how display
> >> devices and drivers are structured. You say "If some XYZ platform wishes
> >> to pick the DT node via a different method, they are always welcome to
> >> do it.". This sounds to me like you see the connections between display
> >> devices as something handled by a platform-specific driver.
> >>
> >> I, on the other hand, see connections between display devices as common
> >> properties.
> >>
> >> Say, we could have a display board, with a panel and an encoder and
> >> maybe some other components, which takes parallel RGB as input. The same
> >> display board could as well be connected to an OMAP board or to an
> >> Exynos board.
> >>
> >> I think the exact same display-board.dtsi file, which describes the
> >> devices and connections in the display board, should be usable on both
> >> OMAP and Exynos platforms. This means we need to have a common way to
> >> describe video devices, just as we have for other things.
> > 
> > A common way to describe devices in DT isn't going to give you that. The
> > device tree is completely missing any information about how to access an
> > extension board like that. The operating system defines the API by which
> > the board can be accessed. I imagine that this would work by making the
> > first component of the board a bridge of some sort and then every driver
> > that supports that type of bridge (ideally just a generic drm_bridge)
> > would also work with that display board.
> 
> I'm not sure I follow.
> 
> Obviously there needs to be board specific .dts file that brings the
> board and the display board together. So, say, the display-board.dtsi
> has a i2c touchscreen node, but the board.dts will tell which i2c bus
> it's connected to.
> 
> Well, now as I wrote that, I wonder if that's possible... A node needs
> to have a parent, and for i2c that must be the i2c master. Is that
> something the DT overlays/fragments or such are supposed to handle?
> 
> But let's only think about the video path for now. We could have an
> encoder and a panel on the board. We could describe the video path
> between the encoder and the panel in the display-board.dts as that is
> fixed. Then all that's needed in the board.dts is to connect the board's
> video output to the encoder's input with the video graph. Why would that
> not work?

My point is that the video graph isn't the solution to that problem.
Having an OS abstraction for the devices involved is. DT is only the
means to connect those devices.

> Sure, there's more that's needed. Common encoder and panel drivers for
> one. But it all starts with a common way to describe the video devices
> and the connections in the DT. If we don't have that, we don't have
> anything.

I don't think we need to have a common way to describe video devices. In
my opinion DT bindings are much better if they are specifically tailored
towards the device that they describe. We'll provide a driver for that
device anyway, so we should be creating appropriate abstractions at the
OS level to properly handle them.

To stay with the example of the board/display, I'd think that the final
component of the board DT would implement a bridge. The first component
of the display DT would similarly implement a bridge. Now if we have a
way of chaining bridges and controlling a chain of bridges, then there
is no need for anything more complex than a plain phandle in a property
from the board bridge to the display bridge.
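
Roughly like this, with the node contents elided and the property names
only illustrative (mirroring the earlier mux example):

	/* last component on the main board */
	board_bridge: ... {
		...
		bridge = <&display_bridge>;
	};

	/* first component on the display board, e.g. from display-board.dtsi */
	display_bridge: ... {
		...
		panel = <&panel>;
	};

	panel: ... {
		...
	};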

> > Whether this is described using a single phandle to the bridge or a more
> > complicated graph is irrelevant. What matters is that you get a phandle
> > to the bridge. The job of the operating system is to give drivers a way
> > to resolve that phandle to some object and provide an API to access that
> > object.
> 
> I agree it's not relevant whether we use a simple phandle or a complex
> graph. What matters is that we have a standard way to express the video
> paths, one that everybody uses.

Not necessarily. Consistency is always good, but I think simplicity
trumps consistency. What matters isn't how the phandle is referenced in
the device tree; what matters is that it is referenced and that it makes
sense in the context of the specific device. Anything else is the job of
the OS.
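
To make that concrete, both of the following describe the same
ps8622-to-panel link; the first uses a plain phandle (the exact property
name aside), the second the port/endpoint notation from the video graph
bindings:

	/* option A: plain phandle */
	ps8622: ... {
		panel = <&panel>;
	};

	/* option B: video graph (ports/endpoints) */
	ps8622: ... {
		port {
			ps8622_out: endpoint {
				remote-endpoint = <&panel_in>;
			};
		};
	};

	panel: ... {
		port {
			panel_in: endpoint {
				remote-endpoint = <&ps8622_out>;
			};
		};
	};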

While there are probably legitimate cases where the video graph is
useful and makes sense, in many cases terms like ports and endpoints are
simply confusing.

Thierry
