On Mon, Jan 15, 2024 at 09:28:39AM +0100, Maxime Ripard wrote:
> On Fri, Jan 12, 2024 at 03:42:18PM -0800, Anatoliy Klymenko wrote:
> > Patches 1/4, 2/4 and 3/4 are minor fixes.
> >
> > Patch 4/4: The DP Subsystem requires the input live video format to
> > be configured. In this patch we are assuming that the CRTC's bus
> > format is fixed and comes from the device tree. This is a proposed
> > solution, as there is no API to query the CRTC output bus format.
> >
> > Is this a good approach to go with?
>
> I guess you would need to expand a bit on what "live video input" is?
> Is it some kind of mechanism to bypass memory and take your pixels
> straight from a FIFO from another device, or something else?

Yes and no. The DPSUB integrates DMA engines, a blending engine (two
planes), and a DP encoder. The dpsub driver supports all of this, and
creates a DRM device. The DP encoder hardware always takes its input
data from the output of the blending engine. The blending engine can
optionally take input data from a bus connected to the FPGA fabric,
instead of taking it from the DPSUB internal DMA engines. When operating
in that mode, the dpsub driver exposes the DP encoder as a bridge, and
internally programs the blending engine to disable blending. Typically,
the FPGA fabric will then contain a CRTC of some sort, with a driver
that will acquire the DP encoder bridge as usually done.

In this mode of operation, it is typical for the IP cores in the FPGA
fabric to be synthesized with a fixed format (as that saves resources),
while the DPSUB supports multiple input formats. Bridge drivers in the
upstream kernel work the other way around, with the bridge hardware
supporting a limited set of formats, and the CRTC then being programmed
with whatever the bridge chain needs. Here, the negotiation needs to go
the other way around, as the CRTC is the limiting factor, not the
bridge.

Is this explanation clear?
-- 
Regards,

Laurent Pinchart