Re: [PATCH 0/4] Fixing live video input in ZynqMP DPSUB

Hi Laurent and Maxime,

Laurent, thank you very much for the clear and comprehensive description of the "live video input" feature.

Maxime, sure, I will elaborate more in the next version of the cover letter.

> Date: Wed, 17 Jan 2024 16:23:43 +0200
> From: Laurent Pinchart <laurent.pinchart@xxxxxxxxxxxxxxxx>
> To: Maxime Ripard <mripard@xxxxxxxxxx>
> Cc: Anatoliy Klymenko <anatoliy.klymenko@xxxxxxx>,
>         maarten.lankhorst@xxxxxxxxxxxxxxx, tzimmermann@xxxxxxx,
>         airlied@xxxxxxxxx, daniel@xxxxxxxx, michal.simek@xxxxxxx,
>         dri-devel@xxxxxxxxxxxxxxxxxxxxx, linux-arm-kernel@xxxxxxxxxxxxxxxxxxx,
>         linux-kernel@xxxxxxxxxxxxxxx
> Subject: Re: [PATCH 0/4] Fixing live video input in ZynqMP DPSUB
> Message-ID: <20240117142343.GD17920@xxxxxxxxxxxxxxxxxxxxxxxxxx>
> Content-Type: text/plain; charset=utf-8
> 
> On Mon, Jan 15, 2024 at 09:28:39AM +0100, Maxime Ripard wrote:
> > On Fri, Jan 12, 2024 at 03:42:18PM -0800, Anatoliy Klymenko wrote:
> > > Patches 1/4,2/4,3/4 are minor fixes.
> > >
> > > Patch 4/4: The DP Subsystem requires the input live video format to
> > > be configured. In this patch we assume that the CRTC's bus format
> > > is fixed and comes from the device tree. This is a proposed
> > > solution, as there is no API to query the CRTC output bus format.
> > >
> > > Is this a good approach to go with?
> >
> > I guess you would need to expand a bit on what "live video input" is?
> > Is it some kind of mechanism to bypass memory and take your pixels
> > straight from a FIFO from another device, or something else?
> 
> Yes and no.
> 
> The DPSUB integrates DMA engines, a blending engine (two planes), and a DP
> encoder. The dpsub driver supports all of this, and creates a DRM device. The DP
> encoder hardware always takes its input data from the output of the blending
> engine.
> 
> The blending engine can optionally take input data from a bus connected to the
> FPGA fabric, instead of taking it from the DPSUB internal DMA engines. When
> operating in that mode, the dpsub driver exposes the DP encoder as a bridge, and
> internally programs the blending engine to disable blending. Typically, the FPGA
> fabric will then contain a CRTC of some sort, with a driver that acquires the DP
> encoder bridge as usual.
> 
> In this mode of operation, it is typical for the IP cores in FPGA fabric to be
> synthesized with a fixed format (as that saves resources), while the DPSUB
> supports multiple input formats. Bridge drivers in the upstream kernel work the
> other way around, with the bridge hardware supporting a limited set of formats,
> and the CRTC then being programmed with whatever the bridge chain needs.
> Here, the negotiation needs to go the other way around, as the CRTC is the
> limiting factor, not the bridge.
> 
> Is this explanation clear?
> 
> --
> Regards,
> 
> Laurent Pinchart
> 
> 

Thank you,
Anatoliy



