Re: [RFC PATCH 00/16] drm/rockchip: Rockchip EBC ("E-Book Controller") display driver

Hi Daniel,

Thanks for your feedback.

On Wed, May 25, 2022 at 07:18:07PM +0200, Daniel Vetter wrote:
> > > VBLANK Events and Asynchronous Commits
> > > ======================================
> > > When should the VBLANK event complete? When the pixels have been blitted
> > > to the kernel's shadow buffer? When the first frame of the waveform is
> > > sent to the panel? When the last frame is sent to the panel?
> > > 
> > > Currently, the driver is taking the first option, letting
> > > drm_atomic_helper_fake_vblank() send the VBLANK event without waiting on
> > > the refresh thread. This is the only way I was able to get good
> > > performance with existing userspace.
> > 
> > I've been having the same kind of discussions in private lately, so I'm
> > interested by the answer as well :)
> > 
> > It would be worth looking into the SPI/I2C panels for this, since it's
> > basically the same case.
> 
> So it's maybe a bit misnamed and maybe kerneldocs aren't super clear (pls
> help improve them), but there's two modes:
> 
> - drivers which have vblank, which might be somewhat variable (VRR) or
>   become simulated (self-refresh panels), but otherwise is a more-or-less
>   regular clock. For this case the atomic commit event must match the
>   vblank events exactly (frame count and timestamp)

Part of my question here is whether there is any expectation that, when
we commit, the next vblank will be the one matching that commit, or
whether we are allowed to defer it by an arbitrary number of frames
(provided that the frame count and timestamps are correct)?

> - drivers which don't have vblank at all, mostly these are i2c/spi panels
>   or virtual hw and stuff like that. In this case the event simply happens
>   when the driver is done with refresh/upload, and the frame count should
>   be zero (since it's meaningless).
> 
> Unfortunately the helper to dtrt has fake_vblank in its name; maybe it
> should be renamed to no_vblank or so (the various flags that control it
> are a bit better named).
> 
> Again the docs should explain it all, but maybe we should clarify them or
> perhaps rename that helper to be more meaningful.
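For drivers in the second category, my understanding is that the usual
pattern (as in the tinydrm-style SPI panel drivers) is to complete the
event directly at the end of the display update rather than from any
vblank machinery. A minimal sketch, assuming a drm_simple_display_pipe
driver (function and variable names are illustrative):

```c
/* Sketch: a panel with no real vblank completes the commit event
 * itself, once the upload to the panel is done. */
static void panel_pipe_update(struct drm_simple_display_pipe *pipe,
			      struct drm_plane_state *old_state)
{
	struct drm_crtc *crtc = &pipe->crtc;

	/* ... blit/upload the damaged region to the panel here ... */

	if (crtc->state->event) {
		spin_lock_irq(&crtc->dev->event_lock);
		drm_crtc_send_vblank_event(crtc, crtc->state->event);
		spin_unlock_irq(&crtc->dev->event_lock);
		crtc->state->event = NULL;
	}
}
```

With this pattern the event fires when the upload finishes, and the
fake-vblank helper never has to send it.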
> 
> > > Blitting/Blending in Software
> > > =============================
> > > There are multiple layers to this topic (pun slightly intended):
> > >  1) Today's userspace does not expect a grayscale framebuffer.
> > >     Currently, the driver advertises XRGB8888 and converts to Y4
> > >     in software. This seems to match other drivers (e.g. repaper).
> > >
> > >  2) Ignoring what userspace "wants", the closest existing format is
> > >     DRM_FORMAT_R8. Geert sent a series[4] adding DRM_FORMAT_R1 through
> > >     DRM_FORMAT_R4 (patch 9), which I believe are the "correct" formats
> > >     to use.
> > > 
> > >  3) The RK356x SoCs have an "RGA" hardware block that can do the
> > >     RGB-to-grayscale conversion, and also RGB-to-dithered-monochrome
> > >     which is needed for animation/video. Currently this is exposed with
> > >     a V4L2 platform driver. Can this be inserted into the pipeline in a
> > >     way that is transparent to userspace? Or must some userspace library
> > >     be responsible for setting up the RGA => EBC pipeline?
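On (1) and (2), for reference, the software XRGB8888-to-Y4 blit is
essentially a luma conversion plus 4-bit packing. A rough sketch (not
the driver's actual code; the luma coefficients and the nibble order
are illustrative choices):

```c
#include <stdint.h>
#include <stddef.h>

/* Convert XRGB8888 pixels to packed Y4 greyscale, two pixels per
 * byte, using integer BT.601-style luma weights (77 + 151 + 28 = 256,
 * so the shift by 8 normalizes). Low nibble = even pixel, high
 * nibble = odd pixel; real hardware may expect the opposite order. */
static void xrgb8888_to_y4(uint8_t *dst, const uint32_t *src,
			   size_t pixels)
{
	for (size_t i = 0; i < pixels; i += 2) {
		uint8_t y[2] = { 0, 0 };

		for (size_t j = 0; j < 2 && i + j < pixels; j++) {
			uint32_t px = src[i + j];
			uint32_t r = (px >> 16) & 0xff;
			uint32_t g = (px >> 8) & 0xff;
			uint32_t b = px & 0xff;

			/* 8-bit luma, then keep the top 4 bits */
			y[j] = ((r * 77 + g * 151 + b * 28) >> 8) >> 4;
		}
		dst[i / 2] = y[0] | (y[1] << 4);
	}
}
```

So a DRM_FORMAT_R4 framebuffer would let userspace hand over exactly
this packed layout and skip the conversion entirely.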
> > 
> > I'm very interested in this answer as well :)
> > 
> > I think the current consensus is that it's up to userspace to set this
> > up though.
> 
> Yeah I think v4l mem2mem device is the answer for these, and then
> userspace gets to set it all up.

I think the question wasn't really about where that driver should
live, but more about who gets to set it up: could the kernel have some
component that exposes the formats supported by the converter and,
whenever a commit is done, pipes the buffer through the v4l2 device
before doing the page flip?
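To be clear, the pure userspace wiring does work today; for example,
with a mem2mem converter driver something like the following does a
one-frame conversion (the device node, resolution, and pixel formats
here are illustrative and system-dependent):

```shell
# Illustrative only: /dev/video0, the sizes, and the fourccs depend
# on the system and on what the mem2mem device actually supports.
v4l2-ctl -d /dev/video0 \
        --set-fmt-video-out=width=1872,height=1404,pixelformat=RGB3 \
        --set-fmt-video=width=1872,height=1404,pixelformat=GREY \
        --stream-out-mmap --stream-from=frame-rgb.raw \
        --stream-mmap --stream-to=frame-grey.raw --stream-count=1
```

What's missing is any way to make that hop transparent to a KMS
client, which is the part I'm wondering about below.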

We have a similar use-case for the RaspberryPi where the hardware
codec will produce a framebuffer format that isn't standard. That
format is understood by the display pipeline, and it can do
writeback.

However, some people are using a separate display (like a SPI display
supported by tinydrm) and we would still like to be able to output the
decoded frames there.

Is there some way we could plumb things to "route" that buffer through
the writeback engine to perform a format conversion before sending it
over to the SPI display automatically?

Maxime



