Re: [PATCH 00/20] drm: Split out the formats API and move it to a common place

Hi,

On Tuesday, 23 April 2019 at 17:02 +0100, Daniel Stone wrote:
> Hi Laurent,
> 
> On Tue, 23 Apr 2019 at 16:54, Laurent Pinchart
> <laurent.pinchart@xxxxxxxxxxxxxxxx> wrote:
> > On Tue, Apr 23, 2019 at 09:59:37AM +0100, Daniel Stone wrote:
> > > On Tue, 23 Apr 2019 at 08:26, Daniel Vetter <daniel@xxxxxxxx> wrote:
> > > Totally. Let's take DRM_FORMAT_XRGB8888 + I915_FORMAT_MOD_Y_TILED as
> > > an example. [... details ...]
> > 
> > Looks like we have different kinds of metadata to consider. On the V4L2
> > side, metadata usually refers to the context in which a frame was
> > captured; it doesn't tell you how to interpret the contents of the pixels.
> > In the case you just described, the metadata is part of the frame
> > contents. I agree that this is a proper use case for storing such
> > metadata in a plane. What I wouldn't like to see being stored in a plane
> > is for instance gamma tables or similar data that configures the
> > processing pipeline in the display engine (I know we have an API for
> > gamma tables, this is just an example).
> > 
> > > It would be good to understand what you had in mind when you said that
> > > using multiple planes created a mess. I haven't touched media
> > > encode/decode units at a low level for quite a while (hooray for
> > > gst-v4l2!), but I remember that they often used padding areas around
> > > the buffer for scratch space - maybe motion vectors or similar? That
> > > case is quite different to something like CCS, since the data is only
> > > meaningful to the media engine and must be ignored (but preserved) by
> > > everyone else. Using multiple planes in that case isn't appropriate,
> > > since it's very specific to how that hardware unit deals with that
> > > buffer, instead of something that every consumer needs to understand
> > > in order to use it.
> > 
> > With metadata unrelated to the pixel content, using a separate plane in
> > the same buffer resulted in an explosion of the number of combinations
> > that we would need to support, and ultimately led to a very ill-defined
> > API. We decided to convey metadata related to the frame capture context
> > (e.g. what exposure time was used for the frame) and processing pipeline
> > configuration data in different buffers than the frame itself.
> 
> Yeah, that makes sense. It's not really that different from what
> happens with GPUs either: the final colour buffer the display
> controller gets from a game is the product of a _lot_ of other work
> which is invisible to the display controller, including things like
> depth and stencil buffers, command buffers, etc etc. Those are closely
> related to the frame production, but totally irrelevant for exchanging
> the colour buffer with other subsystems.
> 
> I think we should look at the metadata buffers you're describing in
> the same way. Perhaps each V4L2 buffer could have driver-private
> auxiliary buffer storage, or perhaps it's something you need to
> separately expose to userspace as auxiliary data which must be
> preserved but ignored. But modifiers are really only about what you
> need when exchanging image colour buffers between subsystems, not
> anything else.
> 
> You're pretty close with gamma tables as well; for HDR and other kinds
> of complex colour management, we need to carry a fair bit of auxiliary
> information in order to display the image correctly. These have quite
> different uses though: normally the colour buffer is produced by the
> hardware and the primaries/whitepoints/etc are produced by software,
> with the colour-management details remaining static across the life of
> a swapchain even as you flip between multiple buffers. Given that, it
> doesn't really make sense to try to stuff them into the same storage.

I agree that we need to keep things minimal and to clearly distinguish
between the description of the pixel buffer (how it is laid out in
memory) and information about how its data should be interpreted.

And there is indeed a fair share of things to consider there. Adding to
the list above, I'm also thinking of the YUV colorspace information,
which must be passed along when displaying a buffer.

But none of that is strictly required to have a common description of
buffers unified across subsystems. Thinking about it, it would be
interesting to have a common base structure for buffers used in both
V4L2 and DRM. Ideally, that description could be shared along with the
dma-buf, to avoid having userspace describe the buffer and its memory
layout again each time it is imported into another subsystem. This
could also help us resolve the ambiguity around the M-suffixed YUV
formats. Another idea would be common ioctls to retrieve a unified
buffer description from userspace, and e.g. mmap on a per-plane basis
(with virtual mappings like DRM does).

What do you think?

Cheers,

Paul



