Re: [RFC]: shmem fd for non-DMA buffer sharing cross drivers

On 8/25/23 15:40, Pekka Paalanen wrote:

On Wed, 23 Aug 2023 15:11:23 +0800
Hsia-Jun Li <Randy.Li@xxxxxxxxxxxxx> wrote:

On 8/23/23 12:46, Tomasz Figa wrote:


Hi Hsia-Jun,

On Tue, Aug 22, 2023 at 8:14 PM Hsia-Jun Li <Randy.Li@xxxxxxxxxxxxx> wrote:
Hello

I would like to introduce a usage of SHMEM similar to DMA-buf; the major
purpose of it is sharing metadata, or just acting as a pure container,
across drivers.

We need to exchange some sort of metadata between drivers, like dynamic
HDR data between video4linux2 and DRM.
If the metadata isn't too big, would it be enough to just have the
kernel copy_from_user() to a kernel buffer in the ioctl code?
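For illustration, roughly something like this; all names below are made
up and the sketch is untested:

#include <linux/types.h>
#include <linux/uaccess.h>

/* Hypothetical fixed-size metadata blob and driver context. */
struct example_hdr_metadata {
        u8 data[1024];
};

struct example_ctx {
        struct example_hdr_metadata hdr_metadata;
};

/* Called from the driver's ioctl handler with the userspace pointer. */
static long example_set_hdr_metadata(struct example_ctx *ctx,
                                     void __user *arg)
{
        if (copy_from_user(&ctx->hdr_metadata, arg,
                           sizeof(ctx->hdr_metadata)))
                return -EFAULT;

        return 0;
}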
Or the graphics frame buffer is too complex to be described with a plain
DMA-buf fd per plane.
An issue between DRM and V4L2 is that DRM can only support 4 planes
while V4L2 supports 8. It would be pretty hard for DRM to expand its
interface to support those 4 extra planes, which would lead to revising
many standards like Vulkan and EGL.
Could you explain how a shmem buffer could be used to support frame
buffers with more than 4 planes?
If you are asking why we need this:
1. Metadata like dynamic HDR tone data.
2. DRM also struggles with this problem; let me quote what sima said (a
sketch of the trick mentioned here follows this list):
"another trick that we iirc used for afbc is that sometimes the planes
have a fixed layout
like nv12
and so logically it's multiple planes, but you only need one plane slot
to describe the buffer
since I think afbc had the "we need more than 4 planes" issue too"

Unfortunately, there are vendor pixel formats that do not have a fixed
layout.

3. Secure (REE, trusted video pipeline) info.
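
To spell out the fixed-layout trick from sima's quote in point 2: for
NV12 the chroma plane's placement is implied by the format, so a single
plane slot (one fd, known stride and height) is enough to describe both
logical planes. A rough illustration with made-up names:

#include <stddef.h>
#include <stdint.h>

struct nv12_offsets {
        size_t luma_offset;
        size_t chroma_offset;   /* implied by the format, no second slot needed */
        size_t total_size;
};

static struct nv12_offsets compute_nv12_offsets(uint32_t height,
                                                uint32_t stride)
{
        struct nv12_offsets o;

        o.luma_offset = 0;
        o.chroma_offset = (size_t)stride * height;              /* Y plane ends here */
        o.total_size = o.chroma_offset + (size_t)stride * height / 2;  /* + CbCr */
        return o;
}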

As for how to attach such metadata:
In the case of a DRM fb_id it is simple, we just add a DRM plane property
for it. The V4L2 interface is not as flexible; we could only pass it via
the CAPTURE queue's request_fd as a control.
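For example, assuming a (purely hypothetical) per-plane property existed
for this, userspace could attach the metadata fd in an atomic commit
roughly like:

#include <errno.h>
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

/* prop_id would be looked up by name via drmModeObjectGetProperties();
 * the property itself ("HDR_METADATA_FD" or similar) is hypothetical. */
static int set_plane_metadata_fd(int drm_fd, uint32_t plane_id,
                                 uint32_t prop_id, int metadata_fd)
{
        drmModeAtomicReq *req = drmModeAtomicAlloc();
        int ret;

        if (!req)
                return -ENOMEM;

        drmModeAtomicAddProperty(req, plane_id, prop_id,
                                 (uint64_t)metadata_fd);
        ret = drmModeAtomicCommit(drm_fd, req, 0, NULL);
        drmModeAtomicFree(req);
        return ret;
}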
Also, there is no reason to consume a device's memory for content that
the device can't read, or to waste an IOMMU entry on such data.
That's right, but DMA-buf doesn't really imply any of those. DMA-buf
is just a kernel object with some backing memory. It's up to the
allocator to decide how the backing memory is allocated and up to the
importer whether it would be mapped into an IOMMU.
I just want to say it can't be allocated in the same place as those
DMA-bufs (graphics or compressed bitstream) are.
This could also be an answer to your first question: if we place this kind
of buffer in a plane for DMABUF (importing) in V4L2, the V4L2 core would
try to prepare it, which could map it into the IOMMU.

Usually, such metadata would be the values that should be written to the
hardware's registers; a 4 KiB page would hold 1024 32-bit register values.
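That is, something along these lines (illustrative only):

#include <linux/types.h>

#define EXAMPLE_METADATA_PAGE_SIZE      4096

/* 4096 bytes / 4 bytes per value = 1024 register values per page. */
struct example_metadata_page {
        u32 regs[EXAMPLE_METADATA_PAGE_SIZE / sizeof(u32)];
};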

Still, I have some problems with SHMEM:
1. I don't want userspace to modify the contents of the SHMEM allocated
by the kernel; is there a way to do that?
This is generally impossible without doing one of these two:
1) copying the contents to an internal buffer not accessible to
userspace, OR
2) making all of the buffer's mappings read-only

2) can actually be more costly than 1) (depending on the architecture,
data size, etc.), so we shouldn't just discard the option of a simple
copy_from_user() in the ioctl.
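
As a rough, untested sketch of option 2), a driver could refuse writable
mappings of the kernel-allocated buffer at mmap time (all names below are
made up):

#include <linux/errno.h>
#include <linux/fs.h>
#include <linux/mm.h>

struct example_buffer;                                          /* hypothetical */
int example_buffer_map_pages(struct example_buffer *buf,
                             struct vm_area_struct *vma);       /* hypothetical */

static int example_metadata_mmap(struct file *file,
                                 struct vm_area_struct *vma)
{
        struct example_buffer *buf = file->private_data;

        /* reject writable mappings outright */
        if (vma->vm_flags & VM_WRITE)
                return -EPERM;

        /* and keep a later mprotect(PROT_WRITE) from re-enabling writes */
        vm_flags_clear(vma, VM_MAYWRITE);

        return example_buffer_map_pages(buf, vma);
}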
I don't want userspace to access it at all, so that won't be a problem.
Hi,

if userspace cannot access things like an image's HDR metadata, then it
will be impossible for userspace to program KMS to have the correct
color pipeline, or to send intended HDR metadata to a video sink.

You cannot leave userspace out of HDR metadata handling, because quite
probably the V4L2 buffer is not the only thing on screen. That means
there must be composition of multiple sources with different image
properties and metadata, which means it is no longer obvious what HDR
metadata should be sent to the video sink.

Even if it is a TV-like application rather than a windowed desktop, you
will still have other content to composite: OSD (volume indicators,
channel indicators, program guide, ...), subtitles, channel logos,
notifications... These components ideally should not change their
appearance arbitrarily as the main program content and metadata change.
Either the metadata sent to the video sink is kept static and
the main program adapted on the fly, or main program metadata is sent
to the video sink and the additional content is adapted on the fly.

There is only one set of HDR metadata and one composited image that can
be sent to a video sink, so both must be chosen and produced correctly
at the source side. This cannot be done automatically inside KMS kernel
drivers.

There may be some misunderstanding.
Let's suppose this HDR data is in a vendor-specific format.
Both the upstream (decoder) and downstream (DRM) hardware devices come from the same vendor. Then we just need to deliver the reference to this metadata buffer from the upstream to the downstream; both drivers know how to handle it.

As for userspace, we just need to extend a Wayland protocol so that the Wayland compositor knows how to receive the reference to the metadata and set it on the DRM plane.

If you want a common HDR format for all HDR variants (HDR10+, DV), I am not against it. But it won't make userspace able to fill in the HDR metadata even when the HDR data comes from the bitstream (like SEI). We must consider the case of a Secure Video Path (digital rights): the bitstream is accessible neither from (REE) userspace nor from the Linux kernel, so the downstream must take what the upstream feeds it.

Thanks,
pq

2. Should I create a helper function for installing the SHMEM file as a fd?
We already have the udmabuf device [1] to turn a memfd into a DMA-buf,
so maybe that would be enough?

[1] https://elixir.bootlin.com/linux/v6.5-rc7/source/drivers/dma-buf/udmabuf.c
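
For reference, turning a memfd into a DMA-buf with udmabuf looks roughly
like this from userspace (error handling omitted; udmabuf requires the
memfd to be sealed against shrinking and the size to be page-aligned):

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/udmabuf.h>

static int memfd_to_dmabuf(size_t size)         /* size must be page-aligned */
{
        int memfd = memfd_create("metadata", MFD_ALLOW_SEALING);

        ftruncate(memfd, size);
        fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

        struct udmabuf_create create = {
                .memfd  = memfd,
                .flags  = UDMABUF_FLAGS_CLOEXEC,
                .offset = 0,
                .size   = size,
        };

        int devfd = open("/dev/udmabuf", O_RDWR);
        int dmabuf_fd = ioctl(devfd, UDMABUF_CREATE, &create);

        close(devfd);
        close(memfd);
        return dmabuf_fd;       /* a DMA-buf fd on success, -1 on error */
}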
It is the kernel driver that allocates this buffer. For example, V4L2
CAPTURE allocates a buffer for metadata at VIDIOC_REQBUFS time.
Or GBM gives you an fd which is associated with a surface.

So we need a kernel interface.
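Something along these lines is what I have in mind for the kernel side;
a rough, untested sketch with made-up names:

#include <linux/err.h>
#include <linux/fcntl.h>
#include <linux/file.h>
#include <linux/mm.h>
#include <linux/shmem_fs.h>

/* Allocate a shmem-backed file in the driver and hand userspace an fd for it. */
static int example_export_metadata_fd(size_t size)
{
        struct file *filp;
        int fd;

        filp = shmem_file_setup("v4l2-metadata", size, VM_NORESERVE);
        if (IS_ERR(filp))
                return PTR_ERR(filp);

        fd = get_unused_fd_flags(O_CLOEXEC);
        if (fd < 0) {
                fput(filp);
                return fd;
        }

        fd_install(fd, filp);
        return fd;      /* e.g. returned to userspace from the driver's ioctl */
}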
Best,
Tomasz
--
Hsia-Jun(Randy) Li


--
Hsia-Jun(Randy) Li



