Hi,
On 19/01/2025 13:29, Sui Jingfeng wrote:
Hi,
On 2025/1/16 18:35, Dmitry Baryshkov wrote:
On Thu, Jan 16, 2025 at 11:17:50AM +0100, Geert Uytterhoeven wrote:
On Thu, Jan 16, 2025 at 11:03 AM Tomi Valkeinen
<tomi.valkeinen@xxxxxxxxxxxxxxxx> wrote:
On 16/01/2025 10:09, Thomas Zimmermann wrote:
Am 15.01.25 um 15:20 schrieb Tomi Valkeinen:
[...]
My point is that we have the current UAPI, and we have userspace using
it, but we don't have clear rules about what the ioctl does with specific
parameters, and we don't document how it has to be used.
Perhaps the situation is bad, and all we can really say is that
CREATE_DUMB only works for use with simple RGB formats, and the
behavior for all other formats is platform specific. But I think even
that would be valuable in the UAPI docs.
To be honest, I would not want to specify behavior for anything but
the
linear RGB formats. If anything, I'd take Daniel's reply mail for
documentation as-is. Anyone stretching the UAPI beyond RGB is on
their own.
Thinking about this, I wonder if this change is good for omapdrm or
xilinx (probably other platforms too that support non-simple non-RGB
formats via dumb buffers): without this patch, in both drivers, the
pitch calculations just take the bpp as bits-per-pixel, align it up,
and that's it.
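Roughly like this, as a sketch (the alignment value below is just a
placeholder; each driver applies its own hardware-specific constraint):

	/* Sketch only: the bpp-only pitch math those drivers do today. */
	static u32 example_dumb_pitch(u32 width, u32 bpp)
	{
		u32 pitch = DIV_ROUND_UP(width * bpp, 8);	/* bytes per line */

		return ALIGN(pitch, 8);	/* placeholder; real drivers align per hardware */
	}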
With this patch we end up using drm_driver_color_mode_format(), and
aligning buffers according to RGB formats figured out via heuristics.
It does happen to work, for the formats I tested, but it sounds like
something that might easily not work, as it's doing adjustments based
on the wrong format.
Should we have another version of drm_mode_size_dumb() which just
calculates using the bpp, without the drm_driver_color_mode_format()
path? Or does the drm_driver_color_mode_format() path provide some
value for the drivers that do not currently do anything similar?
With the RGB-only rule, using drm_driver_color_mode_format() makes
sense. It aligns dumb buffers and video=, provides error checking, and
overall harmonizes code. The fallback is only required because of the
existing odd cases that already bend the UAPI's rules.
I have to disagree here.
On the platforms I have been using (omap, tidss, xilinx, rcar) the dumb
buffers are the only buffers you can get from the DRM driver. The dumb
buffers have been used to allocate linear and multiplanar YUV buffers
for a very long time on those platforms.
I tried to look around, but I did not find any mentions that
CREATE_DUMB
should only be used for RGB buffers. Is anyone outside the core
developers even aware of it?
If we don't use dumb buffers there, where do we get the buffers? Maybe
from a v4l2 device or from a gpu device, but often you don't have
those.
DMA_HEAP is there, of course.
Why can't there be a variant that takes a proper fourcc format
instead of
an imprecise bpp value?
Backwards compatibility. We can add an IOCTL for YUV / etc.
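As a very rough sketch of what such an addition could look like (the struct
name and fields below are purely hypothetical, not an actual proposal):

	/* Hypothetical sketch only, not existing UAPI. */
	struct drm_mode_create_dumb2 {
		__u32 height;
		__u32 width;
		__u32 fourcc;		/* DRM_FORMAT_* fourcc code */
		__u32 flags;
		__u64 modifier;		/* DRM_FORMAT_MOD_* */
		__u32 handle;		/* out: GEM handle covering all planes */
		__u32 pitches[4];	/* out: per-plane pitch */
		__u32 offsets[4];	/* out: per-plane offset */
		__u64 size;		/* out: total allocation size */
	};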
[...]
But userspace must be able to continue allocating YUV buffers through
CREATE_DUMB.
I think allocating YUV buffers through the CREATE_DUMB interface is just
an *abuse* and *misuse* of this API for now.
Take the NV12 format as an example: NV12 is a two-plane YUV 4:2:0 format,
with a Y plane and a UV plane. The Y plane appears first in memory as an
array of unsigned char values. The Y plane is followed immediately by the
UV plane, which is also an array of unsigned char values, containing
packed U (Cb) and V (Cr) samples.
But the 'drm_mode_create_dumb' structure is only intended to describe
*one* plane.
struct drm_mode_create_dumb {
	__u32 height;
	__u32 width;
	__u32 bpp;
	__u32 flags;
	/* handle, pitch and size are returned by the kernel */
	__u32 handle;
	__u32 pitch;
	__u64 size;
};
A width x height NV12 image takes up width * height * (1 + 1/4 + 1/4) bytes.
So we can allocate an *equivalently* sized buffer to store the NV12 raw data:
either as 'width' x 'height * 3/2' where each pixel takes up 8 bits,
or as 'width' x 'height' where each pixel takes up 12 bits.
However, all of this math is just an equivalent-size description of the
original NV12 format; neither variant is a correct physical description of it.
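For example, a 1920x1080 NV12 image is 1920 * 1080 * 3/2 = 3110400 bytes,
and (ignoring any driver-imposed pitch alignment) both conventions request
the same amount of memory:

	/* "8 bpp" convention: 1.5x the real height */
	struct drm_mode_create_dumb req_a = {
		.width = 1920, .height = 1620, .bpp = 8,	/* pitch 1920, size 3110400 */
	};

	/* "12 bpp" convention: real width x height */
	struct drm_mode_create_dumb req_b = {
		.width = 1920, .height = 1080, .bpp = 12,	/* pitch 2880, size 3110400 */
	};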
I don't see the problem. Allocating dumb buffers, if we don't have any
heuristics related to RGB behind it, is essentially just allocating a
specific amount of memory, defined by width, height and bits-per-pixel.
If I want to create an NV12 framebuffer, I allocate two dumb buffers,
one for Y and one for UV planes, and size them accordingly. And then
create the DRM framebuffer with those.
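Something roughly like this, as a sketch using libdrm (error handling and
any driver-specific pitch/size alignment are left out; the function name is
just for illustration):

	#include <stdint.h>
	#include <xf86drm.h>
	#include <xf86drmMode.h>
	#include <drm_fourcc.h>

	/* Sketch: build an NV12 framebuffer from two dumb buffers. */
	static uint32_t create_nv12_fb(int fd, uint32_t width, uint32_t height)
	{
		/* Y plane: one byte per pixel */
		struct drm_mode_create_dumb y = { .width = width, .height = height, .bpp = 8 };
		/* UV plane: width/2 x height/2 samples, two bytes (Cb+Cr) each */
		struct drm_mode_create_dumb uv = { .width = width / 2, .height = height / 2, .bpp = 16 };
		uint32_t handles[4] = { 0 }, pitches[4] = { 0 }, offsets[4] = { 0 };
		uint32_t fb_id = 0;

		drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &y);
		drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &uv);

		handles[0] = y.handle;  pitches[0] = y.pitch;
		handles[1] = uv.handle; pitches[1] = uv.pitch;

		drmModeAddFB2(fd, width, height, DRM_FORMAT_NV12,
			      handles, pitches, offsets, &fb_id, 0);
		return fb_id;
	}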
Therefore, allocating YUV buffers through the dumb interface is just an
abuse of that API. We can certainly abuse it further by allocating two dumb
buffers, one for the Y plane and another one for the UV plane. But again,
dumb buffers can be (and must be) usable for *scanout* directly. What would
happen if I committed the YUV buffers you allocated to the CRTC directly?
You'll see it on the screen? I don't understand your point here...
In other words, you can allocate buffers via the dumb APIs to store anything,
but the key point is how we interpret them.
As Daniel puts it, the semantics of that API are well defined for simple RGB
formats. Using dumb buffers for anything other than linear RGB is considered
undefined behavior.
People can still abuse it in userspace, but the kernel doesn't have to
guarantee that userspace *must* be able to keep doing that. That's it.
I have a hard time understanding the "abuse" argument. But in any case,
the API has been working like this for who knows how long, and is used
widely (afaik). The question is whether we break it or not. Granted, this
series doesn't break it as such, but it adds heuristics that weren't
there before, and they could affect the behavior. If we still want to do
that, I want to understand what the benefit is, because there's a
potential to cause regressions.
Tomi