Re: [PATCH v2 0/5] drm: Reuse temporary memory for format conversion

On Wed, Sep 20, 2023 at 04:24:26PM +0200, Thomas Zimmermann wrote:
> DRM's format-conversion helpers require temporary memory. Pass the
> buffer from the caller and keep it allocated over several calls. Allow
> the caller to preallocate the buffer memory.
> 
> The motivation for this patchset is the recent work on a DRM panic
> handler. The panic handler requires format conversion to display an
> error to the screen. But allocating memory during kernel panics is
> fragile. The changes in this patchset enable the DRM panic handler to
> preallocate buffer storage before the panic occurs.
> 
> As an additional benefit, drivers can now keep the temporary storage
> across multiple display updates. Avoiding memory allocation reduces
> the CPU overhead of the format helpers.
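For reference, the shape of the change is roughly the following (an
illustrative sketch only; the struct layout and helper arguments here
are assumptions, not the literal patch):

    /* Sketch: caller-owned scratch memory for the conversion helpers.
     * All details below are illustrative.
     */
    struct drm_format_conv_state {
            void *buf;      /* temporary conversion buffer */
            size_t bufsize; /* current size of @buf */
    };

    /* Before: each helper kmalloc()s and kfree()s its own scratch
     * buffer on every call. After: the caller passes the state in and
     * may preallocate it, e.g. once before a panic can occur.
     */
    void drm_fb_xrgb8888_to_rgb565(struct iosys_map *dst,
                                   const unsigned int *dst_pitch,
                                   const struct iosys_map *src,
                                   const struct drm_framebuffer *fb,
                                   const struct drm_rect *clip,
                                   struct drm_format_conv_state *state,
                                   bool swab);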

The CPU-overhead argument is getting a bit tiring, mainly because it
isn't really an argument:

  - we already allocate something on the order of 10-20 objects for a
    given commit, so one more small allocation is insignificant.

  - If it were, it would be trivial to produce a benchmark, but no one
    has ever actually shown a workload and numbers where it makes any
    measurable difference.

  - Also, the CPU overhead is indeed (even if marginally) decreased,
    but the memory overhead is increased. I don't know whether that's
    a good trade-off or not; see the point above.

It really sounds like an empty statement to me: "But just think of the
CPU!".

That being said, I also have more fundamental doubts about this series.

The first is that storing the buffer pointer in the device instead of
in the state makes the code harder to reason about. When the buffer
lives in a state, the framework guarantees at commit time that only
one state is in flight at any given time, and since the buffer is
stored in that state object, you can't access it by mistake. The
buffer size also depends on the state, so it all makes sense from a
logical PoV.
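To make that concrete, the state-based variant could look like this
(a sketch; everything prefixed foo_ is a made-up driver name):

    /* Sketch: the scratch buffer lives in the plane state, so its
     * lifetime and size are tied to exactly one commit.
     */
    struct foo_plane_state {
            struct drm_plane_state base;
            void *conv_buf;         /* freed in .atomic_destroy_state */
            size_t conv_bufsize;
    };

    static int foo_plane_atomic_check(struct drm_plane *plane,
                                      struct drm_atomic_state *state)
    {
            struct drm_plane_state *new_state =
                    drm_atomic_get_new_plane_state(state, plane);
            struct foo_plane_state *fps =
                    container_of(new_state, struct foo_plane_state, base);

            if (!new_state->fb)
                    return 0;

            /* The size follows from this state's framebuffer, and the
             * commit machinery guarantees a single user at a time.
             */
            fps->conv_bufsize = (size_t)new_state->fb->pitches[0] *
                                new_state->fb->height;
            fps->conv_buf = kmalloc(fps->conv_bufsize, GFP_KERNEL);
            if (!fps->conv_buf)
                    return -ENOMEM;

            return 0;
    }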

If we store the buffer in the device, then suddenly you have to think
about whether there are multiple CRTCs (since commits aren't
serialised if they affect different CRTCs), whether the buffer you
allocated is still large enough for your current resolution and
format, etc. It adds a decent chunk of complexity to something that
was quite complex already.

I understand that the motivation is for drm_panic to have a buffer
ready when it has to kick in. But why don't we just allocate (and
check) a spare commit at probe time, so that all we have to do when
we panic is commit it?

That would fit nicely into the rest of the atomic modesetting code,
and since we pretty much require not to allocate anything during
atomic_commit, we already have those constraints figured out.
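Something along the lines of what drm_atomic_helper_suspend() and
drm_atomic_helper_resume() already do; roughly (a sketch only, with
the panic-time locking and context questions deliberately glossed
over, and to_foo_device()/spare_state made up):

    static int foo_prepare_spare_commit(struct drm_device *dev)
    {
            struct drm_modeset_acquire_ctx ctx;
            struct drm_atomic_state *spare;
            int ret;

            /* Duplicate the current state at probe time, so that all
             * allocations happen here and not at panic time.
             */
            DRM_MODESET_LOCK_ALL_BEGIN(dev, ctx, 0, ret);
            spare = drm_atomic_helper_duplicate_state(dev, &ctx);
            if (IS_ERR(spare))
                    ret = PTR_ERR(spare);
            DRM_MODESET_LOCK_ALL_END(dev, ctx, ret);
            if (ret)
                    return ret;

            /* Stash it; the panic handler would then commit it the
             * way drm_atomic_helper_resume() commits a duplicated
             * state, instead of building a new one.
             */
            to_foo_device(dev)->spare_state = spare;
            return 0;
    }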

Maxime
