On Fri, Sep 01, 2023 at 10:36:17AM +0200, Geert Uytterhoeven wrote:
> Hi Maxime,
>
> On Fri, Sep 1, 2023 at 10:22 AM Maxime Ripard <mripard@xxxxxxxxxx> wrote:
> > On Wed, Aug 30, 2023 at 08:25:08AM +0200, Javier Martinez Canillas wrote:
> > > The commit 45b58669e532 ("drm/ssd130x: Allocate buffer in the plane's
> > > .atomic_check() callback") moved the allocation of the intermediate and
> > > HW buffers from the encoder's .atomic_enable callback to the primary
> > > plane's .atomic_check callback.
> > >
> > > This was suggested by Maxime Ripard because drivers aren't allowed to
> > > fail after drm_atomic_helper_swap_state() has been called, and the
> > > encoder's .atomic_enable happens after the new atomic state has been
> > > swapped.
> > >
> > > But that change caused a performance regression on very slow platforms,
> > > since now the allocation happens on every plane's atomic state commit.
> > > For example, Geert Uytterhoeven reports that this is the case on a
> > > VexRiscv softcore (a RISC-V CPU implementation on an FPGA).
> >
> > I'd like to have numbers on that. It's a bit surprising to me that,
> > given how many objects we already allocate during a commit, two small
> > additional allocations affect performance so dramatically, even on a
> > slow platform.
>
> To be fair, I didn't benchmark that. Perhaps it's just too slow due to
> all these other allocations (and whatever else happens).
>
> I just find it extremely silly to allocate a buffer over and over again,
> while we know that buffer is needed for each and every display update.

Maybe it's silly, but I guess it depends on what you want to optimize
for. You won't know the size of that buffer before you're in
atomic_check. So it's a different trade-off than you would like, but I
wouldn't call it extremely silly.

Maxime