On Friday, November 10th, 2023 at 15:01, Maxime Ripard <mripard@xxxxxxxxxx> wrote:

> On Fri, Nov 10, 2023 at 11:21:15AM +0000, Simon Ser wrote:
>
> > On Thursday, November 9th, 2023 at 20:17, Maxime Ripard mripard@xxxxxxxxxx wrote:
> >
> > > > Can we add another function pointer to the struct drm_driver (or
> > > > similar) to do the allocation, and move the actual dmabuf handling
> > > > code into the core?
> > >
> > > Yeah, I agree here, it just seems easier to provide a global hook and a
> > > somewhat sane default to cover all drivers in one go (potentially with
> > > additional restrictions, like only for modeset-only drivers or
> > > something).
> >
> > First off not all drivers are using the GEM DMA helpers (e.g. vc4 with
> > !vc5 does not).
>
> Right. vc4 pre-RPi4 is the exception though, so it's kind of what I
> meant by providing sane defaults: the vast majority of drivers are using
> GEM DMA helpers, and we should totally let drivers override that if
> relevant.
>
> Basically, we already have 2.5 citizen classes, I'd really like to avoid
> having 3 officially, even more so if it's not super difficult to do.
>
> > The heap code in this patch only works with drivers leveraging GEM DMA
> > helpers.
>
> We could add a new hook to drm_driver to allocate heaps, link to a
> default implementation in DRM_GEM_DMA_DRIVER_OPS_WITH_DUMB_CREATE(), and
> then use that new hook in your new heap. It would already cover 40
> drivers at the moment, instead of just one, with all of them (but
> atmel-hlcdc maybe?) being in the same situation than RPi4-vc4 is.

As said in another e-mail, I really think the consequences of DMA heaps
need to be thought out on a per-driver basis. Moreover, this approach
makes core DRM go in the wrong direction, deeper into midlayer
territory.

> > Then maybe it's somewhat simple to cover typical display devices found
> > on split render/display SoCs, however for the rest it's going to be much
> > more complicated. For x86 typically there are multiple buffer placements
> > supported by the GPU and we need to have one heap per possible placement.
> > And then figuring out all of the rules, priority and compatibility stuff
> > is a whole other task in and of itself.
>
> But x86 typically doesn't have a split render/display setup, right?

So you're saying we should fix everything at once, but why is x86 not
part of "everything" then? x86 also has issues regarding buffer
placement.

You're saying you don't want to fragment the ecosystem, but it seems
like there would still be "fragmentation" between split render/display
SoCs and the rest? I'm having a hard time understanding your goals here.
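
For reference, here is roughly what I understand the proposal to look
like. This is only a sketch with hypothetical names: "heap_allocate" is
not an actual kernel interface, its signature just mirrors the existing
struct dma_heap_ops.allocate from <linux/dma-heap.h>, and the default is
a rough guess at how the GEM DMA helpers would be wired up:

/* Hypothetical new hook in struct drm_driver, not real kernel code. */
struct dma_buf *(*heap_allocate)(struct drm_device *dev,
                                 unsigned long len,
                                 u32 fd_flags, u64 heap_flags);

/*
 * Hypothetical default for GEM DMA drivers, which
 * DRM_GEM_DMA_DRIVER_OPS_WITH_DUMB_CREATE() would plug in
 * (reference handling elided for brevity).
 */
static struct dma_buf *
drm_gem_dma_heap_allocate(struct drm_device *dev, unsigned long len,
                          u32 fd_flags, u64 heap_flags)
{
        struct drm_gem_dma_object *dma_obj;

        /* Allocate a contiguous buffer via the GEM DMA helpers. */
        dma_obj = drm_gem_dma_create(dev, len);
        if (IS_ERR(dma_obj))
                return ERR_CAST(dma_obj);

        /* Wrap the GEM object in a dma-buf for the heap to hand out. */
        return drm_gem_prime_export(&dma_obj->base, fd_flags);
}

But even with a default like that covering the 40 GEM DMA drivers, my
point stands: whether a given placement is safe and useful to expose as
a heap is a per-driver question, and answering it in core DRM is what
pushes us deeper into midlayer territory.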