Re: [PATCH drm 4/6] drm: add drm_gem_prime_page_flip() helper


On Thu, Oct 22, 2015 at 11:31 PM, Alexander Goins <agoins@xxxxxxxxxx> wrote:
>>This looks really strange - you force-complete the fence that's already attached (which might be from any driver attached to this dma-buf)?
>>The creator of the fence is supposed to complete it once whatever async operation is scheduled on this dma-buf is done. So the way to go about this here is to take the excl fence out of the reservation object and wait for it to complete.
>
> Yeah, true. I had just added that as a contingency - in reality this ioctl is never called (by my changes) on a buffer with an unsignaled fence. You're right that a wait would be better. I'll have it changed in the revised patch set if we end up keeping the ioctl.

It's not just that you should wait on the fence; there's no reason for
a page_flip to attach an exclusive fence at all - the display block at
most reads from the buffer, so we don't need to block anyone from
reading (e.g. to do a front->back copy before rendering the next
frame).
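
(For illustration, waiting on the exporter's exclusive fence - rather
than force-signalling it or attaching a new one from the flip - could
look roughly like the sketch below. This is a sketch only: the helper
function is made up, and the reservation API name/signature is from
memory of the ~4.3-era kernel, so double-check before relying on it.)

static int wait_for_implicit_fence(struct dma_buf *dma_buf)
{
	long ret;

	/* Block until the exporter's exclusive (implicit) fence signals;
	 * note we only wait here, we never attach an exclusive fence of
	 * our own for the scanout. */
	ret = reservation_object_wait_timeout_rcu(dma_buf->resv,
						  false, /* exclusive fence only */
						  true,  /* interruptible */
						  MAX_SCHEDULE_TIMEOUT);
	return ret < 0 ? ret : 0;
}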

>>But if your goal is to only fix up the page_flip path, and only for i915 as the display driver
>
> I'd be fine with the atomic modesetting path if i915 supported async atomic updates; really I just need a way to fence flips between two shared buffers. While i915 is the main target, we'd ideally like to support this with other devices/drivers as well.

Every other driver supporting dma-buf import of foreign objects should
just do the same. The only case you care about is probably tegra. Note
that the atomic helpers are already prepared for both implicit and
explicit fences: there's plane_state->fence, which they wait on to
complete. The idea is to fill this out in your plane->prepare_fb hook.
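
(Again only as a sketch of the idea: a prepare_fb hook could pick up
the implicit fence from an imported dma-buf along the lines below. The
exact callback signature changed across kernel versions, foo_fb_to_gem()
is a made-up stand-in for the driver-specific fb-to-GEM lookup, and
locking around the reservation object is elided.)

static int foo_plane_prepare_fb(struct drm_plane *plane,
				struct drm_plane_state *state)
{
	/* Driver-specific lookup of the GEM object backing state->fb. */
	struct drm_gem_object *obj = foo_fb_to_gem(state->fb);
	struct reservation_object *resv;

	if (!obj || !obj->import_attach)
		return 0; /* not an imported dma-buf, nothing to wait for */

	resv = obj->import_attach->dmabuf->resv;

	/* Stash the exporter's exclusive (implicit) fence; the atomic
	 * helpers wait for state->fence to signal before committing the
	 * plane update. */
	state->fence = fence_get(reservation_object_get_excl(resv));

	return 0;
}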

>>For your limited use-case it really would be simplest to fix up the i915 mmio flip code to handle fences attached to dma-bufs, and teach the nvidia blob to attach a suitable exclusive fence somehow.
>
> This sounds like a pretty good solution - as long as the semantics stay the same as to who attaches the fence, transitioning from this to an atomic modesetting solution could be relatively painless. I'd need a way to query if a driver supports fencing on page_flip and/or async atomic modesetting in order to get graceful fallback on X.

Feature flags for kernel bugs are kinda not something I like all that
much. If you _really_ want it, we'll just backport the minimal fix and
that's it. Otherwise I guess you'll have to carry a list of
drivers/kernels that work/don't in your X driver.

>>Wrt merge coordination: RSN (well, that's been the case for months) all the legacy pageflip code will be ripped out from i915 and replaced by async atomic updates. So except for a proof-of-concept I'd prefer if we just implement this in the atomic codepaths directly.
>
> What kind of timeline are we looking at for i915 supporting async atomic updates? Will the legacy pageflip code be ripped out at the same time, or will there be a transition period? With the ability to query for fenced page_flip / fenced async atomic modesetting, I can probably get X to use the best available method. I should be able to test a fenced async atomic update implementation using dGPU + Tegra PRIME prior to i915 supporting it, but I'd be interested to hear from those better versed in that path as to how they'd like it implemented.

Legacy flip code will be ripped out as soon as async atomic works, and
reimplemented on top of the atomic modeset code using the usual
helpers. But since it's a gigantic task I don't want to add more
features to the legacy flip code - that'll only make it slower to get
to the goal, at least if you don't also implement the atomic side. So
two patches:
1. for the mmio_flip code, which we could also maybe backport.
2. for the new atomic code, for future-proofing.

Both should be really minimal patches.

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel