Re: [PATCH 0/3] drm/tegra: Add support for fence FDs

Quoting Thierry Reding (2018-01-12 15:14:38)
> On Fri, Jan 12, 2018 at 10:40:16AM +0000, Chris Wilson wrote:
> > Quoting Thierry Reding (2018-01-11 22:22:46)
> > > From: Thierry Reding <treding@xxxxxxxxxx>
> > > 
> > > This set of patches adds support for fences to Tegra DRM and complements
> > > the fence FD support for Nouveau. Technically this isn't necessary for a
> > > fence-based synchronization loop with Nouveau because the KMS core takes
> > > care of all that, but engines behind host1x can use the IOCTL extensions
> > > provided here to emit fence FDs that in turn can be used to synchronize
> > > their jobs with either the scanout engine or the GPU.
> > 
> > Whilst hooking up fences, I advise you to also hook up drm_syncobj.
> > Internally they each resolve to another fence, so the mechanics are
> > identical, you just need another array in the uABI for in/out syncobj.
> > The advantage of drm_syncobj is that userspace can track internal fences
> > using inexhaustible handles, reserving the precious fd for IPC or KMS.
> 
> I'm not sure that I properly understand how to use these. It looks as if
> they are better fence FDs, so in case where you submit internal work you
> would go with a drm_syncobj and when you need access to the fence from a
> different process or driver, you should use an FD.

Yes, simply put they are better fence fds.

> Doesn't this mean we can cover this by just adding a flag that marks the
> fence as being a handle or an FD? Do we have situations where we want an
> FD *and* a handle returned as result of the job submission?

Probably not, but if you don't need to force userspace to choose, they
will come up with a situation where it is useful. Though one thing to
consider with the drm_syncobj is that you will want to handle an array
of in/out fences, as userspace will pass in an array of VkSemaphore (or
whatever) rather than compute a singular dma_fence_array by merging.
 
> For the above it would suffice to add two additional flags:
> 
>         #define DRM_TEGRA_SUBMIT_WAIT_SYNCOBJ (1 << 2)
>         #define DRM_TEGRA_SUBMIT_EMIT_SYNCOBJ (1 << 3)
> 
> which would even allow both to be combined:
> 
>         DRM_TEGRA_SUBMIT_WAIT_SYNCOBJ | DRM_TEGRA_SUBMIT_EMIT_FENCE_FD
> 
> would allow the job to wait for an internal syncobj (defined by handle
> in the fence member) and return a fence (as FD in the fence member) to
> pass on to another process or driver as prefence.

That would be easy, if you are happy with the limitation of just a
single wait-fence.
-Chris
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel