On 15/07/2019 14:30, Koenig, Christian wrote:
Hi Lionel,
sorry for the delayed response, I'm just back from vacation.
On 03.07.19 at 11:17, Lionel Landwerlin wrote:
On 03/07/2019 11:56, Chris Wilson wrote:
Quoting Lionel Landwerlin (2019-07-01 12:34:33)
+		syncobj = drm_syncobj_find(eb->file, user_fence.handle);
+		if (!syncobj) {
+			DRM_DEBUG("Invalid syncobj handle provided\n");
+			err = -EINVAL;
+			goto err;
+		}
+
+		if (user_fence.flags & I915_EXEC_FENCE_WAIT) {
+			fence = drm_syncobj_fence_get(syncobj);
+			if (!fence) {
+				DRM_DEBUG("Syncobj handle has no fence\n");
+				drm_syncobj_put(syncobj);
+				err = -EINVAL;
+				goto err;
+			}
+
+			err = dma_fence_chain_find_seqno(&fence, point);
I'm very dubious about chain_find_seqno().
It returns -EINVAL if the point is older than the first in the chain --
it is in an unknown state, but may be signaled since we remove signaled
links from the chain. If we are waiting for an already signaled syncpt,
we should not be erring out!
You're right, I got this wrong.
We can get fence = NULL if the point is already signaled.
The easiest would be to skip it from the list, or add the stub fence.
I guess the CTS got lucky that it always got the point needed before
it was garbage collected...
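Something like this, maybe (untested sketch, just to show the fallback to the stub fence):

	err = dma_fence_chain_find_seqno(&fence, point);
	if (err) {
		DRM_DEBUG("Syncobj handle missing requested point %llu\n", point);
		drm_syncobj_put(syncobj);
		goto err;
	}

	/*
	 * chain_find_seqno() returning a NULL fence means the point is
	 * already signaled; use the signaled stub fence instead of
	 * erroring out.
	 */
	if (!fence)
		fence = dma_fence_get_stub();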
The topmost point is never garbage collected. So IIRC the check is
actually correct and you should never get NULL here.
Do we allow later requests to insert an earlier syncpt into the chain? If
so, then the request we wait on here may be woefully inaccurate and
quite easily lead to cycles in the fence tree. We have no way of
resolving such deadlocks -- we would have to treat this fence as a
foreign fence and install a backup timer. Alternatively, we only allow
this to return the exact fence for a syncpt, and proxies for the rest.
Adding points < latest added point is forbidden.
I wish we enforced it a bit more than what's currently done in
drm_syncobj_add_point().
In my view we should:
- lock the syncobj in get_timeline_fence_array() and do the sanity check there,
- keep the lock until we add the point to the timeline,
- unlock once added.
That way we would ensure that the application cannot generate invalid
timelines, and error out if it tries (rough sketch below).
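Roughly something like this on the i915 side (sketch only; drm_syncobj_lock()/unlock() and reading the last added point from the top of the chain are assumptions):

	struct dma_fence *prev;

	/* in get_timeline_fence_array(), for each point to be signaled */
	drm_syncobj_lock(syncobj);			/* hypothetical helper */

	prev = drm_syncobj_fence_get(syncobj);
	if (prev && point <= prev->seqno) {
		/* point does not advance the timeline -> reject */
		dma_fence_put(prev);
		drm_syncobj_unlock(syncobj);		/* hypothetical helper */
		err = -EINVAL;
		goto err;
	}
	dma_fence_put(prev);

	/* keep the syncobj locked until drm_syncobj_add_point() has run */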
We could do the same for host signaling in
drm_syncobj_timeline_signal_ioctl()/drm_syncobj_transfer_to_timeline()
(there the lock would be held for a much shorter time).
That requires holding the lock for longer than other drivers might prefer.
Ccing Christian, who can tell whether that's out of the question for AMD.
Yeah, adding the lock was the only other option I could see as well, but
we intentionally decided against that.
Since we have multiple out sync objects we would need to use a ww_mutex
as lock here.
That in turn would result in another rather complicated dance for
deadlock avoidance, something each driver would have to implement
correctly.
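Just to illustrate, the dance each driver would need looks roughly like this (sketch only, following the usual reservation locking pattern; syncobj->ww_lock and syncobj_ww_class are hypothetical):

static int lock_all_syncobjs(struct drm_syncobj **objs, int count,
			     struct ww_acquire_ctx *ctx)
{
	int contended = -1;
	int i, ret;

	ww_acquire_init(ctx, &syncobj_ww_class);	/* hypothetical class */

retry:
	if (contended != -1) {
		/* slow path: wait for the lock we previously lost on */
		ret = ww_mutex_lock_slow_interruptible(&objs[contended]->ww_lock,
						       ctx);
		if (ret) {
			ww_acquire_fini(ctx);
			return ret;
		}
	}

	for (i = 0; i < count; i++) {
		if (i == contended)
			continue;

		ret = ww_mutex_lock_interruptible(&objs[i]->ww_lock, ctx);
		if (ret) {
			int j;

			/* back off: drop everything we hold */
			for (j = 0; j < i; j++)
				ww_mutex_unlock(&objs[j]->ww_lock);
			if (contended != -1 && contended >= i)
				ww_mutex_unlock(&objs[contended]->ww_lock);

			if (ret == -EDEADLK) {
				contended = i;
				goto retry;
			}

			ww_acquire_fini(ctx);
			return ret;
		}
	}

	ww_acquire_done(ctx);
	return 0;
}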
That doesn't sound like a good idea to me just to improve error checking.
As long as it is only within the same process, userspace could check that
as well before doing the submission.
Thanks Christian,
Would you be opposed to exposing a _locked() version of
drm_syncobj_add_point() and having a static inline do the locking?
I don't think it would make a difference for your driver, and we could
add the checking with the proxy fence Chris suggested on our side.
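Something like this (sketch; drm_syncobj_lock()/unlock() are hypothetical helpers):

void drm_syncobj_add_point_locked(struct drm_syncobj *syncobj,
				  struct dma_fence_chain *chain,
				  struct dma_fence *fence,
				  uint64_t point);

/* The existing entry point becomes a wrapper, so other drivers are unchanged. */
static inline void drm_syncobj_add_point(struct drm_syncobj *syncobj,
					 struct dma_fence_chain *chain,
					 struct dma_fence *fence,
					 uint64_t point)
{
	drm_syncobj_lock(syncobj);	/* hypothetical helper */
	drm_syncobj_add_point_locked(syncobj, chain, fence, point);
	drm_syncobj_unlock(syncobj);	/* hypothetical helper */
}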
We could also add checks in drm_syncobj_timeline_signal_ioctl().
-Lionel
Regards,
Christian.
Cheers,
-Lionel
+			if (err || !fence) {
+				DRM_DEBUG("Syncobj handle missing requested point\n");
+				drm_syncobj_put(syncobj);
+				err = err != 0 ? err : -EINVAL;
+				goto err;
+			}
+		}
+
+		/*
+		 * For timeline syncobjs we need to preallocate chains for
+		 * later signaling.
+		 */
+		if (point != 0 && user_fence.flags & I915_EXEC_FENCE_SIGNAL) {
+			fences[n].chain_fence =
+				kmalloc(sizeof(*fences[n].chain_fence),
+					GFP_KERNEL);
+			if (!fences[n].chain_fence) {
+				dma_fence_put(fence);
+				drm_syncobj_put(syncobj);
+				err = -ENOMEM;
+				DRM_DEBUG("Unable to alloc chain_fence\n");
+				goto err;
+			}
What happens if we later try to insert two fences for the same syncpt?
Should we not reserve the slot in the chain to reject duplicates?
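E.g. a scan over the entries parsed so far (rough sketch; the syncobj/value/flags field names are guesses):

	unsigned int m;

	/* Reject the same syncobj/point being signaled twice in one execbuf. */
	for (m = 0; m < n; m++) {
		if (fences[m].syncobj == syncobj &&
		    fences[m].value == point &&
		    (fences[m].flags & I915_EXEC_FENCE_SIGNAL)) {
			DRM_DEBUG("Duplicate signal point on syncobj\n");
			dma_fence_put(fence);
			drm_syncobj_put(syncobj);
			err = -EINVAL;
			goto err;
		}
	}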
-Chris