On 11.09.2017 at 17:13, Maarten Lankhorst wrote:
On 11-09-17 at 16:45, Christian König wrote:
On 11.09.2017 at 15:56, Maarten Lankhorst wrote:
On 11-09-17 at 14:53, Christian König wrote:
On 10.09.2017 at 09:30, Maarten Lankhorst wrote:
[SNIP]
To be honest that looks rather ugly to me for not much gain.
In addition to that we lose the optimization I've stolen from the wait function.
Right now your version does exactly the same as reservation_object_get_fences_rcu,
but with a reservation_object_list instead of a fence array.
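For reference, that helper hands the fences back as a flat array, with a prototype roughly like this (quoted from memory, so the exact signature may differ):

        /* Collects the exclusive fence plus a snapshot of the shared fences. */
        int reservation_object_get_fences_rcu(struct reservation_object *obj,
                                              struct dma_fence **pfence_excl,
                                              unsigned *pshared_count,
                                              struct dma_fence ***pshared);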
Well then please take a closer look again:
for (i = 0; i < src_list->shared_count; ++i) {
        struct dma_fence *fence;

        fence = rcu_dereference(src_list->shared[i]);
        if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
                     &fence->flags))
                continue;

        if (!dma_fence_get_rcu(fence)) {
                kfree(dst_list);
                src_list = rcu_dereference(src->fence);
                goto retry;
        }

        if (dma_fence_is_signaled(fence)) {
                dma_fence_put(fence);
                continue;
        }

        dst_list->shared[dst_list->shared_count++] = fence;
}
We only take fences into the new reservation list when they aren't
already signaled.
This can't be added to reservation_object_get_fences_rcu() because that
would break VM handling on radeon and amdgpu.
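A caller that went through reservation_object_get_fences_rcu() instead would have to filter the signaled fences itself in a second pass, roughly like this (illustrative sketch only, not part of the patch):

        struct dma_fence *excl;
        struct dma_fence **shared;
        unsigned int i, count;
        int ret;

        /* Every shared fence comes back here, signaled or not. */
        ret = reservation_object_get_fences_rcu(src, &excl, &count, &shared);
        if (ret)
                return ret;

        /* Drop the fences that have already signaled. */
        for (i = 0; i < count; ) {
                if (dma_fence_is_signaled(shared[i])) {
                        dma_fence_put(shared[i]);
                        shared[i] = shared[--count];
                } else {
                        ++i;
                }
        }
        /* shared[] is allocated by the helper and still needs kfree() later. */

Doing it that way means an extra pass over a temporary array, which the inline check in the copy loop above avoids.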
Regards,
Christian.