Re: [rfc repost] drm sync objects - a new beginning (make ickle happier?)

On 19 April 2017 at 22:07, Christian König <deathsimple@xxxxxxxxxxx> wrote:
> Am 13.04.2017 um 03:41 schrieb Dave Airlie:
>>
>> Okay I've taken Chris's suggestions to heart and reworked things
>> around a sem_file to see how they might look.
>>
>> This means the drm_syncobj are currently only useful for semaphores,
>> the flags field could be used in future to use it for other things,
>> and we can reintroduce some of the API then if needed.
>>
>> This refactors sync_file first to add some basic RCU wrappers
>> around the fence pointer; as this pointer never updates, this should
>> all be fine unlocked.
>>
>> It then creates the sem_file with a mutex, and uses that to
>> track the semaphores with reduced fops and the replace and
>> get APIs.
>>
>> Then it reworks the drm stuff on top, and fixes the amdgpu bug
>> with old_fence.
>>
>> Let's see if anyone prefers one approach over the other.
>
>
> Yeah, I clearly prefer keeping only one object type for synchronization in
> the kernel.
>
> As I wrote in the other mail, the argument for using the sync file for
> semaphores was to be able to use it as an in-fence with atomic mode setting
> as well.
>
> That a wait consumes a previous signal should be a specific behavior of the
> operation and not the property of the object.
>
> In other words I'm fine with using the sync_file in a 1:1 fashion with
> Vulkan, but for the atomic API we probably want 1:N to be able to flip a
> rendering result on multiple CRTCs at the same time.

Well, ideally atomic modesetting should be moved to using sync objects
as an option.

I'd rather sync_files were limited in scope to interaction with non-drm
drivers, and possibly interprocess operations; consuming fds is bad, and
merging doesn't really fix that.

I'm starting to narrow down what I think the sync_obj needs to do, and
I'm contemplating something like the following:

a) No file backing; a simple kref'd object that gets tracked in an idr
(like a GEM object). This object can have an optional fence attached to
it: if there is no fence it's unsignalled, if there is a fence it's
signalled. We should provide an interface to wait on multiple of these
objects that can support the Vulkan fence waiting API, and we should use
these objects in command submission interfaces to provide the Vulkan
fence and semaphore APIs. These objects will never be grouped or have
multiple fences in them. From a kernel API we can have multiple waiters
and shouldn't enforce the Vulkan semaphore 1:1 at this level; userspace
can just create its own semantics on top.
Open:
Are these objects created in advance and then filled in by command
submission ioctls, or do we want the option for a command submission
ioctl to return a newly created one?

b) Sharing of sync objects.
By default we provide a sync_obj->fd conversion; this fd is opaque and
can be passed between processes to implement things like Vulkan
semaphore passing.

c) Interoperation with sync_file.
We should provide a mechanism to move a group of sync objects into a
sync_file so it can be passed into sync_file APIs, and a mechanism to
map a sync_file into a group of sync objects. This will be a possible
1:n conversion.
Open:
Does the sync_file->syncobj operation create sync objects, or do we need
to pass in a preallocated group?
Do the sync_file->syncobj or syncobj->sync_file operations have any side
effects? I'm tending towards not (i.e. no signalling or anything else).

Dave.
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel