On Wed, Jun 01, 2022 at 01:28:36PM -0700, Matthew Brost wrote:
On Wed, Jun 01, 2022 at 05:25:49PM +0300, Lionel Landwerlin wrote:
On 17/05/2022 21:32, Niranjana Vishwanathapura wrote:
> +VM_BIND/UNBIND ioctl will immediately start binding/unbinding the mapping in an
> +async worker. The binding and unbinding will work like a special GPU engine.
> +The binding and unbinding operations are serialized and will wait on specified
> +input fences before the operation and will signal the output fences upon the
> +completion of the operation. Due to serialization, completion of an operation
> +will also indicate that all previous operations are also complete.
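For illustration, a hypothetical bind struct carrying these fences might look
like the sketch below; the layout and all names here are placeholders for this
discussion, not the final uapi:

   /* Hypothetical sketch only -- layout and names are placeholders,
    * not the final uapi. */
   struct drm_i915_gem_vm_bind {
           __u32 vm_id;      /* target address space (VM) */
           __u32 handle;     /* backing object */
           __u64 start;      /* GPU virtual address of the mapping */
           __u64 offset;     /* offset into the object */
           __u64 length;     /* size of the mapping */
           __u32 fence_in;   /* syncobj waited on before the bind starts */
           __u32 fence_out;  /* syncobj signaled when the bind completes */
   };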
I guess we should avoid saying "will immediately start binding/unbinding" if
there are fences involved.
And the fact that it's happening in an async worker seems to imply it's not
immediate.
Ok, will fix.
This wording was added because in the earlier design binding was deferred until
the next execbuff. But now it is non-deferred (immediate in that sense). But
yeah, it is confusing and I will fix it.
I have a question on the behavior of the bind operation when no input fence
is provided. Let's say I do:
VM_BIND (out_fence=fence1)
VM_BIND (out_fence=fence2)
VM_BIND (out_fence=fence3)
In what order are the fences going to be signaled?
In the order of VM_BIND ioctls? Or out of order?
Because you wrote "serialized", I assume it's in order.
Yes, in the order of VM_BIND/UNBIND ioctls. Note that bind and unbind will use
the same queue and hence are ordered.
One thing I didn't realize is that because we only get one "VM_BIND" engine,
there is a disconnect from the Vulkan specification.
In Vulkan, VM_BIND operations are serialized, but only per engine.
So you could have something like this:
VM_BIND (engine=rcs0, in_fence=fence1, out_fence=fence2)
VM_BIND (engine=ccs0, in_fence=fence3, out_fence=fence4)
fence1 is not signaled
fence3 is signaled
So the second VM_BIND will proceed before the first VM_BIND.
I guess we can deal with that scenario in userspace by doing the wait
ourselves in one thread per engine.
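For reference, that userspace workaround would look roughly like the sketch
below (one bind thread per engine; all the helper names are made up for
illustration):

   /* Hypothetical sketch: one userspace thread per engine does the
    * fence wait itself, then submits the bind without an in-fence.
    * bind_queue_pop(), sync_wait() and vm_bind() are made-up helpers. */
   static void *bind_thread(void *arg)
   {
           struct bind_queue *q = arg;   /* per-engine bind queue */

           for (;;) {
                   struct bind_op *op = bind_queue_pop(q);

                   /* Wait in userspace instead of passing an in-fence
                    * to the kernel... */
                   sync_wait(op->in_fence, -1);
                   /* ...so this bind is not held up behind binds for
                    * other engines. */
                   vm_bind(q->fd, op /* no in-fence */);
           }
           return NULL;
   }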
But then it makes the VM_BIND input fences useless.
Daniel: what do you think? Should we rework this or just deal with wait
fences in userspace?
My opinion is to rework this but make the ordering via an engine param optional.
e.g. A VM can be configured so all binds are ordered within the VM
e.g. A VM can be configured so all binds accept an engine argument (in
the case of the i915 this is likely a gem context handle) and binds are
ordered with respect to that engine.
This gives UMDs options, as the latter likely consumes more KMD resources,
so if a different UMD can live with binds being ordered within the VM,
it can use the mode consuming fewer resources.
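As a strawman, something like this (flag and field names invented for the
example, not a proposed final uapi):

   /* Mode 1: all binds ordered within the VM (fewer KMD resources). */
   vm_create.flags = I915_VM_CREATE_BIND_ORDERED;

   /* Mode 2: each bind carries an engine argument (for i915, likely a
    * gem context handle) and is ordered only against binds on that
    * same engine. */
   vm_create.flags = I915_VM_CREATE_BIND_PER_ENGINE;
   ...
   bind.ctx_id = ctx_handle;  /* binds with the same context are ordered */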
I think we need to be careful here if we are looking for vm_bind/unbind
completions out of (submission) order.
In-order completion means that, in a batch of binds and unbinds to be
completed in order, the user only needs to specify an in-fence for the
first bind/unbind call and an out-fence for the last bind/unbind
call. Also, the VA released by an unbind call can be re-used by
any subsequent bind call in that in-order batch.
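E.g., in the pseudo-call notation used above, an in-order batch that recycles
a VA range only needs fences at the endpoints:

   VM_UNBIND (in_fence=fence1)     /* releases a VA range */
   VM_BIND   ()                    /* may re-use that VA range */
   VM_BIND   (out_fence=fence2)    /* fence2 => the whole batch is done */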
These things will break if binding/unbinding were allowed to complete
out of (submission) order, and the user would need to be extra careful
not to run into premature triggering of the out-fence, binds failing
because the VA is still in use, etc.
Also, VM_BIND binds the provided mapping on the specified address space
(VM). So, the uapi is not engine/context specific.
We can however add a 'queue' to the uapi, which can be one of the
pre-defined queues:
I915_VM_BIND_QUEUE_0
I915_VM_BIND_QUEUE_1
...
I915_VM_BIND_QUEUE_(N-1)
The KMD will spawn an async work queue for each queue, which will bind
the mappings on that queue in the order of submission.
The user can assign a queue per engine or anything like that.
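E.g., something like the sketch below (field name and placement are just for
illustration, not a decided uapi):

   /* Sketch only: a queue selector on each bind/unbind. */
   struct drm_i915_gem_vm_bind {
           ...
           __u32 queue;  /* one of I915_VM_BIND_QUEUE_0..(N-1); binds on
                          * the same queue complete in submission order,
                          * different queues may complete out of order
                          * relative to each other */
   };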
But again here, the user needs to be careful not to deadlock these
queues with a circular dependency of fences.
I prefer adding this later as an extension, based on whether it
really helps with the implementation.
Daniel, any thoughts?
Niranjana
Matt
Sorry I noticed this late.
-Lionel