On Thu, Jun 02, 2022 at 09:22:46AM -0700, Matthew Brost wrote:
> On Thu, Jun 02, 2022 at 08:42:13AM +0300, Lionel Landwerlin wrote:
> > On 02/06/2022 00:18, Matthew Brost wrote:
> On Wed, Jun 01, 2022 at 05:25:49PM +0300, Lionel Landwerlin wrote:
> > On 17/05/2022 21:32, Niranjana Vishwanathapura wrote:
> > > +VM_BIND/UNBIND ioctl will immediately start binding/unbinding the mapping in an
> > > +async worker. The binding and unbinding will work like a special GPU engine.
> > > +The binding and unbinding operations are serialized and will wait on specified
> > > +input fences before the operation and will signal the output fences upon the
> > > +completion of the operation. Due to serialization, completion of an operation
> > > +will also indicate that all previous operations are complete.
> > I guess we should avoid saying "will immediately start binding/unbinding" if
> > there are fences involved.
> >
> > And the fact that it's happening in an async worker seems to imply it's not
> > immediate.
> >
> >
> > I have a question on the behavior of the bind operation when no input fence
> > is provided. Let's say I do:
> >
> > VM_BIND (out_fence=fence1)
> >
> > VM_BIND (out_fence=fence2)
> >
> > VM_BIND (out_fence=fence3)
> >
> >
> > In what order are the fences going to be signaled?
> >
> > In the order of VM_BIND ioctls? Or out of order?
> >
> > Because you wrote "serialized", I assume it's: in order.
> >
> >
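If the answer is indeed "in order", as the serialization language in the
document implies, then waiting on the last out-fence is enough to know all
earlier binds have completed. A minimal sketch, assuming purely for
illustration that each VM_BIND hands back its out-fence as a sync_file fd
(fence1_fd/fence2_fd/fence3_fd stand for fence1/2/3 above; the exact uAPI
shape is not settled here):

#include <errno.h>
#include <poll.h>

static void wait_sync_file(int fence_fd)
{
	/* A sync_file fd polls readable once its dma-fence has signaled. */
	struct pollfd pfd = { .fd = fence_fd, .events = POLLIN };

	while (poll(&pfd, 1, -1) < 0 && errno == EINTR)
		;
}

static void wait_for_last_bind(int fence3_fd)
{
	/* With in-order signaling this also covers fence1_fd and fence2_fd. */
	wait_sync_file(fence3_fd);
}

Out-of-order signaling would instead require waiting on each fd individually.
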
> > One thing I didn't realize is that because we only get one "VM_BIND" engine,
> > there is a disconnect from the Vulkan specification.
> >
> > In Vulkan VM_BIND operations are serialized but per engine.
> >
> > So you could have something like this :
> >
> > VM_BIND (engine=rcs0, in_fence=fence1, out_fence=fence2)
> >
> > VM_BIND (engine=ccs0, in_fence=fence3, out_fence=fence4)
> >
> Question - let's say this is done after the above operations:
>
> EXEC (engine=ccs0, in_fence=NULL, out_fence=NULL)
>
> Is the exec ordered with respect to the bind (i.e. would fence3 & 4 be
> signaled before the exec starts)?
>
> Matt
> > Hi Matt,
> >
> > From the vulkan point of view, everything is serialized within an engine (we
> > map that to a VkQueue).
> >
> > So with:
> >
> > EXEC (engine=ccs0, in_fence=NULL, out_fence=NULL)
> > VM_BIND (engine=ccs0, in_fence=fence3, out_fence=fence4)
> >
> > EXEC completes first, then VM_BIND executes.
> >
> > To be even clearer:
> >
> > EXEC (engine=ccs0, in_fence=fence2, out_fence=NULL)
> > VM_BIND (engine=ccs0, in_fence=fence3, out_fence=fence4)
> >
> > EXEC will wait until fence2 is signaled.
> > Once fence2 is signaled, EXEC proceeds and finishes; only after it is done
> > does VM_BIND execute.
> >
> > It would be kind of like having the VM_BIND operation be another batch
> > executed from the ring buffer.
> Yea this makes sense. I think of VM_BINDs as more or less just another
> version of an EXEC and this fits with that.
Note that VM_BIND itself can bind while an EXEC (GPU job) is running
(say, getting binds ready for the next submission). It is up to the user,
though, how to use it.
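A rough sketch of that overlap, with made-up helpers (exec_async() and
vm_bind_async() are not part of the proposed uAPI; assume each issues the
ioctl and returns the out-fence as a sync_file fd, with -1 meaning "no
in-fence"):

/* Hypothetical wrappers, for illustration only. */
int exec_async(int engine, int batch, int in_fence);
int vm_bind_async(int vm, int bo, int in_fence);

static void pipeline_next_submission(int ccs0, int vm,
				     int batch_n, int batch_n1, int bo_n1)
{
	/* Batch N starts running on the engine. */
	exec_async(ccs0, batch_n, -1);

	/* Bind what batch N+1 needs; this overlaps with batch N's execution. */
	int bind_done = vm_bind_async(vm, bo_n1, -1);

	/* Batch N+1 waits only for its mappings, not the other way around. */
	exec_async(ccs0, batch_n1, bind_done);
}
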
> In practice I don't think we can share a ring, but we should be able to
> present an engine (again, likely a gem context in i915) to the user that
> orders VM_BINDs / EXECs, if that is what Vulkan expects. At least I think so.
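And if the kernel ended up not providing that ordering, userspace could in
principle build it by chaining fences explicitly. A sketch using the same
hypothetical wrappers as above, with sync_merge() standing in for the
sync_file SYNC_IOC_MERGE ioctl:

int sync_merge(int fd_a, int fd_b);	/* wraps SYNC_IOC_MERGE */

/* Lionel's second example, but with the EXEC -> VM_BIND ordering expressed
 * through fences instead of relying on the engine/queue. */
static void emulate_queue_ordering(int ccs0, int vm, int batch, int bo,
				   int fence2, int fence3)
{
	/* EXEC waits for fence2 and exports its own completion fence. */
	int exec_done = exec_async(ccs0, batch, fence2);

	/* The bind must wait on both its own in-fence and the exec. */
	int bind_in = sync_merge(fence3, exec_done);

	vm_bind_async(vm, bo, bind_in);	/* its out-fence plays the role of fence4 */
}
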
I have responded in the other thread on this.
Niranjana
> Hopefully Niranjana + Daniel agree.
>
> Matt
> > -Lionel
>
> > fence1 is not signaled
> >
> > fence3 is signaled
> >
> > So the second VM_BIND will proceed before the first VM_BIND.
> >
> >
> > I guess we can deal with that scenario in userspace by doing the wait
> > ourselves in one thread per engine.
> >
> > But then it makes the VM_BIND input fences useless.
> >
> >
> > Daniel: what do you think? Should we rework this or just deal with wait
> > fences in userspace?
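If the userspace route were taken, it would presumably look like one bind
thread per engine/queue, each doing the fence wait itself before issuing the
bind, which is exactly why the kernel-side in-fence would become redundant.
A sketch only; struct bind_queue/bind_op, bind_queue_pop() and vm_bind_async()
are all made up, and wait_sync_file() is the poll() helper from the earlier
sketch:

#include <pthread.h>
#include <stddef.h>

struct bind_op {
	int bo;
	int wait_fd;	/* fence this bind would have passed as in_fence */
};

struct bind_queue {
	int vm;
	/* plus some thread-safe list of struct bind_op, omitted here */
};

struct bind_op *bind_queue_pop(struct bind_queue *q);	/* hypothetical */
void wait_sync_file(int fence_fd);			/* earlier sketch */
int vm_bind_async(int vm, int bo, int in_fence);	/* hypothetical */

/* One of these threads per VkQueue / engine. */
static void *bind_thread(void *arg)
{
	struct bind_queue *q = arg;
	struct bind_op *op;

	while ((op = bind_queue_pop(q)) != NULL) {
		wait_sync_file(op->wait_fd);		/* wait in userspace... */
		vm_bind_async(q->vm, op->bo, -1);	/* ...so no in-fence */
	}
	return NULL;
}
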
> >
> >
> > Sorry I noticed this late.
> >
> >
> > -Lionel
> >
> >