Re: [RFC PATCH 02/29] dma-fence: Add dma_fence_user_fence

On Thu, Nov 21, 2024 at 10:31:16AM +0100, Christian König wrote:
> Am 20.11.24 um 23:50 schrieb Matthew Brost:
> > On Wed, Nov 20, 2024 at 02:38:49PM +0100, Christian König wrote:
> > > Am 19.11.24 um 00:37 schrieb Matthew Brost:
> > > > Normalize user fence attachment to a DMA fence. A user fence is a simple
> > > > seqno write to memory, implemented by attaching a DMA fence callback
> > > > that writes out the seqno. The intended use case is importing a dma-fence
> > > > into the kernel and exporting a user fence.
> > > > 
> > > > Helpers are added to allocate, attach, and free a dma_fence_user_fence.
> > > > 
> > > > Cc: Dave Airlie <airlied@xxxxxxxxxx>
> > > > Cc: Simona Vetter <simona.vetter@xxxxxxxx>
> > > > Cc: Christian Koenig <christian.koenig@xxxxxxx>
> > > > Signed-off-by: Matthew Brost <matthew.brost@xxxxxxxxx>
> > > > ---
> > > >    drivers/dma-buf/Makefile               |  2 +-
> > > >    drivers/dma-buf/dma-fence-user-fence.c | 73 ++++++++++++++++++++++++++
> > > >    include/linux/dma-fence-user-fence.h   | 31 +++++++++++
> > > >    3 files changed, 105 insertions(+), 1 deletion(-)
> > > >    create mode 100644 drivers/dma-buf/dma-fence-user-fence.c
> > > >    create mode 100644 include/linux/dma-fence-user-fence.h
> > > > 
> > > > diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
> > > > index c25500bb38b5..ba9ba339319e 100644
> > > > --- a/drivers/dma-buf/Makefile
> > > > +++ b/drivers/dma-buf/Makefile
> > > > @@ -1,6 +1,6 @@
> > > >    # SPDX-License-Identifier: GPL-2.0-only
> > > >    obj-y := dma-buf.o dma-fence.o dma-fence-array.o dma-fence-chain.o \
> > > > -	 dma-fence-preempt.o dma-fence-unwrap.o dma-resv.o
> > > > +	 dma-fence-preempt.o dma-fence-unwrap.o dma-fence-user-fence.o dma-resv.o
> > > >    obj-$(CONFIG_DMABUF_HEAPS)	+= dma-heap.o
> > > >    obj-$(CONFIG_DMABUF_HEAPS)	+= heaps/
> > > >    obj-$(CONFIG_SYNC_FILE)		+= sync_file.o
> > > > diff --git a/drivers/dma-buf/dma-fence-user-fence.c b/drivers/dma-buf/dma-fence-user-fence.c
> > > > new file mode 100644
> > > > index 000000000000..5a4b289bacb8
> > > > --- /dev/null
> > > > +++ b/drivers/dma-buf/dma-fence-user-fence.c
> > > > @@ -0,0 +1,73 @@
> > > > +// SPDX-License-Identifier: MIT
> > > > +/*
> > > > + * Copyright © 2024 Intel Corporation
> > > > + */
> > > > +
> > > > +#include <linux/dma-fence-user-fence.h>
> > > > +#include <linux/slab.h>
> > > > +
> > > > +static void user_fence_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
> > > > +{
> > > > +	struct dma_fence_user_fence *user_fence =
> > > > +		container_of(cb, struct dma_fence_user_fence, cb);
> > > > +
> > > > +	if (user_fence->map.is_iomem)
> > > > +		writeq(user_fence->seqno, user_fence->map.vaddr_iomem);
> > > > +	else
> > > > +		*(u64 *)user_fence->map.vaddr = user_fence->seqno;
> > > > +
> > > > +	dma_fence_user_fence_free(user_fence);
> > > > +}
> > > > +
> > > > +/**
> > > > + * dma_fence_user_fence_alloc() - Allocate user fence
> > > > + *
> > > > + * Return: Allocated struct dma_fence_user_fence on success, NULL on failure
> > > > + */
> > > > +struct dma_fence_user_fence *dma_fence_user_fence_alloc(void)
> > > > +{
> > > > +	return kmalloc(sizeof(struct dma_fence_user_fence), GFP_KERNEL);
> > > > +}
> > > > +EXPORT_SYMBOL(dma_fence_user_fence_alloc);
> > > > +
> > > > +/**
> > > > + * dma_fence_user_fence_free() - Free user fence
> > > > + *
> > > > + * Free user fence. Should only be called on a user fence if
> > > > + * dma_fence_user_fence_attach() has not been called, to clean up the original
> > > > + * allocation from dma_fence_user_fence_alloc().
> > > > + */
> > > > +void dma_fence_user_fence_free(struct dma_fence_user_fence *user_fence)
> > > > +{
> > > > +	kfree(user_fence);
> > > We need to give that child a different name, e.g. something like
> > > dma_fence_seq_write or something like that.
> > > 
> > Yea, I didn't like this name either. dma_fence_seq_write seems better.
> > 
> > > I was just about to complain that all dma_fence implementations need to be
> > > RCU safe and only then saw that this isn't a dma_fence implementation.
> > > 
> > Nope, just a helper to write back a value which user space can see when a
> > dma-fence signals.
> > 
> > > Question: Why is that useful in the first place? At least AMD HW can write
> > > to basically all memory locations and even registers when a fence finishes?
> > > 
> > This could be used in a few places.
> > 
> > 1. On VM bind completion, a seqno is written which user jobs can then wait
> > on via ring instructions. We have something similar to this in Xe for LR
> > VMs already, but I don't really like that interface - it is user address
> > + write value. This would be based on a BO + offset, which I think makes
> > a bit more sense and should perform quite a bit better too. I haven't wired
> > this up in this series but plan to do so.
> > 
> > 2. Convert an input dma-fence into a seqno writeback when the dma-fence
> > signals. Again, this seqno is something the user can wait on via ring
> > instructions.
> > 
> > The flow here would be: a user job needs to wait on an external dma-fence
> > in a syncobj, syncfile, etc., so call the convert-dma-fence-to-user-fence
> > IOCTL before the submission (patches 22 and 28 in this series), program the
> > wait via ring instructions, and then do the user submission. This would
> > avoid blocking on external dma-fences in the submission path.
> > 
> > I think this makes sense, and having a lightweight helper to normalize
> > this flow across drivers makes some sense too.
> 
> Well, we have pretty much the same concept, but all writes are done by the
> hardware and don't go through a round-trip via the CPU.
> 

Hmm, I'm curious how that works on your end. Doesn't the DMA fence
signaling have to go through the kernel?

Yes, of course, in Xe we program seqno writes through the GPU when we
can, but our bind code currently opportunistically bypasses the GPU.
Eventually, I think it will become a 100% CPU operation for various
reasons. Likewise, if a fence is coming from an external process, there
is no GPU job to write the seqno. Of course, we could issue a GPU job to
write the seqno, but this would add latency. In the case of VM bind, we
really want to completely decouple that from the GPU for various reasons
(I can explain why if needed, but it's kind of off-topic).
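
To make that flow a bit more concrete, the driver side of converting an
imported dma-fence into a seqno write would look roughly like the sketch
below (untested, and the function and variable names are made up for
illustration; bo_map is assumed to be an iosys_map into a BO that can't
move before the fence signals, e.g. because the fence is installed in the
BO's dma-resv bookkeeping slot):

#include <linux/dma-fence-user-fence.h>

/* Illustrative only: turn an imported dma-fence into a user fence. */
static int example_export_user_fence(struct dma_fence *in_fence,
				     struct iosys_map *bo_map,
				     u64 wait_value)
{
	struct dma_fence_user_fence *user_fence;

	user_fence = dma_fence_user_fence_alloc();
	if (!user_fence)
		return -ENOMEM;

	/*
	 * When in_fence signals (or immediately, if it already has), the
	 * callback writes wait_value to the mapped location and frees
	 * user_fence. User space ring instructions can then wait on that
	 * memory instead of the kernel blocking on the external dma-fence
	 * in the submission path.
	 */
	dma_fence_user_fence_attach(in_fence, user_fence, bo_map, wait_value);

	return 0;
}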

> We have a read-only mapped seq64 area in the kernel-reserved part of the VM
> address space.
> 
> Through this area the queues can see each other's fence progress, and we can
> say things like: the BO mapping and TLB flush are finished when this seq64
> increases, so please suspend further processing until you see that.
> 
> Could be that this is useful for more than Xe, but at least for AMD I
> currently don't see it.
> 

Ok, we have no other current users, and if you feel it is better to
carry this in Xe in a way that it can be moved to the common layer
later, there's no issue with that. We have several other components
like this that are generic but currently live in Xe.

Matt

> Regards,
> Christian.
> 
> > 
> > Matt
> > 
> > > Regards,
> > > Christian.
> > > 
> > > > +}
> > > > +EXPORT_SYMBOL(dma_fence_user_fence_free);
> > > > +
> > > > +/**
> > > > + * dma_fence_user_fence_attach() - Attach user fence to dma-fence
> > > > + *
> > > > + * @fence: fence
> > > > + * @user_fence: user fence
> > > > + * @map: IOSYS map to write seqno to
> > > > + * @seqno: seqno to write to IOSYS map
> > > > + *
> > > > + * Attach a user fence, which is a seqno write to an IOSYS map, to a DMA fence.
> > > > + * The caller must guarantee that the memory in the IOSYS map doesn't move
> > > > + * before the fence signals. This is typically done by installing the DMA fence
> > > > + * into the BO's DMA reservation bookkeeping slot from which the IOSYS was
> > > > + * derived.
> > > > + */
> > > > +void dma_fence_user_fence_attach(struct dma_fence *fence,
> > > > +				 struct dma_fence_user_fence *user_fence,
> > > > +				 struct iosys_map *map, u64 seqno)
> > > > +{
> > > > +	int err;
> > > > +
> > > > +	user_fence->map = *map;
> > > > +	user_fence->seqno = seqno;
> > > > +
> > > > +	err = dma_fence_add_callback(fence, &user_fence->cb, user_fence_cb);
> > > > +	if (err == -ENOENT)
> > > > +		user_fence_cb(NULL, &user_fence->cb);
> > > > +}
> > > > +EXPORT_SYMBOL(dma_fence_user_fence_attach);
> > > > diff --git a/include/linux/dma-fence-user-fence.h b/include/linux/dma-fence-user-fence.h
> > > > new file mode 100644
> > > > index 000000000000..8678129c7d56
> > > > --- /dev/null
> > > > +++ b/include/linux/dma-fence-user-fence.h
> > > > @@ -0,0 +1,31 @@
> > > > +/* SPDX-License-Identifier: MIT */
> > > > +/*
> > > > + * Copyright © 2024 Intel Corporation
> > > > + */
> > > > +
> > > > +#ifndef __LINUX_DMA_FENCE_USER_FENCE_H
> > > > +#define __LINUX_DMA_FENCE_USER_FENCE_H
> > > > +
> > > > +#include <linux/dma-fence.h>
> > > > +#include <linux/iosys-map.h>
> > > > +
> > > > +/** struct dma_fence_user_fence - User fence */
> > > > +struct dma_fence_user_fence {
> > > > +	/** @cb: dma-fence callback used to attach user fence to dma-fence */
> > > > +	struct dma_fence_cb cb;
> > > > +	/** @map: IOSYS map to write seqno to */
> > > > +	struct iosys_map map;
> > > > +	/** @seqno: seqno to write to IOSYS map */
> > > > +	u64 seqno;
> > > > +};
> > > > +
> > > > +struct dma_fence_user_fence *dma_fence_user_fence_alloc(void);
> > > > +
> > > > +void dma_fence_user_fence_free(struct dma_fence_user_fence *user_fence);
> > > > +
> > > > +void dma_fence_user_fence_attach(struct dma_fence *fence,
> > > > +				 struct dma_fence_user_fence *user_fence,
> > > > +				 struct iosys_map *map,
> > > > +				 u64 seqno);
> > > > +
> > > > +#endif
> 


