Re: [Intel-gfx] [PATCH 03/25] dma-buf.rst: Document why indefinite fences are a bad idea


 



Am 09.07.20 um 14:31 schrieb Daniel Vetter:
On Thu, Jul 9, 2020 at 2:11 PM Daniel Stone <daniel@xxxxxxxxxxxxx> wrote:
On Thu, 9 Jul 2020 at 09:05, Daniel Vetter <daniel@xxxxxxxx> wrote:
On Thu, Jul 09, 2020 at 08:36:43AM +0100, Daniel Stone wrote:
On Tue, 7 Jul 2020 at 21:13, Daniel Vetter <daniel.vetter@xxxxxxxx> wrote:
Comes up every few years, gets somewhat tedious to discuss, let's
write this down once and for all.
Thanks for writing this up! I wonder if any of the notes from my reply
to the previous-version thread would be helpful to more explicitly
encode the carrot of dma-fence's positive guarantees, rather than just
the stick of 'don't do this'. ;) Either way, this is:
I think the carrot should go into the intro section for dma-fence; this
section here is very much just the "don't do this" part. The previous
patches make an attempt at encoding this a bit; maybe see whether there's
a place for your reply (or parts of it) to fit?
Sounds good to me.

Acked-by: Daniel Stone <daniels@xxxxxxxxxxxxx>

What I'm not sure about is whether the text should be more explicit in
flat-out mandating the amdkfd eviction fences for long-running compute
workloads, or for workloads where userspace fencing is allowed.
... or whether we just say that you can never use dma-fence in
conjunction with userptr.
Uh, userptr is an entirely different thing; that one is ok. The problem is
userspace fences, or gpu futexes, or future fences, or whatever we want to
call them. Or is there some other confusion here?
I mean generating a dma_fence from a batch which will try to page in
userptr. Given that userptr could be backed by absolutely anything at
all, it doesn't seem smart to allow fences to rely on a pointer to an
mmap'ed NFS file. So it seems like a batch should be mutually
exclusive between using arbitrary SVM userptr and generating a dma-fence?
Locking is Tricky (tm), but essentially what at least amdgpu does is
pull in the backing storage before we publish any dma-fence, and then
apply some serious locking magic to make sure that doesn't race with a
core mm invalidation event. So for your case here the CS ioctl just
blocks until the NFS pages are pulled in.

Yeah, we went through some iterations until it all settled.

Basic idea is the following:
1. Have a sequence counter that is increased whenever a change to the page tables happens.
2. During CS, grab the current value of this counter.
3. Get all the pages you need into an array.
4. Prepare the CS, grab the low-level lock the MM notifier waits for, and double-check the counter.
5. If the counter is still the same, all is well and the DMA-fence is pushed to the hardware.
6. If the counter has changed, repeat from step 2.

This can result in a nice livelock when you constantly page things in and out, but that is expected behavior.

Christian.
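
[Editorial note: a rough C sketch of that retry loop, purely as an illustration.
The struct cs_job type and the cs_*() helpers are hypothetical stand-ins for the
driver-specific pieces, not real amdgpu or dma-buf API; the actual driver builds
this on the mmu_interval_notifier machinery.]

/* Hypothetical sketch of the sequence-counter retry loop described
 * above; none of these helpers exist in the real kernel, they only
 * mark where the driver-specific pieces go. */
static int cs_submit_userptr_job(struct cs_job *job)
{
        unsigned long seq;
        int ret;

retry:
        /* Steps 1+2: sample the counter that the MM notifier bumps on
         * every page table change. */
        seq = cs_read_pagetable_seq(job);

        /* Step 3: fault in and pin all userptr pages into an array. */
        ret = cs_pin_user_pages(job);
        if (ret)
                return ret;

        /* Step 4: take the low-level lock the MM notifier also takes
         * and double-check the counter. */
        cs_notifier_lock(job);
        if (cs_read_pagetable_seq(job) != seq) {
                /* Step 6: we raced with an invalidation, so throw the
                 * pages away and start over. */
                cs_notifier_unlock(job);
                cs_unpin_user_pages(job);
                goto retry;
        }

        /* Step 5: counter unchanged, so publish the dma_fence and push
         * the job to the hardware while still holding the lock, which
         * keeps an invalidation from sneaking in between check and push. */
        ret = cs_push_job_and_publish_fence(job);
        cs_notifier_unlock(job);
        return ret;
}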


Once we've committed to the dma-fence it's only the other way round,
i.e. core mm will stall on the dma-fence if it wants to throw out
these pages again. More or less at least. That way we never have a
dma-fence depending upon any core mm operations. The only pain here is
that this severely limits what you can do in the critical path towards
signalling a dma-fence, because the tl;dr is "no interacting with core
mm at all allowed".
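
[Editorial note: for illustration, the invalidation side of that scheme might
look roughly like the sketch below. The struct cs_job type and the cs_*()
helpers are again hypothetical; only dma_fence_wait() (from
include/linux/dma-fence.h) is the real interface such a wait would go through.]

/* Hypothetical sketch of the MM-notifier invalidation side: once a
 * dma_fence has been published, core mm has to stall on it before the
 * pages may be unmapped. Only dma_fence_wait() is real kernel API. */
static void cs_invalidate_range(struct cs_job *job)
{
        /* Bump the counter so any CS still in its retry loop notices
         * the invalidation and restarts. */
        cs_notifier_lock(job);
        cs_bump_pagetable_seq(job);
        cs_notifier_unlock(job);

        /* For work that has already published a fence, block until the
         * hardware is done with the pages. This is exactly why the
         * fence signalling path must never depend on core mm: such a
         * dependency would deadlock right here. */
        if (job->published_fence)
                dma_fence_wait(job->published_fence, false);
}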

Speaking of entirely different things ... the virtio-gpu bit really
doesn't belong in this patch.
Oops, dunno where I lost that as a separate patch. Will split it out again :-(
-Daniel

Cheers,
Daniel





