Hi, Christian
Thanks for the reply.
On 12/10/20 11:53 AM, Christian König wrote:
On 12/9/20 5:46 PM, Thomas Hellström (Intel) wrote:
On 12/9/20 5:37 PM, Jason Gunthorpe wrote:
On Wed, Dec 09, 2020 at 05:36:16PM +0100, Thomas Hellström (Intel)
wrote:
Jason, Christian
In most implementations of the callback mentioned in the subject
there's a
fence wait.
What exactly is it needed for?
Invalidate must stop DMA before returning, so presumably drivers using
a dma fence are relying on a dma fence mechanism to stop DMA.
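For reference, that pattern looks roughly like the sketch below: an
mmu_interval_notifier invalidate callback that waits for the buffer's fences
before letting the core mm touch the PTEs. struct my_bo and its fields are
made up for illustration; the callback shape and dma_resv_wait_timeout_rcu()
are the interfaces of kernels from around this time, and real drivers (amdgpu
for instance) keep the equivalent state in their bo/device structs.

/* Minimal sketch of the fence wait under discussion.  struct my_bo and
 * its fields are hypothetical; the callback shape follows
 * struct mmu_interval_notifier_ops. */
#include <linux/mmu_notifier.h>
#include <linux/dma-resv.h>
#include <linux/mutex.h>
#include <linux/sched.h>

struct my_bo {
        struct mmu_interval_notifier notifier;
        struct mutex notifier_lock;
        struct dma_resv *resv;
};

static bool my_invalidate(struct mmu_interval_notifier *mni,
                          const struct mmu_notifier_range *range,
                          unsigned long cur_seq)
{
        struct my_bo *bo = container_of(mni, struct my_bo, notifier);
        long r;

        if (!mmu_notifier_range_blockable(range))
                return false;   /* can't sleep, ask the core mm to retry */

        mutex_lock(&bo->notifier_lock);
        mmu_interval_set_seq(mni, cur_seq);
        mutex_unlock(&bo->notifier_lock);

        /* The wait in question: block until all fences on the buffer have
         * signalled, i.e. until the device has stopped DMA, before the
         * core mm is allowed to change the PTEs. */
        r = dma_resv_wait_timeout_rcu(bo->resv, true, false,
                                      MAX_SCHEDULE_TIMEOUT);
        if (r <= 0)
                pr_err("failed to wait for bo fences (%ld)\n", r);

        return true;
}

static const struct mmu_interval_notifier_ops my_notifier_ops = {
        .invalidate = my_invalidate,
};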
Yes, so far I follow, but what's the reason drivers need to stop DMA?
Well, in general an invalidation means that the specified part of the
page tables is updated, either with new addresses or new access flags.
In both cases you need to stop the DMA because you could otherwise
work with stale data, e.g. read/write with the wrong addresses or
write to a read-only region, etc.
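For context, here is a simplified sketch of what that looks like from the
core mm side; the event type and the surrounding function are illustrative
only, real callers are e.g. the change_protection() and unmap paths.

/* Simplified sketch of how a PTE update is bracketed by the notifier
 * calls; the function and event type are illustrative only. */
#include <linux/mm.h>
#include <linux/mmu_notifier.h>

static void update_ptes_example(struct vm_area_struct *vma,
                                unsigned long start, unsigned long end)
{
        struct mmu_notifier_range range;

        mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_VMA, 0,
                                vma, vma->vm_mm, start, end);

        /* Drivers must have stopped all DMA to [start, end) by the time
         * this returns; that is what the fence wait above is for. */
        mmu_notifier_invalidate_range_start(&range);

        /* ... the PTEs are rewritten here with new addresses and/or new
         * access flags ... */

        mmu_notifier_invalidate_range_end(&range);
}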
Yes. That's clear. I'm just trying to understand the complete
implications of doing that.
Is it for invalidation before breaking COW after fork, or something
related?
That is just one of many use cases that can invalidate a range; there are
plenty more, triggered both from the kernel and from userspace.
Just imagine that userspace first mmap()s some anonymous memory r/w,
starts a DMA to it, and while the DMA is ongoing does a read-only mmap()
of libc to the same location.
My understanding of this particular case is that the hardware would keep
DMA-ing to the orphaned pages, which stay pinned until the driver is done
with the DMA, unless the hardware could somehow pick up the new PTE
addresses pointing at libc in flight, but not the new protection?
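For concreteness, the userspace sequence in Christian's example would look
roughly like the sketch below; start_dma_to() is a hypothetical stand-in for
whatever driver ioctl actually kicks off DMA to the buffer.

/* Hypothetical illustration of the race: DMA is started to anonymous
 * memory, then the same virtual range is replaced with a read-only
 * mapping of libc while the DMA is still in flight. */
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>

void start_dma_to(int drm_fd, void *buf, size_t len); /* hypothetical */

void race_example(int drm_fd)
{
        size_t len = 1 << 20;
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        start_dma_to(drm_fd, buf, len);   /* device DMA now targets buf */

        /* While the DMA is ongoing, map libc read-only over the same
         * address range.  The invalidate callback for [buf, buf + len)
         * fires here, while the device may still be writing. */
        int fd = open("/usr/lib/libc.so.6", O_RDONLY); /* path varies */
        mmap(buf, len, PROT_READ, MAP_PRIVATE | MAP_FIXED, fd, 0);
}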
Thanks,
Thomas
Since most hardware doesn't have recoverable page faults, guess what
would happen if we didn't wait for the DMA to finish? That would be a
security hole you could push an elephant through :)
Cheers,
Christian.
Thanks,
Thomas
Jason