Am 08.07.20 um 11:54 schrieb Daniel Vetter:
On Wed, Jul 08, 2020 at 11:22:00AM +0200, Christian König wrote:
Am 07.07.20 um 20:35 schrieb Chris Wilson:
Quoting lepton (2020-07-07 19:17:51)
On Tue, Jul 7, 2020 at 10:20 AM Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> wrote:
Quoting lepton (2020-07-07 18:05:21)
On Tue, Jul 7, 2020 at 9:00 AM Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> wrote:
If we assign obj->filp, we believe that the created vgem bo is native and
allow direct operations like mmap(), assuming it behaves as if backed by a
shmemfs inode. When imported from a dmabuf, the obj->pages are
not always meaningful and the shmemfs backing store is misleading.
Note that regular mmap access to a vgem bo is via the dumb buffer API,
and that rejects attempts to mmap an imported dmabuf.
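For illustration, this is roughly the import check in the generic
drm_gem_dumb_map_offset() helper that the dumb buffer path relies on; a
sketch of the kernel of that era, not the actual vgem patch, and the
cleanup/error codes are approximate:

#include <drm/drm_gem.h>
#include <drm/drm_vma_manager.h>

int drm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
                            u32 handle, u64 *offset)
{
        struct drm_gem_object *obj;
        int ret;

        obj = drm_gem_object_lookup(file, handle);
        if (!obj)
                return -ENOENT;

        /* Don't allow imported objects to be mapped via the dumb API. */
        if (obj->import_attach) {
                ret = -EINVAL;
                goto out;
        }

        ret = drm_gem_create_mmap_offset(obj);
        if (!ret)
                *offset = drm_vma_node_offset_addr(&obj->vma_node);
out:
        drm_gem_object_put_unlocked(obj);
        return ret;
}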
What do you mean by "regular mmap access" here? It looks like vgem is
using vgem_gem_dumb_map as its .dumb_map_offset callback, so it doesn't call
drm_gem_dumb_map_offset.
As I too found out, and so had to correct my storytelling.
By regular mmap() access I mean mmap() on the vgem bo [via the dumb buffer
API] as opposed to mmap() via an exported dma-buf fd. I had to look at
igt to see how it was being used.
Now it seems your fix is to disable "regular mmap" on imported dma-bufs
for vgem. I am not really a graphics guy, but then the API looks like this:
for a GEM handle, user space has to guess to find out the way to mmap
it. If user space guesses wrong, then the mmap will fail. Is this the
expected way for people to handle GPU buffers?
You either have a dumb buffer handle, or a dma-buf fd. If you have the
handle, you have to use the dumb buffer API, there's no other way to
mmap it. If you have the dma-buf fd, you should mmap it directly. Those
two are clear.
It's when you import the dma-buf into vgem and create a handle out of
it that the handle is no longer first class and certain uAPI calls
[the dumb buffer API in particular] fail.
It's not brilliant, as you say: it requires the user to remember the
difference between the handles. But at the same time it does prevent
them from falling into coherency traps by forcing them to use the right
driver to handle the object, and to consider the additional ioctls
that go along with that access.
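To make the two paths above concrete, a minimal userspace sketch; the
dumb-buffer ioctl and the direct dma-buf mmap are the standard uAPI paths,
while the fds, size and helper names here are just placeholders:

#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>
#include <xf86drm.h>

/* Path 1: a dumb-buffer handle is mapped via the dumb buffer API:
 * ask the driver for a fake offset, then mmap the DRM device fd. */
static void *map_dumb_handle(int drm_fd, uint32_t handle, size_t size)
{
        struct drm_mode_map_dumb arg = { .handle = handle };

        if (drmIoctl(drm_fd, DRM_IOCTL_MODE_MAP_DUMB, &arg))
                return MAP_FAILED;

        return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                    drm_fd, arg.offset);
}

/* Path 2: a dma-buf fd is mapped directly; no driver ioctl involved. */
static void *map_dmabuf_fd(int dmabuf_fd, size_t size)
{
        return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                    dmabuf_fd, 0);
}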
Yes, Chris is right. Mapping a DMA-buf through the mmap() APIs of an importer
is illegal.
What we could maybe try to do is redirect this mmap() API call on the
importer to the exporter, but I'm pretty sure that the fs layer wouldn't
like that without changes.
We already do that: there's a fully helper-ified path, from the shmem
helpers through the prime helpers I think, to forward all of this,
including handling buffer offsets and all the other back&forth lolz.
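Roughly the shape of that forwarding, sketched from memory of the
shmem/prime helpers around that time: importer_gem_mmap() and
native_mmap() are made-up names, while dma_buf_mmap(), obj->import_attach
and obj->dma_buf are the real pieces (the simplified body below drops
some of the checks the real helper does):

#include <linux/dma-buf.h>
#include <drm/drm_gem.h>

/* Importer side: an imported object has no backing store of its own, so
 * the whole vma is handed over to the dma-buf instead of our shmem pages. */
static int importer_gem_mmap(struct drm_gem_object *obj,
                             struct vm_area_struct *vma)
{
        if (obj->import_attach) {
                vma->vm_private_data = NULL;
                return dma_buf_mmap(obj->dma_buf, vma, 0);
        }

        return native_mmap(obj, vma);   /* hypothetical: map our own pages */
}

/* dma_buf_mmap(), simplified: the vma is re-pointed at the dma-buf's own
 * file before the exporter's mmap callback runs, so the resulting mapping
 * belongs to the exporter's address_space, not the importer's. */
int dma_buf_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma,
                 unsigned long pgoff)
{
        struct file *oldfile = vma->vm_file;
        int ret;

        get_file(dmabuf->file);
        vma->vm_file = dmabuf->file;
        vma->vm_pgoff = pgoff;

        ret = dmabuf->ops->mmap(dmabuf, vma);
        if (ret) {
                /* restore the old parameters on failure */
                vma->vm_file = oldfile;
                fput(dmabuf->file);
        } else if (oldfile) {
                fput(oldfile);
        }
        return ret;
}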
Oh, that most likely won't work correctly with unpinned DMA-bufs and
needs to be avoided.
Each file descriptor is associated with a struct address_space. And
when you mmap() through the importer by redirecting the system call to
the exporter, you end up with the wrong struct address_space in your VMA.
That in turn can easily go up in flames when the exporter tries to
invalidate the CPU mappings for its DMA-buf while moving it.
Where are we doing this? My last status was that this is forbidden.
Christian.
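To illustrate the failure mode being described: the exporter_bo structure
and its fields below are hypothetical, but unmap_mapping_range() and the
per-file address_space are the real mechanism the invalidation relies on.

#include <linux/fs.h>
#include <linux/mm.h>

struct exporter_bo {            /* hypothetical exporter-side object */
        struct file *backing_file;
        loff_t mmap_offset;
        loff_t size;
};

/* When the exporter moves or evicts the buffer it must tear down all CPU
 * mappings first. That teardown is keyed on an address_space: only VMAs
 * whose vm_file points at this mapping have their PTEs zapped. */
static void exporter_evict(struct exporter_bo *bo)
{
        struct address_space *mapping = bo->backing_file->f_mapping;

        unmap_mapping_range(mapping, bo->mmap_offset, bo->size, 1);

        /* A VMA created against a different file (say, the importer's
         * device node) is not on this mapping, keeps its stale PTEs, and
         * userspace keeps writing to memory the exporter just reclaimed. */
}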
Of course there's still the problem that many drivers don't forward the
cache coherency calls for begin/end CPU access, so in a bunch of cases
you'll get cacheline dirt soup. But that's kinda standard procedure for
dma-buf :-P
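On the userspace side those coherency calls correspond to the
DMA_BUF_IOCTL_SYNC uAPI; a minimal sketch of bracketing a CPU write to an
mmapped dma-buf (the fd, mapping and length are placeholders):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/dma-buf.h>

/* Bracket CPU access so the exporter gets its begin/end_cpu_access
 * callbacks and can flush or invalidate caches as needed. */
static int cpu_write(int dmabuf_fd, void *map, const void *src, size_t len)
{
        struct dma_buf_sync sync = {
                .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE,
        };

        if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync))
                return -1;

        memcpy(map, src, len);

        sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE;
        return ioctl(dmabuf_fd, DMA_BUF_IOCTL_SYNC, &sync);
}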
But yeah, trying to handle the mmap as an importer, bypassing the exporter:
nope. The one exception is if you have some kind of fancy GART with a
CPU-visible PCI BAR (like at least integrated Intel GPUs have). But in
that case the mmap very much looks & acts like device access in every way.
Cheers, Daniel
Regards,
Christian.
-Chris