Re: [LSF/MM TOPIC] get_user_pages() for PCI BAR Memory

On 08.02.20 at 14:54, Jason Gunthorpe wrote:
On Sat, Feb 08, 2020 at 02:10:59PM +0100, Christian König wrote:
For patch sets, we've seen a number of attempts so far, but little has
been merged yet. Common elements of past discussions have been:
   - Building struct page for BAR memory
   - Stuffing BAR memory into scatter/gather lists, bios and skbs
   - DMA mapping BAR memory
   - Referencing BAR memory without a struct page
   - Managing lifetime of BAR memory across multiple drivers
I can only repeat what Jérôme said: this most likely will never work
correctly with get_user_pages().
I suppose I'm using 'get_user_pages()' as something of a placeholder
here to refer to the existing family of kernel DMA consumers that call
get_user_pages to work on VMA-backed, process-visible memory.

We have to have something like get_user_pages() because the kernel
call-sites are fundamentally only dealing with userspace VA. That is
how their uAPIs are designed, and we want to keep them working.
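
As a rough illustration (example_map_user_buffer() is made up, but the
helpers are the existing GUP/scatterlist/DMA-mapping APIs), such a call
site typically has this shape:

#include <linux/mm.h>
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>

/* Illustrative only: the usual shape of a kernel DMA consumer that is
 * handed a userspace VA through its uAPI.  Error handling and cleanup
 * are omitted for brevity.
 */
static int example_map_user_buffer(struct device *dev, unsigned long uaddr,
                                   unsigned long npages, struct sg_table *sgt)
{
        struct page **pages;
        int pinned;

        pages = kvmalloc_array(npages, sizeof(*pages), GFP_KERNEL);
        if (!pages)
                return -ENOMEM;

        /* resolve the user VA into struct pages via the GUP family */
        pinned = get_user_pages_fast(uaddr, npages, FOLL_WRITE, pages);
        if (pinned <= 0)
                return pinned ? pinned : -EFAULT;

        /* stuff the pages into a scatter/gather list ... */
        sg_alloc_table_from_pages(sgt, pages, pinned, 0,
                                  (unsigned long)pinned << PAGE_SHIFT,
                                  GFP_KERNEL);

        /* ... and DMA map it for the device */
        return dma_map_sg(dev, sgt->sgl, sgt->orig_nents, DMA_BIDIRECTIONAL);
}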

So, if something doesn't fit into get_user_pages(), ie because it
doesn't have a VMA in the first place, then that is some other
discussion. DMA buf seems like a pretty good answer.
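
For comparison, a DMA-buf importer needs neither a VMA nor struct pages
on its side; a minimal sketch of that path (example_import() is made up,
the dma_buf calls are the existing ones):

#include <linux/dma-buf.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>

/* Rough sketch of the importer side of DMA-buf: the exporter decides
 * how to map the memory, which may well be P2P behind the scenes.
 */
static struct sg_table *example_import(struct dma_buf *dmabuf,
                                       struct device *dev)
{
        struct dma_buf_attachment *attach;

        attach = dma_buf_attach(dmabuf, dev);
        if (IS_ERR(attach))
                return ERR_CAST(attach);

        return dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
}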

Well we do have a VMA, but I strongly think that get_user_pages() is the wrong approach for the job.

What we should do instead is grab the VMA for the addresses and then ask through the vm_operations_struct: "Hello, I'm driver X and I want to do P2P with you. Who are you? What are your capabilities? Should we use PCIe or take a shortcut through some other interconnect?" and so on.
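
Purely hypothetically, and just to make the idea concrete (none of the
callbacks or types below exist in the kernel today), that negotiation
could look something like:

#include <linux/mm_types.h>
#include <linux/device.h>
#include <linux/scatterlist.h>

/* Hypothetical sketch only.  The idea is that the importer looks up the
 * VMA covering the address and asks the exporting driver directly what
 * it is and what it can do.
 */
struct p2p_peer_info {
        struct device *provider;      /* who owns the backing memory */
        unsigned int   interconnects; /* PCIe, XGMI, NVLink, ...     */
};

struct p2p_vm_ops {  /* imagine these living in vm_operations_struct */
        int (*p2p_query)(struct vm_area_struct *vma, struct device *importer,
                         struct p2p_peer_info *info);
        int (*p2p_map)(struct vm_area_struct *vma, struct device *importer,
                       unsigned long offset, unsigned long len,
                       struct sg_table *sgt);
};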

E.g. you have memory which is not even CPU-addressable, but which can be
shared between GPUs using XGMI, NVLink, SLI, etc.
If this kind of memory is mapped into a VMA as DEVICE_PRIVATE, as
Jérôme has imagined, then it would be part of this discussion.
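
(For reference, such pages can already be recognized today; a minimal
sketch:)

#include <linux/mm.h>
#include <linux/memremap.h>

/* A DEVICE_PRIVATE page is a ZONE_DEVICE page whose dev_pagemap belongs
 * to the driver that migrated the data; the CPU cannot access the data
 * behind it directly.
 */
static struct dev_pagemap *example_owner_of(struct page *page)
{
        if (is_device_private_page(page))
                return page->pgmap;     /* identifies the owning driver */

        return NULL;
}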

I think what Jérôme had in mind with his P2P ideas around HMM was that we could do this with anonymous memory which had been migrated to a GPU device. That turned out to be rather complicated, because you would need to figure out which driver to talk to for the migrated address, and that in turn wasn't related to the VMA in any way.

What you have here is probably a rather different use case, since the whole VMA belongs to a driver. That makes things quite a bit easier to handle.

So we need to figure out how to express DMA addresses outside of the CPU
address space first before we can even think about something like extending
get_user_pages() for P2P in an HMM scenario.
Why?

Because that's how get_user_pages() works. IIRC you call it with a userspace address + length and get back filled arrays of struct page pointers and VMAs.
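
(For reference, the current prototype in include/linux/mm.h is roughly:)

long get_user_pages(unsigned long start, unsigned long nr_pages,
                    unsigned int gup_flags, struct page **pages,
                    struct vm_area_struct **vmas);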

When you don't have CPU addresses for your memory, the whole idea of that interface falls apart. So I think we need to get away from get_user_pages() and work at a higher level here.

This discussion is not exclusively about GPUs. We have many use
cases that do not have CPU-invisible memory to worry about, and I
don't think defining how DMA mapping works for CPU-invisible
interconnects overlaps with figuring out how to make get_user_pages()
work with existing ZONE_DEVICE memory types.

ie the challenge here is how to deliver the required information to
the p2pdma subsystem so a get_user_pages() call site can do a DMA map.

Improving the p2pdma subsystem to handle more complex cases like
CPU-invisible memory and interconnects is a different topic, I think :)
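
To make that concrete, a rough sketch of the simple PCIe case at such a
call site (example_dma_map() is made up; the mapping helpers are the
existing ones):

#include <linux/dma-mapping.h>
#include <linux/pci-p2pdma.h>
#include <linux/scatterlist.h>

/* Sketch only: once GUP has produced the pages, a call site could route
 * BAR-backed pages through the existing p2pdma mapping helper instead
 * of the regular streaming DMA API.  The p2p decision would e.g. come
 * from is_pci_p2pdma_page() on the GUP result.
 */
static int example_dma_map(struct device *dev, struct sg_table *sgt, bool p2p)
{
        if (p2p)
                return pci_p2pdma_map_sg(dev, sgt->sgl, sgt->orig_nents,
                                         DMA_BIDIRECTIONAL);

        return dma_map_sg(dev, sgt->sgl, sgt->orig_nents, DMA_BIDIRECTIONAL);
}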

Well, you can of course ignore those, but P2P over PCIe is really only one rather specific use case, and I would say that when we start to tackle this we should come up with something that works in all areas.

Regards,
Christian.


Regards,
Jason



