Re: [LSF/MM TOPIC] get_user_pages() for PCI BAR Memory

Am 07.02.20 um 19:24 schrieb Jason Gunthorpe:
Many systems can now support direct DMA between two PCI devices, for
instance between a RDMA NIC and a NVMe CMB, or a RDMA NIC and GPU
graphics memory. In many system architectures this peer-to-peer PCI-E
DMA transfer is critical to achieving performance as there is simply
not enough system memory/PCI-E bandwidth for data traffic to go
through the CPU socket.

For many years various out of tree solutions have existed to serve
this need. Recently some components have been accepted into mainline,
such as the p2pdma system, which allows co-operating drivers to setup
P2P DMA transfers at the PCI level. This has allowed some kernel P2P
DMA transfers related to NVMe CMB and RDMA to become supported.

A major next step is to enable P2P transfers under userspace
control. This is a very broad topic, but for this session I propose to
focus on the initial case where a supporting driver can set up a P2P
transfer from a PCI BAR page mmap'd to userspace. This is the basic starting
point for future discussions on how to adapt get_user_pages() IO paths
(ie O_DIRECT, net zero copy TX, RDMA, etc) to support PCI BAR memory.

As all current drivers doing DMA from user space must go through
get_user_pages() (or its new sibling hmm_range_fault()), some
extension of the get_user_pages() API is needed to allow drivers
supporting P2P to see the pages.
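
As a sketch of what such an opt-in could look like (note: FOLL_PCI_P2PDMA
is a hypothetical flag here, not something in mainline; the actual
interface is exactly what needs to be decided):

```c
/* Sketch only: FOLL_PCI_P2PDMA is a hypothetical opt-in flag. The
 * idea is that only callers which can DMA-map BAR pages would set
 * it; legacy get_user_pages() callers would never see such pages.
 */
#include <linux/mm.h>

static int rdma_pin_user_range(unsigned long start, unsigned long npages,
			       struct page **pages)
{
	unsigned int gup_flags = FOLL_WRITE | FOLL_LONGTERM;

	/* Assert that this caller understands P2P BAR memory */
	gup_flags |= FOLL_PCI_P2PDMA;

	return get_user_pages_fast(start, npages, gup_flags, pages);
}
```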

get_user_pages() will require some 'struct page' and 'struct
vm_area_struct' representation of the BAR memory beyond what today's
io_remap_pfn_range()/etc produces.
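
For reference, the in-tree p2pdma code already creates ZONE_DEVICE
struct pages for CMB-style BAR memory via devm_memremap_pages(); a
minimal sketch of a driver publishing a BAR with that API (BAR number
is illustrative, error handling trimmed):

```c
#include <linux/pci-p2pdma.h>

static int publish_cmb(struct pci_dev *pdev)
{
	int rc;

	/* Create ZONE_DEVICE struct pages covering BAR 4, so the
	 * memory can be carried in scatterlists like ordinary pages.
	 */
	rc = pci_p2pdma_add_resource(pdev, 4, pci_resource_len(pdev, 4), 0);
	if (rc)
		return rc;

	/* Make the memory available to other p2pdma-aware drivers */
	pci_p2pmem_publish(pdev, true);
	return 0;
}
```

The open question above is how to get an equivalent representation for
BAR memory that today only exists via io_remap_pfn_range()-style
mappings.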

This topic has been discussed in small groups in various conferences
over the last year, (plumbers, ALPSS, LSF/MM 2019, etc). Having a
larger group together would be productive, especially as the direction
has a notable impact on the general mm.

For patch sets, we've seen a number of attempts so far, but little has
been merged yet. Common elements of past discussions have been:
  - Building struct page for BAR memory
  - Stuffing BAR memory into scatter/gather lists, bios and skbs
  - DMA mapping BAR memory
  - Referencing BAR memory without a struct page
  - Managing lifetime of BAR memory across multiple drivers

I can only repeat what Jérôme said: this most likely will never work correctly with get_user_pages().

One of the main issues is that if you want to cover all use cases you also need to take into account P2P operations which are hidden from the CPU.

E.g., memory that is not even CPU addressable, but can be shared between GPUs using XGMI, NVLink, SLI, etc.

Since you can't get a struct page for something the CPU can't even address, the whole idea of using get_user_pages() fails from the very beginning.

That's also the reason why for GPUs we opted to use DMA-buf based sharing of buffers between drivers instead.
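
For context, the importer side of that DMA-buf sharing looks roughly
like this (standard dma-buf kernel API, error handling omitted):

```c
#include <linux/dma-buf.h>

static struct sg_table *import_peer_buffer(int fd, struct device *dev,
					   struct dma_buf_attachment **out)
{
	struct dma_buf *dmabuf = dma_buf_get(fd);
	struct dma_buf_attachment *attach;

	attach = dma_buf_attach(dmabuf, dev);

	/* The exporter hands back DMA addresses directly; no struct
	 * page or CPU mapping is ever required, which is why this
	 * works even for memory invisible to the CPU.
	 */
	*out = attach;
	return dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
}
```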

So we need to figure out how to express DMA addresses outside of the CPU address space first before we can even think about something like extending get_user_pages() for P2P in an HMM scenario.

Regards,
Christian.


Based on past work, the people in the CC list would be recommended
participants:

  Christian König <christian.koenig@xxxxxxx>
  Daniel Vetter <daniel.vetter@xxxxxxxx>
  Logan Gunthorpe <logang@xxxxxxxxxxxx>
  Stephen Bates <sbates@xxxxxxxxxxxx>
  Jérôme Glisse <jglisse@xxxxxxxxxx>
  Ira Weiny <iweiny@xxxxxxxxx>
  Christoph Hellwig <hch@xxxxxx>
  John Hubbard <jhubbard@xxxxxxxxxx>
  Ralph Campbell <rcampbell@xxxxxxxxxx>
  Dan Williams <dan.j.williams@xxxxxxxxx>
  Don Dutile <ddutile@xxxxxxxxxx>

Regards,
Jason

Description of the p2pdma work:
  https://lwn.net/Articles/767281/

Discussion slot at Plumbers:
  https://linuxplumbersconf.org/event/4/contributions/369/

DRM work on DMABUF as a user facing object for P2P:
  https://www.spinics.net/lists/amd-gfx/msg32469.html

