[LSF/MM TOPIC] get_user_pages() for PCI BAR Memory

Many systems can now support direct DMA between two PCI devices, for
instance between an RDMA NIC and an NVMe CMB, or an RDMA NIC and GPU
graphics memory. In many system architectures this peer-to-peer PCI-E
DMA transfer is critical to achieving performance, as there is simply
not enough system memory/PCI-E bandwidth for all the data traffic to go
through the CPU socket.

For many years various out-of-tree solutions have existed to serve
this need. Recently some components have been accepted into mainline,
such as the p2pdma system, which allows co-operating drivers to set up
P2P DMA transfers at the PCI level. This has allowed some kernel P2P
DMA transfers related to NVMe CMB and RDMA to become supported.
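
For reference, a rough sketch of how two co-operating in-kernel drivers
use today's p2pdma helpers (per the LWN description linked below; the
BAR number and sizes are only illustrative, and exact signatures vary a
bit by kernel version):

#include <linux/pci.h>
#include <linux/pci-p2pdma.h>
#include <linux/sizes.h>

/* Provider: publish 1MB of BAR 4 as P2P-capable memory. */
static int provider_setup(struct pci_dev *pdev)
{
        int rc = pci_p2pdma_add_resource(pdev, 4, SZ_1M, 0);

        if (rc)
                return rc;
        pci_p2pmem_publish(pdev, true);
        return 0;
}

/* Client: check that the fabric allows P2P between provider and
 * client, then carve a buffer out of the provider's BAR. */
static void *client_get_buffer(struct pci_dev *provider,
                               struct device *client, size_t len)
{
        if (pci_p2pdma_distance(provider, client, true) < 0)
                return NULL;
        return pci_alloc_p2pmem(provider, len);
}

None of this is visible from userspace, which is exactly the gap this
topic is about.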

A major next step is to enable P2P transfers under userspace
control. This is a very broad topic, but for this session I propose to
focus on the initial case of letting supporting drivers set up a P2P
transfer from a PCI BAR page mmap'd to userspace. This is the basic
starting point for future discussions on how to adapt get_user_pages()
IO paths (ie O_DIRECT, net zero copy TX, RDMA, etc) to support PCI BAR
memory.

As all current drivers doing DMA from user space must go through
get_user_pages() (or its new sibling hmm_range_fault()), some
extension of the get_user_pages() API is needed to allow drivers
supporting P2P to see the pages.
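
To make that concrete, one possible shape (purely illustrative, this is
not an existing API) is an opt-in gup flag that only P2P-aware callers
pass, so today's unaware get_user_pages() users keep failing on
BAR-backed mappings:

#include <linux/mm.h>

/* Hypothetical flag: the caller promises it knows how to DMA map and
 * release P2P (BAR-backed) pages. Not in mainline; shown only to frame
 * the discussion. */
#define FOLL_PCI_P2P 0x100000

static int pin_user_buffer(unsigned long uaddr, int npages,
                           struct page **pages)
{
        unsigned int gup_flags = FOLL_WRITE | FOLL_LONGTERM | FOLL_PCI_P2P;

        /* Callers that do not pass FOLL_PCI_P2P would keep getting an
         * error when the range covers BAR memory. */
        return get_user_pages_fast(uaddr, npages, gup_flags, pages);
}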

get_user_pages() will require some 'struct page' and 'struct
vm_area_struct' representation of the BAR memory beyond what today's
io_remap_pfn_range()/etc produces.
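
As a strawman for what 'beyond io_remap_pfn_range()' could look like
(assuming the BAR memory already has ZONE_DEVICE struct pages, e.g.
because it came from a p2pdma provider; lifetime and error handling
omitted), the driver's mmap could install devmap ptes much like fsdax
does, so the gup walkers have a struct page to find:

#include <linux/io.h>
#include <linux/mm.h>
#include <linux/pfn_t.h>

/* vm_private_data is assumed to hold the first struct page of the P2P
 * buffer backing this VMA. */
static vm_fault_t bar_vma_fault(struct vm_fault *vmf)
{
        struct page *first = vmf->vma->vm_private_data;
        pfn_t pfn = phys_to_pfn_t(page_to_phys(first + vmf->pgoff),
                                  PFN_DEV | PFN_MAP);

        /* A devmap pte lets the gup paths get back to the ZONE_DEVICE
         * struct page instead of failing as they do on VM_PFNMAP. */
        return vmf_insert_mixed(vmf->vma, vmf->address, pfn);
}

static const struct vm_operations_struct bar_vm_ops = {
        .fault = bar_vma_fault,
};

static int bar_mmap(struct file *file, struct vm_area_struct *vma)
{
        vma->vm_ops = &bar_vm_ops;
        vma->vm_flags |= VM_MIXEDMAP;   /* not VM_PFNMAP | VM_IO */
        return 0;
}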

This topic has been discussed in small groups at various conferences
over the last year (Plumbers, ALPSS, LSF/MM 2019, etc). Having a
larger group together would be productive, especially as the direction
has a notable impact on the general mm.

For patch sets, we've seen a number of attempts so far, but little has
been merged yet. Common elements of past discussions have been:
 - Building struct page for BAR memory
 - Stuffing BAR memory into scatter/gather lists, bios and skbs
 - DMA mapping BAR memory (see the sketch after this list)
 - Referencing BAR memory without a struct page
 - Managing lifetime of BAR memory across multiple drivers
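
For instance, an IO path that today unconditionally calls dma_map_sg()
would need something along these lines (is_pci_p2pdma_page() and
pci_p2pdma_map_sg() are existing p2pdma helpers, though their exact
signatures have shifted between kernel versions; a scatterlist mixing
system and BAR pages is assumed to have been rejected or split
earlier):

#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/pci-p2pdma.h>
#include <linux/scatterlist.h>

static int map_sgl_for_dma(struct device *dev, struct scatterlist *sgl,
                           int nents, enum dma_data_direction dir)
{
        /* BAR pages need the PCI bus address handling in the p2pdma
         * code, not the regular streaming DMA path. */
        if (is_pci_p2pdma_page(sg_page(sgl)))
                return pci_p2pdma_map_sg(dev, sgl, nents, dir);

        return dma_map_sg(dev, sgl, nents, dir);
}

The lifetime question is the harder half: the BAR pages, and the driver
exporting them, have to stay alive for as long as any such mapping or
pinned reference exists in another driver.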

Based on past work, the people in the CC list would be recommended
participants:

 Christian König <christian.koenig@xxxxxxx>
 Daniel Vetter <daniel.vetter@xxxxxxxx>
 Logan Gunthorpe <logang@xxxxxxxxxxxx>
 Stephen Bates <sbates@xxxxxxxxxxxx>
 Jérôme Glisse <jglisse@xxxxxxxxxx>
 Ira Weiny <iweiny@xxxxxxxxx>
 Christoph Hellwig <hch@xxxxxx>
 John Hubbard <jhubbard@xxxxxxxxxx>
 Ralph Campbell <rcampbell@xxxxxxxxxx>
 Dan Williams <dan.j.williams@xxxxxxxxx>
 Don Dutile <ddutile@xxxxxxxxxx>

Regards,
Jason

Description of the p2pdma work:
 https://lwn.net/Articles/767281/

Discussion slot at Plumbers:
 https://linuxplumbersconf.org/event/4/contributions/369/

DRM work on DMABUF as a user facing object for P2P:
 https://www.spinics.net/lists/amd-gfx/msg32469.html




