RE: [RFC RESEND 00/16] Split IOMMU DMA mapping operation to two steps

> -----Original Message-----
> From: Jason Gunthorpe <jgg@xxxxxxxx>
> Sent: Friday, May 3, 2024 12:43 PM
> To: Zeng, Oak <oak.zeng@xxxxxxxxx>
> Cc: leon@xxxxxxxxxx; Christoph Hellwig <hch@xxxxxx>; Robin Murphy
> <robin.murphy@xxxxxxx>; Marek Szyprowski
> <m.szyprowski@xxxxxxxxxxx>; Joerg Roedel <joro@xxxxxxxxxx>; Will
> Deacon <will@xxxxxxxxxx>; Chaitanya Kulkarni <chaitanyak@xxxxxxxxxx>;
> Brost, Matthew <matthew.brost@xxxxxxxxx>; Hellstrom, Thomas
> <thomas.hellstrom@xxxxxxxxx>; Jonathan Corbet <corbet@xxxxxxx>; Jens
> Axboe <axboe@xxxxxxxxx>; Keith Busch <kbusch@xxxxxxxxxx>; Sagi
> Grimberg <sagi@xxxxxxxxxxx>; Yishai Hadas <yishaih@xxxxxxxxxx>;
> Shameer Kolothum <shameerali.kolothum.thodi@xxxxxxxxxx>; Tian, Kevin
> <kevin.tian@xxxxxxxxx>; Alex Williamson <alex.williamson@xxxxxxxxxx>;
> Jérôme Glisse <jglisse@xxxxxxxxxx>; Andrew Morton <akpm@linux-
> foundation.org>; linux-doc@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx;
> linux-block@xxxxxxxxxxxxxxx; linux-rdma@xxxxxxxxxxxxxxx;
> iommu@xxxxxxxxxxxxxxx; linux-nvme@xxxxxxxxxxxxxxxxxxx;
> kvm@xxxxxxxxxxxxxxx; linux-mm@xxxxxxxxx; Bart Van Assche
> <bvanassche@xxxxxxx>; Damien Le Moal
> <damien.lemoal@xxxxxxxxxxxxxxxxxx>; Amir Goldstein
> <amir73il@xxxxxxxxx>; josef@xxxxxxxxxxxxxx; Martin K. Petersen
> <martin.petersen@xxxxxxxxxx>; daniel@xxxxxxxxxxxxx; Williams, Dan J
> <dan.j.williams@xxxxxxxxx>; jack@xxxxxxxx; Leon Romanovsky
> <leonro@xxxxxxxxxx>; Zhu Yanjun <zyjzyj2000@xxxxxxxxx>
> Subject: Re: [RFC RESEND 00/16] Split IOMMU DMA mapping operation to
> two steps
> 
> On Thu, May 02, 2024 at 11:32:55PM +0000, Zeng, Oak wrote:
> 
> > > Instead of teaching DMA to know these specific datatypes, let's separate
> > > existing DMA mapping routine to two steps and give an option to
> advanced
> > > callers (subsystems) perform all calculations internally in advance and
> > > map pages later when it is needed.
> >
> > I looked into how this scheme can be applied to DRM subsystem and GPU
> drivers.
> >
> > I figured RDMA can apply this scheme because RDMA can calculate the
> > iova size. Per my limited knowledge of rdma, user can register a
> > memory region (the reg_user_mr vfunc) and memory region's sized is
> > used to pre-allocate iova space. And in the RDMA use case, it seems
> > the user registered region can be very big, e.g., 512MiB or even GiB
> 
> In RDMA the iova would be linked to the SVA granual we discussed
> previously.

I need to learn more about this scheme.

Let's say a 512MiB granule... On a 57-bit virtual address machine, the user address space can be up to 56 bits (e.g., a half-half split between kernel and user).

So you would end up with 134,217,728 sub-regions (2^27), which is huge...

Does RDMA use a much smaller virtual address space?

With a 512MiB granule, do you fault in or map the whole 512MiB virtual address range into the RDMA page table? E.g., when a page fault happens at address A, do you fault in the entire 512MiB region containing A? How do you make sure all addresses in this 512MiB region are valid virtual addresses?
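To make the arithmetic above concrete: a 56-bit user address space divided into 512MiB (2^29-byte) granules gives 2^56 / 2^29 = 2^27 = 134,217,728 sub-regions. A throwaway helper showing the calculation (illustrative only, not proposed kernel code):

```c
#include <stdint.h>

/* Number of fixed-size granules needed to cover a va_bits-wide
 * address space; assumes granule_bytes is a power of two.
 */
static uint64_t nr_granules(unsigned int va_bits, uint64_t granule_bytes)
{
	return (1ULL << va_bits) / granule_bytes;
}
```

E.g., nr_granules(56, 512ULL << 20) is 2^27; a 48-bit (4-level paging) user space would need only 2^19 granules.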



> 
> > In GPU driver, we have a few use cases where we need dma-mapping. Just
> name two:
> >
> > 1) userptr: it is user malloc'ed/mmap'ed memory and registers to gpu
> > (in Intel's driver it is through a vm_bind api, similar to mmap). A
> > userptr can be of any random size, depending on user malloc
> > size. Today we use dma-map-sg for this use case. The down side of
> > our approach is, during userptr invalidation, even if user only
> > munmap partially of an userptr, we invalidate the whole userptr from
> > gpu page table, because there is no way for us to partially
> > dma-unmap the whole sg list. I think we can try your new API in this
> > case. The main benefit of the new approach is the partial munmap
> > case.
> 
> Yes, this is one of the main things it will improve.
> 
> > We will have to pre-allocate iova for each userptr, and we have many
> > userptrs of random size... So we might be not as efficient as RDMA
> > case where I assume user register a few big memory regions.
> 
> You are already doing this. dma_map_sg() does exactly the same IOVA
> allocation under the covers.

Sure. Then we can replace our dma_map_sg() usage with your new DMA API once it is merged. We would gain the partial-unmap benefit with only a little more code.
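If the new API lands roughly as the cover letter describes, I imagine the userptr path would look something like the sketch below. The function names (dma_alloc_iova, dma_link_page, dma_unlink_range) are placeholders for whatever the merged API ends up being called; only the two-step shape matters here:

```c
/* Step 1: at vm_bind time, reserve IOVA for the whole userptr. */
iova = dma_alloc_iova(dev, userptr->size);

/* Step 2: link pages into the reserved IOVA as they are pinned. */
for (i = 0; i < npages; i++)
	dma_link_page(dev, iova, i << PAGE_SHIFT, pages[i]);

/* On partial munmap, unlink only the affected sub-range instead of
 * invalidating the whole userptr as dma_unmap_sg() forces today.
 */
dma_unlink_range(dev, iova + unmap_offset, unmap_len);
```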

> 
> > 2) system allocator: it is malloc'ed/mmap'ed memory be used for GPU
> > program directly, without any other extra driver API call. We call
> > this use case system allocator.
> 
> > For system allocator, driver have no knowledge of which virtual
> > address range is valid in advance. So when GPU access a
> > malloc'ed/mmap'ed address, we have a page fault. We then look up a
> > CPU vma which contains the fault address. I guess we can use the CPU
> > vma size to allocate the iova space of the same size?
> 
> No. You'd follow what we discussed in the other thread.
> 
> If you do a full SVA then you'd split your MM space into granuals and
> when a fault hits a granual you'd allocate the IOVA for the whole
> granual. RDMA ODP is using a 512M granual currently.

Per the system allocator requirement, we have to do full SVA (which means ANY valid CPU virtual address is a valid GPU virtual address).

Per my calculation above, with a 512MiB granule we would end up with a huge number of sub-regions...

> 
> If you are doing sub ranges then you'd probably allocate the IOVA for
> the well defined sub range (assuming the typical use case isn't huge)

Can you explain what the sub-ranges are? Is it that the device only mirrors part of the CPU virtual address space?

How do we decide which part to mirror?


> 
> > But there will be a true difficulty to apply your scheme to this use
> > case. It is related to the STICKY flag. As I understand it, the
> > sticky flag is designed for driver to mark "this page/pfn has been
> > populated, no need to re-populate again", roughly...Unlike userptr
> > and RDMA use cases where the backing store of a buffer is always in
> > system memory, in the system allocator use case, the backing store
> > can be changing b/t system memory and GPU's device private
> > memory. Even worse, we have to assume the data migration b/t system
> > and GPU is dynamic. When data is migrated to GPU, we don't need
> > dma-map. And when migration happens to a pfn with STICKY flag, we
> > still need to repopulate this pfn. So you can see, it is not easy to
> > apply this scheme to this use case. At least I can't see an obvious
> > way.
> 
> You are already doing this today, you are keeping the sg list around
> until you unmap it.
> 
> Instead of keeping the sg list you'd keep a much smaller datastructure
> per-granual. The sticky bit is simply a convient way for ODP to manage
> the smaller data structure, you don't have to use it.
> 
> But you do need to keep track of what pages in the granual have been
> DMA mapped - sg list was doing this before. This could be a simple
> bitmap array matching the granual size.

Makes sense. We can try it once your API is ready.
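For what it's worth, the per-granule bookkeeping you describe could indeed be small: a 512MiB granule of 4KiB pages needs a 131,072-bit (16KiB) bitmap. A userspace sketch of that tracking (names and layout are my own, not from the RFC):

```c
#include <stdint.h>
#include <string.h>

#define GRANULE_SHIFT	29	/* 512 MiB granule */
#define PAGE_SHIFT	12	/* 4 KiB pages */
#define PAGES_PER_GRANULE (1UL << (GRANULE_SHIFT - PAGE_SHIFT)) /* 131072 */
#define BITMAP_WORDS	(PAGES_PER_GRANULE / 64)		/* 2048 u64s */

/* One bit per page: set when the page has been DMA mapped. */
struct granule_map {
	uint64_t mapped[BITMAP_WORDS];	/* 16 KiB per 512 MiB granule */
};

static void granule_set_mapped(struct granule_map *g, unsigned long page)
{
	g->mapped[page / 64] |= 1ULL << (page % 64);
}

static void granule_clear_mapped(struct granule_map *g, unsigned long page)
{
	g->mapped[page / 64] &= ~(1ULL << (page % 64));
}

static int granule_is_mapped(const struct granule_map *g, unsigned long page)
{
	return (g->mapped[page / 64] >> (page % 64)) & 1;
}
```

On migration to device memory we would clear the bit so a later fault repopulates the DMA mapping, which avoids needing the sticky-flag semantics.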

I still haven't figured out the granule scheme. Please help with the questions above.

Thanks,
Oak


> 
> Looking (far) forward we may be able to have a "replace" API that
> allows installing a new page unconditionally regardless of what is
> already there.
> 
> Jason




