On Tue, Jun 11, 2024 at 06:26:23PM +0000, Zeng, Oak wrote:
> Thank you Leon. That is helpful.
>
> I also have another very naïve question. I don't understand what the iova
> address is. I previously thought the iova address space is the same as the
> dma_address space when an IOMMU is involved. I thought dma_alloc_iova()
> would allocate some contiguous iova address range, and the later
> dma_link_range() call would link a physical page to the iova address and
> return the iova address. In other words, I thought the dma_address is the
> iova address, and the IOMMU page table translates a dma_address, i.e. an
> iova address, to a physical address. Is this the right understanding?
>
> But from my print below, my above understanding is obviously wrong: the
> iova.dma_addr is 0 and the dma_address returned from dma_link_range() is
> non-zero. Can you help me understand what the iova address is? Is the iova
> address IOMMU related? Since dma_link_range() returns a non-iova address,
> does this function allocate the dma_address itself? Is the dma_address
> correlated with the iova address?

This is a combination of two things:
1. The need to support HMM-specific logic.
2. An outcome of the v0 version, where I implemented dma_link_range() to
perform a fallback to DMA direct mode. See patches 2 and 3:
https://lore.kernel.org/all/54a3554639bfb963c9919c5d7c1f449021bebdb3.1709635535.git.leon@xxxxxxxxxx/
https://lore.kernel.org/all/f1049f0fc280288ae2f0c1e02388cde91b0f7876.1709635535.git.leon@xxxxxxxxxx/

So dma-iova == 0 means that you are working in direct mode and not with an
IOMMU, i.e. you can translate from a physical address to a DMA address with
a simple call to phys_to_dma(). One of the review comments was that this is
not the desired behaviour and that I need to create separate functions
which are used only when an IOMMU is present. See the difference between v0
and v1 of the dma_link_range() function:

v0: https://lore.kernel.org/all/f1049f0fc280288ae2f0c1e02388cde91b0f7876.1709635535.git.leon@xxxxxxxxxx/
v1: https://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma.git/commit/?h=dma-split-v1&id=5aa29f2620ef86ac58c17a0297929a0b9e8d7790

And there is an HMM variant of the dma_link_range() function, which saves
you from copy/pasting the same HMM logic from RDMA to DRM:
https://git.kernel.org/pub/scm/linux/kernel/git/leon/linux-rdma.git/commit/?h=dma-split-v1&id=4d8d8d4fbe7891b1412f03ecaff88bc492e2e4eb
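If it helps, here is roughly what the v0 fallback meant from the caller's
point of view. This is only a sketch: the struct and helper below are made
up for illustration (the field name is taken from your prints); they are
not the code from the series.

	#include <linux/dma-direct.h>	/* phys_to_dma() */
	#include <linux/dma-mapping.h>
	#include <linux/mm.h>

	/*
	 * Illustrative stand-in for the IOVA object from the series; only
	 * the field visible in the prints above ("iova.dma_addr") is
	 * modelled here.
	 */
	struct example_iova {
		dma_addr_t dma_addr;	/* 0 means DMA direct mode, no IOMMU */
	};

	static dma_addr_t example_page_dma_addr(struct device *dev,
						struct example_iova *iova,
						struct page *page,
						u64 dma_offset)
	{
		/* Direct mode: DMA address is the translated physical one */
		if (iova->dma_addr == 0)
			return phys_to_dma(dev, page_to_phys(page));

		/* IOMMU mode: DMA address lies inside the allocated IOVA range */
		return iova->dma_addr + dma_offset;
	}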
Thanks

>
> Oak
>
> > -----Original Message-----
> > From: Leon Romanovsky <leon@xxxxxxxxxx>
> > Sent: Tuesday, June 11, 2024 11:45 AM
> > To: Zeng, Oak <oak.zeng@xxxxxxxxx>
> > Subject: Re: [RFC RESEND 00/16] Split IOMMU DMA mapping operation to
> > two steps
> >
> > On Mon, Jun 10, 2024 at 09:28:04PM +0000, Zeng, Oak wrote:
> > > Hi Jason, Leon,
> > >
> > > I was able to fix the issue from my side. Things work fine now. I got
> > > two questions though:
> > >
> > > 1) The value returned from the dma_link_range() function is not
> > > contiguous, see the print below. The "linked pa" is the function
> > > return.
> > > I think the dma_map_sgtable() API would return some contiguous dma
> > > address. Is the dma_map_sgtable() API more efficient regarding the
> > > iommu page table, i.e. does it try to use a bigger page size, such as
> > > a 2M page size, when possible? Does your new API have such a
> > > consideration as well? I vaguely remember Jason mentioned such a
> > > thing, but my print below doesn't look like it. Maybe I need to test
> > > a bigger range (only a 16-page range in the test of the print below).
> > > Comment?
> >
> > My API gives you the flexibility to use any page size you want. You can
> > use 2M pages instead of 4K pages. The API doesn't enforce any page size.
> >
> > > [17584.665126] drm_svm_hmmptr_map_dma_pages iova.dma_addr = 0x0, linked pa = 18ef3f000
> > > [17584.665146] drm_svm_hmmptr_map_dma_pages iova.dma_addr = 0x0, linked pa = 190d00000
> > > [17584.665150] drm_svm_hmmptr_map_dma_pages iova.dma_addr = 0x0, linked pa = 190024000
> > > [17584.665153] drm_svm_hmmptr_map_dma_pages iova.dma_addr = 0x0, linked pa = 178e89000
> > >
> > > 2) In the comment of the dma_link_range() function it is said:
> > > "@dma_offset needs to be advanced by the caller with the size of
> > > previous page that was linked + DMA address returned for the previous
> > > page".
> > > Is this description correct? I don't understand the part "+ DMA
> > > address returned for the previous page".
> > > In my code, let's say I call this function to link 10 pages: the
> > > first dma_offset is 0, the second is 4K, the third 8K. This worked
> > > for me. I didn't add the previously returned dma address.
> > > Maybe I need more testing. But any comment?
> >
> > You did it perfectly right. This is the correct way to advance
> > dma_offset.
> >
> > Thanks
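To spell out the two answers above with code: below is a minimal sketch of
the linking loop, reusing the illustrative struct from the earlier snippet.
The dma_link_range() signature here is an assumption for the example, not
the exact one from the series. It shows both points at once: dma_offset
advances by the size of whatever was just linked (0, 4K, 8K, ... for 4K
pages), and nothing stops a caller from linking bigger chunks and advancing
by their size instead.

	#include <linux/dma-mapping.h>
	#include <linux/errno.h>
	#include <linux/mm.h>

	static int example_link_pages(struct device *dev,
				      struct example_iova *iova,
				      struct page **pages,
				      unsigned long npages,
				      dma_addr_t *dma_addrs)
	{
		u64 dma_offset = 0;
		unsigned long i;

		for (i = 0; i < npages; i++) {
			/*
			 * Assumed signature -- see the series for the real
			 * one. Returns the DMA address the page was linked
			 * at; in direct mode these are per-page physical
			 * translations, hence the non-contiguous "linked pa"
			 * values in the prints above.
			 */
			dma_addrs[i] = dma_link_range(dev, iova,
						      page_to_phys(pages[i]),
						      dma_offset);
			if (dma_mapping_error(dev, dma_addrs[i]))
				return -ENOMEM;

			/* Advance by the size of the page just linked. */
			dma_offset += PAGE_SIZE;
		}
		return 0;
	}

A caller linking 2M folios would advance dma_offset by SZ_2M per iteration
instead of PAGE_SIZE.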
> > > Thanks,
> > > Oak
> > >
> > > > -----Original Message-----
> > > > From: Jason Gunthorpe <jgg@xxxxxxxx>
> > > > Sent: Monday, June 10, 2024 1:25 PM
> > > > To: Zeng, Oak <oak.zeng@xxxxxxxxx>
> > > > Subject: Re: [RFC RESEND 00/16] Split IOMMU DMA mapping operation
> > > > to two steps
> > > >
> > > > On Mon, Jun 10, 2024 at 04:40:19PM +0000, Zeng, Oak wrote:
> > > > > Thanks Leon and Yanjun for the reply!
> > > > >
> > > > > Based on the reply, we will continue to use the current version
> > > > > for testing (as it is tested for vfio and rdma). We will switch
> > > > > to v1 once it is fully tested/reviewed.
> > > >
> > > > I'm glad you are finding it useful; one of my interests with this
> > > > work is to improve all the HMM users.
> > > >
> > > > Jason