On 12/1/22 09:04, Bob Pearson wrote:
> On 11/30/22 18:41, Jason Gunthorpe wrote:
>> On Wed, Nov 30, 2022 at 06:36:56PM -0600, Bob Pearson wrote:
>>> I'm not looking at my patch you responded to but the one you posted to
>>> replace maps by xarrays.
>>
>> I see, I botched that part
>>
>>> The existing rxe driver assumes that if ibmr->type == IB_MR_TYPE_DMA
>>> that the iova is just a kernel (virtual) address that is already
>>> mapped.
>>
>> No, it is not correct
>>
>>> Maybe this is not correct but it has always worked this way. These
>>> are heavily used by storage stacks (e.g. Lustre) which always use
>>> DMA MRs. Since we don't actually do any DMAs we don't need to set up
>>> the IOMMU for these and can just do memcpys without dealing with pages.
>>
>> You still should be doing the kmap
>>
>> Jason
>
> Something was disconnected in my memory, so I went back and looked at Lustre.
> It turns out it never uses IB_MR_TYPE_DMA, and for that matter I can't find
> any use cases in the rdma tree or online. So the implementation in rxe has
> almost certainly never been used.
>
> So I need to choose between 'fixing' the current implementation and just
> deleting type DMA support. I get the idea that I need to convert the iova to
> a page and kmap it, but I'm not clear how to do that. This 64-bit number
> (iova) needs to be converted to a struct page *. Without a use case to look
> at I don't know how to interpret it. Apparently it's not a virtual address.
>
> Bob

I did find a single use case: the MR created during alloc_pd. The comments seem
to imply that the use is just access to local kernel memory with va=pa. So I am
back to my previous thoughts: memcpy should just work.

Bob
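
For reference, here is a minimal sketch of the "convert the iova to a page and
kmap it" step discussed above. It assumes the iova of an IB_MR_TYPE_DMA MR
really is a kernel virtual address, which is exactly the point in question in
this thread, and the function name and signature are illustrative rather than
the actual rxe copy path:

/*
 * Hedged sketch only: assumes the DMA-MR iova is a kernel virtual address.
 * copy_from_dma_mr_iova() is an illustrative name, not a function in rxe.
 */
#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

static void copy_from_dma_mr_iova(void *dest, u64 iova, size_t len)
{
	void *vaddr = (void *)(unsigned long)iova;

	while (len) {
		/* resolve the struct page backing this virtual address */
		struct page *page = is_vmalloc_addr(vaddr) ?
				vmalloc_to_page(vaddr) : virt_to_page(vaddr);
		size_t off = offset_in_page(vaddr);
		size_t chunk = min_t(size_t, len, PAGE_SIZE - off);
		void *mapped = kmap_local_page(page);

		memcpy(dest, mapped + off, chunk);
		kunmap_local(mapped);

		dest += chunk;
		vaddr += chunk;
		len -= chunk;
	}
}

If the memory behind such an MR is always lowmem kernel memory, as the
alloc_pd comment seems to imply, kmap_local_page() just returns the
linear-map address on 64-bit, so a plain memcpy on the virtual address is
equivalent, which matches the "memcpy should just work" conclusion above.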