Hi Huan,

> Subject: [PATCH v5 1/7] udmabuf: pre-fault when first page fault
>
> The current udmabuf mmap uses a page fault to populate the vma.
>
> However, udmabuf has already obtained and pinned the folios by the
> time creation completes. This means that the physical memory has
> already been acquired, rather than being accessed dynamically.
>
> As a result, the page fault no longer serves its demand-paging
> purpose. Because each fault traps into kernel mode to fill in the
> page table when the corresponding virtual address of the mmap is
> accessed, this is a considerable overhead for a large udmabuf.
>
> On the first access, this patch inserts the faulting pfn into the
> page table and then pre-faults the remaining pfns into the vma. Note
> that if anything goes wrong during the pre-fault, no error is
> reported at that point; the error is reported only when the task
> first actually accesses the address.
>
> Suggested-by: Vivek Kasireddy <vivek.kasireddy@xxxxxxxxx>
> Signed-off-by: Huan Yang <link@xxxxxxxx>
> ---
>  drivers/dma-buf/udmabuf.c | 35 +++++++++++++++++++++++++++++++++--
>  1 file changed, 33 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
> index 047c3cd2ceff..0a8c231a36e1 100644
> --- a/drivers/dma-buf/udmabuf.c
> +++ b/drivers/dma-buf/udmabuf.c
> @@ -43,7 +43,8 @@ static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
>  	struct vm_area_struct *vma = vmf->vma;
>  	struct udmabuf *ubuf = vma->vm_private_data;
>  	pgoff_t pgoff = vmf->pgoff;
> -	unsigned long pfn;
> +	unsigned long addr, end, pfn;
> +	vm_fault_t ret;
>
>  	if (pgoff >= ubuf->pagecount)
>  		return VM_FAULT_SIGBUS;
> @@ -51,7 +52,37 @@ static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
>  	pfn = folio_pfn(ubuf->folios[pgoff]);
>  	pfn += ubuf->offsets[pgoff] >> PAGE_SHIFT;
>
> -	return vmf_insert_pfn(vma, vmf->address, pfn);
> +	ret = vmf_insert_pfn(vma, vmf->address, pfn);
> +	if (ret & VM_FAULT_ERROR)
> +		return ret;
> +
> +	/* pre fault */
> +	pgoff = vma->vm_pgoff;
> +	end = vma->vm_end;

Nit: use vma->vm_end directly in the loop below, as end is used only once.

> +	addr = vma->vm_start;
> +
> +	for (; addr < end; pgoff++, addr += PAGE_SIZE) {
> +		if (addr == vmf->address)
> +			continue;
> +
> +		if (WARN_ON(pgoff >= ubuf->pagecount))
> +			break;
> +
> +		pfn = folio_pfn(ubuf->folios[pgoff]);
> +

Nit: no need for a blank line here.

> +		pfn += ubuf->offsets[pgoff] >> PAGE_SHIFT;
> +
> +		/**
> +		 * If something wrong, due to this vm fault success,
> +		 * do not report in here, report only when true access
> +		 * this addr.
> +		 * So, don't update ret here, just break.

Please rewrite the above comment as:

 * If the below vmf_insert_pfn() fails, we do not return an error here
 * during this pre-fault step. However, an error will be returned if the
 * failure occurs when the addr is truly accessed.

With that,
Acked-by: Vivek Kasireddy <vivek.kasireddy@xxxxxxxxx>

> +		 */
> +		if (vmf_insert_pfn(vma, addr, pfn) & VM_FAULT_ERROR)
> +			break;
> +	}
> +
> +	return ret;
>  }
>
>  static const struct vm_operations_struct udmabuf_vm_ops = {
> --
> 2.45.2
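
P.S. For anyone else who wants to measure the impact: below is a rough
userspace sketch of the path this patch speeds up, loosely modeled on
tools/testing/selftests/drivers/dma-buf/udmabuf.c. It is not part of
the patch; error handling is mostly trimmed, and the 128M size is just
an example. The final loop is the access pattern that used to take one
page fault per page and, with this patch, takes a single fault:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <linux/udmabuf.h>

    #define TEST_SIZE (128UL << 20) /* example size only */

    int main(void)
    {
            struct udmabuf_create create;
            long page = sysconf(_SC_PAGESIZE);
            int devfd, memfd, buffd;
            volatile char *p;
            unsigned long off;

            /* udmabuf requires a shrink-sealed memfd as backing store */
            memfd = memfd_create("udmabuf-test", MFD_ALLOW_SEALING);
            ftruncate(memfd, TEST_SIZE);
            fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

            memset(&create, 0, sizeof(create));
            create.memfd  = memfd;
            create.offset = 0;
            create.size   = TEST_SIZE;

            /* UDMABUF_CREATE returns a dma-buf fd wrapping the memfd pages */
            devfd = open("/dev/udmabuf", O_RDWR);
            buffd = ioctl(devfd, UDMABUF_CREATE, &create);
            if (buffd < 0) {
                    perror("UDMABUF_CREATE");
                    return 1;
            }

            p = mmap(NULL, TEST_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
                     buffd, 0);
            if (p == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }

            /*
             * First-touch loop: previously each iteration trapped into
             * udmabuf_vm_fault(); with this patch the first fault
             * pre-populates the rest of the vma.
             */
            for (off = 0; off < TEST_SIZE; off += page)
                    p[off] = 1;

            return 0;
    }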
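Timing that loop before and after the patch (e.g. with clock_gettime()
around it) is what should show the win: once the page table is
populated by the first fault, every later touch is a plain memory
access instead of a trap. It also exercises the error semantics in the
commit message: if one of the pre-fault vmf_insert_pfn() calls fails,
the touch of that particular page is where the error gets reported.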