Hi Huan,

> Currently, vmap_udmabuf populates the pages array with one entry per
> folio. But ubuf->folios only contains each folio's head page.
>
> That means we repeatedly mapped the folio head page into the vmalloc
> area.
>
> This patch fixes it by selecting the correct page within each folio, so
> that the pages array contains the right pages before they are mapped
> into the vmalloc area.
>
> Signed-off-by: Huan Yang <link@xxxxxxxx>
> ---
>  drivers/dma-buf/udmabuf.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
> index af2391cea0bf..9737f063b6b3 100644
> --- a/drivers/dma-buf/udmabuf.c
> +++ b/drivers/dma-buf/udmabuf.c
> @@ -78,7 +78,8 @@ static int vmap_udmabuf(struct dma_buf *buf, struct iosys_map *map)
>  		return -ENOMEM;
>
>  	for (pg = 0; pg < ubuf->pagecount; pg++)
> -		pages[pg] = &ubuf->folios[pg]->page;
> +		pages[pg] = folio_page(ubuf->folios[pg],
> +				       ubuf->offsets[pg] >> PAGE_SHIFT);

I believe the correct way to address this issue is to introduce a folio
variant of vm_map_ram() and use that instead, along with the offsets
info. However, for the time being, I think we can reject vmap of
hugetlb folios by checking for non-zero offset values.

Thanks,
Vivek

>
>  	vaddr = vm_map_ram(pages, ubuf->pagecount, -1);
>  	kvfree(pages);
> --
> 2.45.2