On 2024/8/10 10:39, Kasireddy, Vivek wrote:
Hi Huan,
Currently, vmap_udmabuf() builds the pages array with one entry per folio,
but ubuf->folios only contains each folio's head page. That means we
repeatedly map the folio's head page into the vmalloc area.
Fix this by selecting the correct page within each folio, so that the pages
array contains the right pages before they are mapped into the vmalloc area.
Signed-off-by: Huan Yang <link@xxxxxxxx>
---
drivers/dma-buf/udmabuf.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
index af2391cea0bf..9737f063b6b3 100644
--- a/drivers/dma-buf/udmabuf.c
+++ b/drivers/dma-buf/udmabuf.c
@@ -78,7 +78,8 @@ static int vmap_udmabuf(struct dma_buf *buf, struct iosys_map *map)
 		return -ENOMEM;
 
 	for (pg = 0; pg < ubuf->pagecount; pg++)
-		pages[pg] = &ubuf->folios[pg]->page;
+		pages[pg] = folio_page(ubuf->folios[pg],
+				       ubuf->offsets[pg] >> PAGE_SHIFT);
I believe the correct way to address this issue is to introduce a folio variant
of vm_map_ram() and use that instead, along with the offsets info.
However, for the time being, I think we can reject vmap of hugetlb folios
by checking for non-zero offset values.
Do you mean that we do not want to vmap hugetlb folios at all? If so,
rejecting them with that check seems reasonable.
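In vmap_udmabuf(), that might look roughly like the following (just an
untested sketch, placed before the pages array is filled):

	/* Sketch only: refuse to vmap buffers whose folios are mapped at a
	 * non-zero offset, i.e. buffers backed by large (hugetlb) folios. */
	for (pg = 0; pg < ubuf->pagecount; pg++)
		if (ubuf->offsets[pg])
			return -EINVAL;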
BTW, I noticed that shmem has recently started to support mTHP (although it
has to be enabled via a switch). If that is the case, hugetlb is no longer
the only source of large folios, so ignoring the offsets may not be a good
idea. (I am not entirely sure whether mTHP can be used with shmem for
memfd.)
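If the intent is really to refuse vmap for any large folio (not only
hugetlb ones), maybe an explicit check would be clearer; again only a
sketch, assuming folio_test_large() is acceptable here:

	/* Sketch only: reject any large folio explicitly, which would also
	 * cover shmem mTHP folios, not just hugetlb. */
	for (pg = 0; pg < ubuf->pagecount; pg++)
		if (folio_test_large(ubuf->folios[pg]))
			return -EINVAL;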
Thanks.
Thanks,
Vivek
vaddr = vm_map_ram(pages, ubuf->pagecount, -1);
kvfree(pages);
--
2.45.2