On 30.08.24 22:28, Barry Song wrote:
From: Jason Wang <jasowang@xxxxxxxxxx>
mm doesn't support non-blockable __GFP_NOFAIL allocations: persisting
in providing __GFP_NOFAIL service to non-blocking users, who cannot
perform direct memory reclaim, could only result in an endless busy
loop. Therefore, in such cases, the current mm core may directly
return a NULL pointer:
static inline struct page *
__alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
                       struct alloc_context *ac)
{
        ...
        if (gfp_mask & __GFP_NOFAIL) {
                /*
                 * All existing users of the __GFP_NOFAIL are blockable, so warn
                 * of any new users that actually require GFP_NOWAIT
                 */
                if (WARN_ON_ONCE_GFP(!can_direct_reclaim, gfp_mask))
                        goto fail;
                ...
        }
        ...
fail:
        warn_alloc(gfp_mask, ac->nodemask,
                   "page allocation failure: order:%u", order);
got_pg:
        return page;
}
Unfortunately, vdpa performs exactly that nofail allocation under a
non-sleepable lock. A possible way to fix it is to move the page
allocation out of the lock into the caller, but having to allocate a
huge number of pages plus an auxiliary page array seems problematic
as well, per Tetsuo: "You should implement proper error handling
instead of using __GFP_NOFAIL if count can become large."
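For illustration, a minimal sketch of the problematic pattern (the
lock name is illustrative, not the literal driver code; in VDUSE the
allocation happened under the domain's non-sleepable bounce lock):

/*
 * Holding a non-sleepable lock forbids direct reclaim, yet
 * __GFP_NOFAIL promises the allocation cannot fail; mm now
 * resolves the contradiction by warning and returning NULL.
 */
spin_lock(&domain->lock);                       /* non-sleepable context */
page = alloc_page(GFP_ATOMIC | __GFP_NOFAIL);   /* may now return NULL */
spin_unlock(&domain->lock);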
So I chose another way: do not release the kernel bounce pages when
userspace registers its own bounce pages. Then we can avoid
allocating in paths where failure is not expected (e.g. in the
release path). We pay for this with higher memory usage, since the
kernel bounce pages are no longer released, but further optimizations
could be done on top.
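Conceptually, each bounce map then tracks both pages. A rough sketch
of the resulting per-page state (field list approximate, assembled
from the fields this patch touches; see iova_domain.h):

struct vduse_bounce_map {
        struct page *bounce_page;       /* kernel page, now kept alive */
        struct page *user_bounce_page;  /* new: userspace-registered page */
        u64 orig_phys;
};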
Fixes: 6c77ed22880d ("vduse: Support using userspace pages as bounce buffer")
Reviewed-by: Xie Yongji <xieyongji@xxxxxxxxxxxxx>
Tested-by: Xie Yongji <xieyongji@xxxxxxxxxxxxx>
Signed-off-by: Jason Wang <jasowang@xxxxxxxxxx>
[v-songbaohua@xxxxxxxx: Refine the changelog]
Signed-off-by: Barry Song <v-songbaohua@xxxxxxxx>
---
drivers/vdpa/vdpa_user/iova_domain.c | 19 +++++++++++--------
drivers/vdpa/vdpa_user/iova_domain.h | 1 +
2 files changed, 12 insertions(+), 8 deletions(-)
diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/iova_domain.c
index 791d38d6284c..58116f89d8da 100644
--- a/drivers/vdpa/vdpa_user/iova_domain.c
+++ b/drivers/vdpa/vdpa_user/iova_domain.c
@@ -162,6 +162,7 @@ static void vduse_domain_bounce(struct vduse_iova_domain *domain,
                                 enum dma_data_direction dir)
 {
         struct vduse_bounce_map *map;
+        struct page *page;
         unsigned int offset;
         void *addr;
         size_t sz;
@@ -178,7 +179,10 @@ static void vduse_domain_bounce(struct vduse_iova_domain *domain,
                             map->orig_phys == INVALID_PHYS_ADDR))
                         return;
 
-                addr = kmap_local_page(map->bounce_page);
+                page = domain->user_bounce_pages ?
+                       map->user_bounce_page : map->bounce_page;
+
+                addr = kmap_local_page(page);
                 do_bounce(map->orig_phys + offset, addr + offset, sz, dir);
                 kunmap_local(addr);
                 size -= sz;
@@ -270,9 +274,8 @@ int vduse_domain_add_user_bounce_pages(struct vduse_iova_domain *domain,
                         memcpy_to_page(pages[i], 0,
                                        page_address(map->bounce_page),
                                        PAGE_SIZE);
-                        __free_page(map->bounce_page);
                 }
-                map->bounce_page = pages[i];
+                map->user_bounce_page = pages[i];
                 get_page(pages[i]);
         }
         domain->user_bounce_pages = true;
@@ -297,17 +300,17 @@ void vduse_domain_remove_user_bounce_pages(struct vduse_iova_domain *domain)
                 struct page *page = NULL;
 
                 map = &domain->bounce_maps[i];
-                if (WARN_ON(!map->user_bounce_page))
+                if (WARN_ON(!map->user_bounce_page))
                         continue;
 
                 /* Copy user page to kernel page if it's in use */
                 if (map->orig_phys != INVALID_PHYS_ADDR) {
-                        page = alloc_page(GFP_ATOMIC | __GFP_NOFAIL);
+                        page = map->bounce_page;
Why don't we need a kmap_local_page(map->bounce_page) here, when we
might perform one / have performed one in vduse_domain_bounce()?

Maybe we should simply use

        memcpy_page(map->bounce_page, 0, map->user_bounce_page, 0, PAGE_SIZE)

instead?
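For reference, memcpy_page() from include/linux/highmem.h is roughly
the following (paraphrased sketch, debug assertions omitted); it maps
both pages locally itself, so highmem pages are handled correctly
without an explicit kmap at the call site:

static inline void memcpy_page(struct page *dst_page, size_t dst_off,
                               struct page *src_page, size_t src_off,
                               size_t len)
{
        /* Map both pages for the duration of the copy ... */
        char *dst = kmap_local_page(dst_page);
        char *src = kmap_local_page(src_page);

        memcpy(dst + dst_off, src + src_off, len);

        /* ... and unmap in reverse order, as kmap_local requires. */
        kunmap_local(src);
        kunmap_local(dst);
}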
--
Cheers,
David / dhildenb