On Wed, Jul 31, 2024 at 6:21 PM Tetsuo Handa
<penguin-kernel@xxxxxxxxxxxxxxxxxxx> wrote:
>
> On 2024/07/31 14:05, Barry Song wrote:
> > Jason,
> > Thank you very much. Also, Tetsuo reminded me that kmalloc_array() might be
> > problematic if the count is too large:
> > pages = kmalloc_array(count, sizeof(*pages), GFP_KERNEL | __GFP_NOFAIL);
>
> If "count" is guaranteed to be count <= 16, this might be tolerable.

It's not, unfortunately; the maximum bounce buffer size is:

#define VDUSE_MAX_BOUNCE_SIZE (1024 * 1024 * 1024)

>
> Consider a situation where the current thread was chosen as a global OOM
> victim. Trying to allocate "count" pages using
>
> for (i = 0; i < count; i++)
>         pages[i] = alloc_page(GFP_KERNEL | __GFP_NOFAIL);
>
> is not good.

Right. I wonder if we need to add a shrinker to reclaim the pages that
back the VDUSE bounce buffer.

> >
> > You might want to consider using vmalloc_array() or kvmalloc_array() instead
> > when you send a new version.
>
> There is a limitation at
> https://elixir.bootlin.com/linux/v6.11-rc1/source/mm/page_alloc.c#L3033
> that you must satisfy count <= PAGE_SIZE * 2 / sizeof(*pages) if you use
> __GFP_NOFAIL.
>
> But as already explained above, allocating 1024 pages (assuming PAGE_SIZE
> is 4096 and the pointer size is 8) when the current thread was chosen as
> an OOM victim is not recommended. You should implement proper error
> handling instead of using __GFP_NOFAIL if count can become large.

I think I need to find a way to avoid __GFP_NOFAIL. An easy way is to not
free the kernel bounce pages; then we don't need to allocate them again.

Thanks
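
P.S. For concreteness, the sketch below is roughly what "proper error
handling instead of __GFP_NOFAIL" could look like on the allocation side.
It is only an illustration with made-up names (vduse_alloc_bounce_pages()
is hypothetical, not the real VDUSE helper), and a real patch would also
need to propagate -ENOMEM back to the caller:

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/slab.h>

/*
 * Illustrative only -- not the actual VDUSE code. Allocate "count" bounce
 * pages without __GFP_NOFAIL and unwind on failure, so the caller can
 * return -ENOMEM instead of looping in the allocator when the current
 * thread is an OOM victim.
 */
static struct page **vduse_alloc_bounce_pages(unsigned long count)
{
	struct page **pages;
	unsigned long i;

	/* kvmalloc_array() falls back to vmalloc() for large counts. */
	pages = kvmalloc_array(count, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return NULL;

	for (i = 0; i < count; i++) {
		pages[i] = alloc_page(GFP_KERNEL);
		if (!pages[i])
			goto err_free;
	}
	return pages;

err_free:
	while (i--)
		__free_page(pages[i]);
	kvfree(pages);
	return NULL;
}

Dropping __GFP_NOFAIL should also sidestep the PAGE_SIZE * 2 limitation
Tetsuo pointed to, since kvmalloc_array() can fall back to vmalloc() and
alloc_page() is now allowed to fail.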