[Cc Vasily]

On Wed 03-11-21 21:07:03, Uladzislau Rezki wrote:
> That extra variable has been introduced just for keeping an original
> passed gfp_mask because it is updated with __GFP_NOWARN on entry, thus
> error handling messages were broken.

I am not sure what you mean by the "error handling messages were
broken" part. It is true that the current Linus tree has broken
allocation failure reporting, but that is not the fault of
orig_gfp_mask. In fact the patch which fixes that, "mm/vmalloc: repair
warn_alloc()s in __vmalloc_area_node()" currently in the akpm tree, is
what adds the additional mask.

> Instead we can keep an original gfp_mask without modifying it and add
> an extra __GFP_NOWARN flag together with gfp_mask as a parameter to
> the vm_area_alloc_pages() function. It will make it less confusing.

I would tend to agree that this is a better approach. There is already
a nested_gfp mask and one more doesn't help readability. Maybe we
should just drop the above patch and go with one which doesn't
introduce the intermediate step and an additional gfp mask.

> Signed-off-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
> ---
>  mm/vmalloc.c | 13 ++++++-------
>  1 file changed, 6 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index d2a00ad4e1dd..3b549ff5c95e 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2920,7 +2920,6 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>  				 int node)
>  {
>  	const gfp_t nested_gfp = (gfp_mask & GFP_RECLAIM_MASK) | __GFP_ZERO;
> -	const gfp_t orig_gfp_mask = gfp_mask;
>  	unsigned long addr = (unsigned long)area->addr;
>  	unsigned long size = get_vm_area_size(area);
>  	unsigned long array_size;
> @@ -2928,7 +2927,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>  	unsigned int page_order;
> 
>  	array_size = (unsigned long)nr_small_pages * sizeof(struct page *);
> -	gfp_mask |= __GFP_NOWARN;
> +
>  	if (!(gfp_mask & (GFP_DMA | GFP_DMA32)))
>  		gfp_mask |= __GFP_HIGHMEM;
> 
> @@ -2941,7 +2940,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>  	}
> 
>  	if (!area->pages) {
> -		warn_alloc(orig_gfp_mask, NULL,
> +		warn_alloc(gfp_mask, NULL,
>  			"vmalloc error: size %lu, failed to allocated page array size %lu",
>  			nr_small_pages * PAGE_SIZE, array_size);
>  		free_vm_area(area);
> @@ -2951,8 +2950,8 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>  	set_vm_area_page_order(area, page_shift - PAGE_SHIFT);
>  	page_order = vm_area_page_order(area);
> 
> -	area->nr_pages = vm_area_alloc_pages(gfp_mask, node,
> -		page_order, nr_small_pages, area->pages);
> +	area->nr_pages = vm_area_alloc_pages(gfp_mask | __GFP_NOWARN,
> +		node, page_order, nr_small_pages, area->pages);
> 
>  	atomic_long_add(area->nr_pages, &nr_vmalloc_pages);
> 
> @@ -2961,7 +2960,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
>  	 * allocation request, free them via __vfree() if any.
>  	 */
>  	if (area->nr_pages != nr_small_pages) {
> -		warn_alloc(orig_gfp_mask, NULL,
> +		warn_alloc(gfp_mask, NULL,
>  			"vmalloc error: size %lu, page order %u, failed to allocate pages",
>  			area->nr_pages * PAGE_SIZE, page_order);
>  		goto fail;
> @@ -2969,7 +2968,7 @@ static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
> 
>  	if (vmap_pages_range(addr, addr + size, prot, area->pages,
>  			page_shift) < 0) {
> -		warn_alloc(orig_gfp_mask, NULL,
> +		warn_alloc(gfp_mask, NULL,
>  			"vmalloc error: size %lu, failed to map pages",
>  			area->nr_pages * PAGE_SIZE);
>  		goto fail;
> -- 
> 2.17.1

-- 
Michal Hocko
SUSE Labs
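
For illustration, a minimal userspace sketch of the one-mask pattern
discussed above. All names here (FLAG_NOWARN, alloc_pages_stub(),
warn_alloc_stub(), do_alloc()) are hypothetical stand-ins, not kernel
API: the no-warn bit is OR-ed in only at the allocation call site, the
caller's mask is never mutated, and the failure report therefore shows
what the caller actually passed.

/*
 * Hypothetical userspace analogue of the pattern above -- not kernel
 * code. Instead of mutating the caller's mask on entry (the old
 * "gfp_mask |= __GFP_NOWARN;"), the no-warn bit is applied only at
 * the allocation call site, so the error path still reports the mask
 * the caller actually passed.
 */
#include <stdio.h>

#define FLAG_NOWARN	0x1u	/* stand-in for __GFP_NOWARN */
#define FLAG_HIGHMEM	0x2u	/* stand-in for __GFP_HIGHMEM */

/* Stand-in for vm_area_alloc_pages(); always fails for the demo. */
static int alloc_pages_stub(unsigned int mask)
{
	printf("alloc with mask 0x%x (NOWARN %s)\n",
	       mask, (mask & FLAG_NOWARN) ? "set" : "clear");
	return -1;
}

/* Stand-in for warn_alloc(); prints the mask it is handed. */
static void warn_alloc_stub(unsigned int mask, const char *msg)
{
	printf("warn: %s, caller mask 0x%x\n", msg, mask);
}

static int do_alloc(unsigned int mask)
{
	/* mask is never modified, so the report below stays accurate */
	if (alloc_pages_stub(mask | FLAG_NOWARN) < 0) {
		warn_alloc_stub(mask, "failed to allocate pages");
		return -1;
	}
	return 0;
}

int main(void)
{
	return do_alloc(FLAG_HIGHMEM) ? 1 : 0;
}

Running this prints the allocation attempt with NOWARN set (mask 0x3)
followed by a warning that reports the caller's original mask 0x2,
which is the behavior the single-gfp_mask version of the patch
preserves for warn_alloc().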