Re: [PATCH] mm, vmscan: guarantee drop_slab_node() termination

On Wed, Aug 18, 2021 at 05:22:39PM +0200, Vlastimil Babka wrote:
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 403a175a720f..ef3554314b47 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -936,6 +936,7 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
>  void drop_slab_node(int nid)
>  {
>  	unsigned long freed;
> +	int shift = 0;
>  
>  	do {
>  		struct mem_cgroup *memcg = NULL;
> @@ -948,7 +949,7 @@ void drop_slab_node(int nid)
>  		do {
>  			freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
>  		} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
> -	} while (freed > 10);
> +	} while ((freed >> shift++) > 0);

This can, if you're really unlucky, produce UB.  If you free 2^63 items
when shift is 63, then 2^63 >> 63 is 1 and shift becomes 64, producing
UB on the next iteration.  We could do:

	} while (shift < BITS_PER_LONG && (freed >> shift++) > 0);

but honestly, that feels silly.  How about:

	} while ((freed >> shift++) > 1);

almost exactly as arbitrary, but guarantees no UB.



