Re: [PATCH V2 4/4] cpuset,mm: update task's mems_allowed lazily

On Wed, 31 Mar 2010, Miao Xie wrote:

> diff --git a/mm/mmzone.c b/mm/mmzone.c
> index f5b7d17..43ac21b 100644
> --- a/mm/mmzone.c
> +++ b/mm/mmzone.c
> @@ -58,6 +58,7 @@ struct zoneref *next_zones_zonelist(struct zoneref *z,
>  					nodemask_t *nodes,
>  					struct zone **zone)
>  {
> +	nodemask_t tmp_nodes;
>  	/*
>  	 * Find the next suitable zone to use for the allocation.
>  	 * Only filter based on nodemask if it's set
> @@ -65,10 +66,16 @@ struct zoneref *next_zones_zonelist(struct zoneref *z,
>  	if (likely(nodes == NULL))
>  		while (zonelist_zone_idx(z) > highest_zoneidx)
>  			z++;
> -	else
> -		while (zonelist_zone_idx(z) > highest_zoneidx ||
> -				(z->zone && !zref_in_nodemask(z, nodes)))
> -			z++;
> +	else {
> +		tmp_nodes = *nodes;
> +		if (nodes_empty(tmp_nodes))
> +			while (zonelist_zone_idx(z) > highest_zoneidx)
> +				z++;
> +		else
> +			while (zonelist_zone_idx(z) > highest_zoneidx ||
> +				(z->zone && !zref_in_nodemask(z, &tmp_nodes)))
> +				z++;
> +	}
>  
>  	*zone = zonelist_zone(z);
>  	return z;

Unfortunately, you can't allocate a nodemask_t on the stack here because
this is used in the zonelist iteration for get_page_from_freelist(), which
can already occur very deep in the stack, so there's a real risk of stack
overflow.  Dynamically allocating a nodemask_t simply wouldn't scale here
either, since it would require an allocation on every iteration over a
zonelist.
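
To put a rough number on the stack cost, here's a minimal userspace sketch
(assuming CONFIG_NODES_SHIFT=10, as on many distro configs; the struct below
only models the kernel's nodemask_t bitmap, it is not the real header):

	#include <stdio.h>

	#define NODES_SHIFT	10	/* assumed config value */
	#define MAX_NUMNODES	(1 << NODES_SHIFT)
	#define BITS_PER_LONG	(8 * sizeof(unsigned long))

	/* simplified stand-in for the kernel's nodemask_t bitmap */
	typedef struct {
		unsigned long bits[(MAX_NUMNODES + BITS_PER_LONG - 1) /
				   BITS_PER_LONG];
	} nodemask_t;

	int main(void)
	{
		/* 1024 bits -> 128 bytes copied onto the stack on every
		 * call to next_zones_zonelist() in the allocation path */
		printf("sizeof(nodemask_t) = %zu bytes\n",
		       sizeof(nodemask_t));
		return 0;
	}

That's 128 bytes per call on such a config, on top of an already deep
get_page_from_freelist() call chain.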

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxxx  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@xxxxxxxxx
