Re: [PATCH v5 02/31] vmscan: take at least one pass with shrinkers

On Thu, May 09, 2013 at 10:06:19AM +0400, Glauber Costa wrote:
> In very low free kernel memory situations, it may be the case that we
> have fewer objects to free than our initial batch size. If so, it is
> better to shrink those and open up space for the new workload than to
> keep them and fail the new allocations. For the purpose of defining
> what "very low memory" means, we will purposefully exclude kswapd runs.
> 
> More specifically, this happens because we encode this in a loop with
> the condition: "while (total_scan >= batch_size)". So if we are in such
> a case, we'll not even enter the loop.
> 
> This patch turns it into a do {} while () loop, which guarantees that
> we scan at least once, while keeping the behaviour exactly the same
> for the cases in which total_scan > batch_size.
> 
> [ v5: differentiate no-scan case, don't do this for kswapd ]
> 
> Signed-off-by: Glauber Costa <glommer@xxxxxxxxxx>
> Reviewed-by: Dave Chinner <david@xxxxxxxxxxxxx>
> Reviewed-by: Carlos Maiolino <cmaiolino@xxxxxxxxxx>
> CC: "Theodore Ts'o" <tytso@xxxxxxx>
> CC: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
> ---
>  mm/vmscan.c | 24 +++++++++++++++++++++---
>  1 file changed, 21 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index fa6a853..49691da 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -281,12 +281,30 @@ unsigned long shrink_slab(struct shrink_control *shrink,
>  					nr_pages_scanned, lru_pages,
>  					max_pass, delta, total_scan);
>  
> -		while (total_scan >= batch_size) {
> +		do {
>  			int nr_before;
>  
> +			/*
> +			 * When we are kswapd, there is no need for us to go
> +			 * desperate and try to reclaim any number of objects
> +			 * regardless of batch size. Direct reclaim, OTOH, may
> +			 * benefit from freeing objects in any quantities. If
> +			 * the workload is actually stressing those objects,
> +			 * this may be the difference between succeeding or
> +			 * failing an allocation.
> +			 */
> +			if ((total_scan < batch_size) && current_is_kswapd())
> +				break;
> +			/*
> +			 * Differentiate between "few objects" and "no objects"
> +			 * as returned by the count step.
> +			 */
> +			if (!total_scan)
> +				break;
> +

To reduce the risk of slab reclaiming the world in the plausible cases I
outlined in my reply to the leader mail, I would go further than this and
either limit it to memcg once shrinkers are memcg-aware, or only allow the
below-batch-size scan for direct reclaim at priority == 0.
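
Something along the lines of the untested sketch below is what I have in
mind. It assumes the reclaim priority gets plumbed down into shrink_slab()
somehow (the bare "priority" here is purely illustrative; that plumbing
does not exist in this series):

		do {
			int nr_before;

			/*
			 * Only scan below batch_size when direct reclaim is
			 * at its most desperate (priority == 0). kswapd and
			 * the higher priorities keep the old batch_size
			 * cutoff, so slab is not reclaimed down to nothing
			 * on every minor bit of pressure.
			 */
			if (total_scan < batch_size &&
			    (current_is_kswapd() || priority != 0))
				break;

			/*
			 * Still differentiate "few objects" from "no objects"
			 * as returned by the count step.
			 */
			if (!total_scan)
				break;

			/* rest of the loop unchanged */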

What do you think?

-- 
Mel Gorman
SUSE Labs