Re: [PATCH] mm,page_alloc: softlockup on warn_alloc on

Johannes Weiner wrote:
> How can we figure out if there is a bug here? Can we time the calls to
> __alloc_pages_direct_reclaim() and __alloc_pages_direct_compact() and
> drill down from there? Print out the number of times we have retried?
> We're counting no_progress_loops, but we are also very much interested
> in progress_loops that didn't result in a successful allocation. Too
> many of those and I think we want to OOM kill as per above.
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index bec5e96f3b88..01736596389a 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3830,6 +3830,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  			"page allocation stalls for %ums, order:%u",
>  			jiffies_to_msecs(jiffies-alloc_start), order);
>  		stall_timeout += 10 * HZ;
> +		goto oom;
>  	}
>  
>  	/* Avoid recursion of direct reclaim */
> @@ -3882,6 +3883,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
>  	if (read_mems_allowed_retry(cpuset_mems_cookie))
>  		goto retry_cpuset;
>  
> +oom:
>  	/* Reclaim has failed us, start killing things */
>  	page = __alloc_pages_may_oom(gfp_mask, order, ac, &did_some_progress);
>  	if (page)
> 

According to my stress tests, it is mutex_trylock() in __alloc_pages_may_oom()
that causes warn_alloc() to be called so many times. The comment

	/*
	 * Acquire the oom lock.  If that fails, somebody else is
	 * making progress for us.
	 */

is true only if the owner of oom_lock is able to call out_of_memory(), i.e. is
doing a __GFP_FS allocation. Consider a situation where 1 GFP_KERNEL allocating
thread and 99 GFP_NOFS/GFP_NOIO allocating threads are contending for oom_lock.
How likely is the OOM killer to be invoked? Very unlikely, because the GFP_KERNEL
allocating thread almost always fails to grab oom_lock while some GFP_NOFS/GFP_NOIO
allocating thread is holding it. And the GFP_KERNEL allocating thread yields CPU
time which the GFP_NOFS/GFP_NOIO allocating threads waste pointlessly.
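They waste it because out_of_memory() bails out early for IO-less contexts while
still reporting progress; in the kernels I tested, that check looks roughly like
the following (quoted from memory, so the exact form may vary by version):

	/* mm/oom_kill.c, in out_of_memory(): */
	/*
	 * The OOM killer does not compensate for IO-less reclaim.
	 * ...
	 */
	if (oc->gfp_mask && !(oc->gfp_mask & (__GFP_FS|__GFP_NOFAIL)))
		return true;	/* claims progress, but nothing is killed */
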
Replacing !mutex_trylock(&oom_lock) with mutex_lock_killable(&oom_lock)
significantly improves this situation in my stress tests. How does it behave in
your case?
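
Concretely, the change I tested amounts to the sketch below against
__alloc_pages_may_oom() in mm/page_alloc.c (the surrounding context lines may
differ slightly between kernel versions):

--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ ... @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 	/*
 	 * Acquire the oom lock.  If that fails, somebody else is
 	 * making progress for us.
 	 */
-	if (!mutex_trylock(&oom_lock)) {
+	if (mutex_lock_killable(&oom_lock)) {
 		*did_some_progress = 1;
 		schedule_timeout_uninterruptible(1);
 		return NULL;
 	}

With that, a __GFP_FS allocation sleeps until it actually holds oom_lock instead
of being bounced off it, so it eventually gets its turn to invoke the OOM killer;
the error path is now taken only when the waiting task is itself killed.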



