On 12/18/2012 11:11 AM, Michal Hocko wrote:
Since e303297 (mm: extended batches for generic mmu_gather) we batch the pages to be freed until either tlb_next_batch cannot allocate a new batch or we are done. This works just fine most of the time, but we can get into trouble with a non-preemptible kernel (CONFIG_PREEMPT_NONE or CONFIG_PREEMPT_VOLUNTARY) on large machines, where overly aggressive batching can lead to soft lockups in the process exit path (exit_mmap): there are no scheduling points anywhere down the free_pages_and_swap_cache path, so the freeing can take long enough to trigger the soft lockup detector. The lockup is harmless except when the system is set up to panic on soft lockup, which is not that unusual.

The simplest way to work around this issue is to call cond_resched() explicitly once per batch in tlb_flush_mmu (a batch is 1020 pages on x86_64).
Signed-off-by: Michal Hocko <mhocko@xxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx	# 3.0 and higher
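For illustration only, a minimal sketch of the kind of change described above, written against an approximation of the 3.x-era tlb_flush_mmu() in mm/memory.c; the surrounding function body is assumed context, not the actual patch hunk:

void tlb_flush_mmu(struct mmu_gather *tlb)
{
	struct mmu_gather_batch *batch;

	if (!tlb->need_flush)
		return;
	tlb->need_flush = 0;
	tlb_flush(tlb);

	for (batch = &tlb->local; batch; batch = batch->next) {
		free_pages_and_swap_cache(batch->pages, batch->nr);
		batch->nr = 0;
		/*
		 * Scheduling point once per batch (~1020 pages on x86_64),
		 * so tearing down a huge address space in exit_mmap() cannot
		 * run long enough between reschedules to trip the soft
		 * lockup detector on non-preemptible kernels.
		 */
		cond_resched();
	}
	tlb->active = &tlb->local;
}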
Reviewed-by: Rik van Riel <riel@xxxxxxxxxx>

-- 
All rights reversed