On Wed, May 18, 2011 at 1:15 AM, Mel Gorman <mgorman@xxxxxxx> wrote:
> It has been reported on some laptops that kswapd is consuming large
> amounts of CPU and not being scheduled when SLUB is enabled during
> large amounts of file copying. It is expected that this is due to
> kswapd missing every cond_resched() point because:
>
> shrink_page_list() calls cond_resched() if inactive pages were isolated
>         which in turn may not happen if all_unreclaimable is set in
>         shrink_zones(). If for whatever reason, all_unreclaimable is
>         set on all zones, we can miss calling cond_resched().
>
> balance_pgdat() only calls cond_resched() if the zones are not
>         balanced. For a high-order allocation that is balanced, it
>         checks order-0 again. During that window, order-0 might have
>         become unbalanced so it loops again for order-0 and returns
>         that it was reclaiming for order-0 to kswapd(). It can then
>         find that a caller has rewoken kswapd for a high-order and
>         re-enters balance_pgdat() without ever calling cond_resched().
>
> shrink_slab() only calls cond_resched() if we are reclaiming slab
>         pages. If there are a large number of direct reclaimers, the
>         shrinker_rwsem can be contended and prevent kswapd calling
>         cond_resched().
>
> This patch modifies the shrink_slab() case. If the semaphore is
> contended, the caller will still check cond_resched(). After each
> successful call into a shrinker, the check for cond_resched() is
> still necessary in case one shrinker call is particularly slow.
>
> This patch replaces
> mm-vmscan-if-kswapd-has-been-running-too-long-allow-it-to-sleep.patch
> in -mm.
>
> [mgorman@xxxxxxx: Preserve call to cond_resched after each call into shrinker]
> From: Minchan Kim <minchan.kim@xxxxxxxxx>

Signed-off-by: Minchan Kim <minchan.kim@xxxxxxxxx>

> Signed-off-by: Mel Gorman <mgorman@xxxxxxx>

--
Kind regards,
Minchan Kim
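
For anyone following along without the patch in front of them, the control
flow being described for the shrink_slab() case is roughly the sketch below.
This is illustrative only, not the actual diff: the function signature, the
ret bookkeeping and the elided shrinker invocation are assumptions based on
the description above rather than code quoted from mm/vmscan.c.

static unsigned long shrink_slab(struct shrink_control *shrink,
                                 unsigned long nr_pages_scanned,
                                 unsigned long lru_pages)
{
        struct shrinker *shrinker;
        unsigned long ret = 0;

        if (!down_read_trylock(&shrinker_rwsem)) {
                /* Contended: report nominal progress so callers retry later */
                ret = 1;
                goto out;
        }

        list_for_each_entry(shrinker, &shrinker_list, list) {
                /* ... call the shrinker and accumulate freed objects ... */

                /* A single shrinker may be slow; reschedule between calls */
                cond_resched();
        }

        up_read(&shrinker_rwsem);
out:
        /* Reached on the contended path too, so kswapd still yields */
        cond_resched();
        return ret;
}

The point of the shape above is that both exit paths pass through
cond_resched(), so kswapd gets a chance to be scheduled even when
shrinker_rwsem is contended by many direct reclaimers, and again after
each individual shrinker call in case one of them is particularly slow.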