On Mon, 23 May 2011 10:53:55 +0100 Mel Gorman <mgorman@xxxxxxx> wrote:

> It has been reported on some laptops that kswapd is consuming large
> amounts of CPU and not being scheduled when SLUB is enabled during
> large amounts of file copying. It is expected that this is due to
> kswapd missing every cond_resched() point because:
>
> shrink_page_list() calls cond_resched() if inactive pages were
> isolated, which in turn may not happen if all_unreclaimable is set in
> shrink_zones(). If, for whatever reason, all_unreclaimable is set on
> all zones, we can miss calling cond_resched().
>
> balance_pgdat() only calls cond_resched() if the zones are not
> balanced. For a high-order allocation that is balanced, it checks
> order-0 again. During that window, order-0 might have become
> unbalanced, so it loops again for order-0 and returns that it was
> reclaiming for order-0 to kswapd(). It can then find that a caller has
> rewoken kswapd for a high-order allocation and re-enters
> balance_pgdat() without ever calling cond_resched().
>
> shrink_slab() only calls cond_resched() if we are reclaiming slab
> pages. If there are a large number of direct reclaimers, the
> shrinker_rwsem can be contended and prevent kswapd from calling
> cond_resched().
>
> This patch modifies the shrink_slab() case. If the semaphore is
> contended, the caller will still check cond_resched(). After each
> successful call into a shrinker, the check for cond_resched() remains
> in case one shrinker is particularly slow.

So CONFIG_PREEMPT=y kernels don't exhibit this problem?

I'm still unconvinced that we know what's going on here. What is kswapd
*doing* with all those cycles? And if kswapd is now scheduling away, who
is doing that work instead? Direct reclaim?
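
For concreteness, the shrink_slab() change described in the changelog
amounts to something like the sketch below. This is reconstructed from
the description above rather than copied from the posted diff, so the
signature and details may differ; do_shrinker_shrink() is a stand-in
name for the per-shrinker scan logic, not necessarily the real helper.

unsigned long shrink_slab(unsigned long nr_pages_scanned,
			  gfp_t gfp_mask, unsigned long lru_pages)
{
	struct shrinker *shrinker;
	unsigned long ret = 0;

	if (!down_read_trylock(&shrinker_rwsem)) {
		/*
		 * Semaphore contended: assume we will be able to shrink
		 * next time, but fall through to the cond_resched()
		 * below so kswapd still hits a scheduling point even
		 * when many direct reclaimers hold shrinker_rwsem.
		 */
		ret = 1;
		goto out;
	}

	list_for_each_entry(shrinker, &shrinker_list, list) {
		ret += do_shrinker_shrink(shrinker, nr_pages_scanned,
					  lru_pages);
		/* Remains here in case one shrinker is particularly slow. */
		cond_resched();
	}
	up_read(&shrinker_rwsem);
out:
	cond_resched();
	return ret;
}

The key point is that the contended-semaphore path no longer returns
without a scheduling opportunity: both exit paths funnel through the
cond_resched() at the out label.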