On Thu, Jul 21, 2011 at 05:28:48PM +0100, Mel Gorman wrote:
> Workloads that are allocating frequently and writing files place a
> large number of dirty pages on the LRU. With use-once logic, it is
> possible for them to reach the end of the LRU quickly, requiring the
> reclaimer to scan more to find clean pages. Ordinarily, processes
> that are dirtying memory will get throttled by dirty balancing, but
> this is a global heuristic and does not take into account that LRUs
> are maintained on a per-zone basis. This can lead to a situation
> whereby reclaim scans heavily, skipping over a large number of pages
> under writeback and recycling them around the LRU, consuming CPU.
> 
> This patch checks how many of the pages isolated from the LRU were
> dirty. If a percentage of them are dirty, the process will be
> throttled if the backing device is congested or the zone being
> scanned is marked congested. The percentage that must be dirty
> depends on the priority. At default priority, all of them must be
> dirty. At DEF_PRIORITY-1, 50% of them must be dirty; at
> DEF_PRIORITY-2, 25%; and so on. i.e. the greater the pressure, the
> more likely the process is to be throttled, allowing the flusher
> threads to make some progress.
> 
> Signed-off-by: Mel Gorman <mgorman@xxxxxxx>
> ---
>  mm/vmscan.c |   21 ++++++++++++++++++---
>  1 files changed, 18 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index cf7b501..b0060f8 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -720,7 +720,8 @@ static noinline_for_stack void free_page_list(struct list_head *free_pages)
>  static unsigned long shrink_page_list(struct list_head *page_list,
>  				      struct zone *zone,
>  				      struct scan_control *sc,
> -				      int priority)
> +				      int priority,
> +				      unsigned long *ret_nr_dirty)
>  {
>  	LIST_HEAD(ret_pages);
>  	LIST_HEAD(free_pages);
> @@ -971,6 +972,7 @@ keep_lumpy:
> 
>  	list_splice(&ret_pages, page_list);
>  	count_vm_events(PGACTIVATE, pgactivate);
> +	*ret_nr_dirty += nr_dirty;

Note that this includes anon pages, which means that swapping is
throttled as well. I don't think it is a downside to throttle swapping
during IO congestion - waiting for pages under writeback to become
reclaimable is better than kicking off even more IO in this case as
well - but the changelog and the comments should mention it, I guess.

Otherwise,
Acked-by: Johannes Weiner <jweiner@xxxxxxxxxx>
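
The hunk that actually applies the throttle is snipped from the quote
above. As a rough illustration of the heuristic the changelog
describes - not the patch's actual code - a sketch in the style of
mm/vmscan.c might look like the following, assuming the helpers of
that era (zone_is_reclaim_congested(), bdi_write_congested(),
wait_iff_congested()) and the headers already included there; the
function name throttle_on_dirty() is made up:

/*
 * Illustrative sketch only. Intended to be called from
 * shrink_inactive_list() after shrink_page_list() has accumulated
 * nr_dirty for the nr_taken pages isolated in this cycle. Note that
 * nr_dirty counts dirty anon pages too, hence the point above about
 * swapping being throttled as well.
 *
 * The threshold halves with each priority drop: at DEF_PRIORITY the
 * shift is 0, so all nr_taken pages must be dirty; at DEF_PRIORITY-1,
 * half of them; at DEF_PRIORITY-2, a quarter; and so on.
 */
static void throttle_on_dirty(struct zone *zone, unsigned long nr_taken,
			      unsigned long nr_dirty, int priority)
{
	unsigned long threshold = nr_taken >> (DEF_PRIORITY - priority);

	if (nr_dirty && nr_dirty >= threshold &&
	    (zone_is_reclaim_congested(zone) ||
	     (current->backing_dev_info &&
	      bdi_write_congested(current->backing_dev_info))))
		/* sleep up to HZ/10, or less if congestion clears first */
		wait_iff_congested(zone, BLK_RW_ASYNC, HZ / 10);
}

With this shape, a lightly pressured zone (priority == DEF_PRIORITY)
only stalls when every isolated page was dirty, while deeper scanning
priorities back off progressively sooner instead of recycling dirty
pages around the LRU.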