Re: [PATCH 4/8] vmscan: Do not writeback filesystem pages in direct reclaim

> ==== CUT HERE ====
> vmscan: Do not writeback filesystem pages in direct reclaim
> 
> When memory is under enough pressure, a process may enter direct
> reclaim to free pages in the same manner kswapd does. If a dirty page
> is encountered during the scan, the page is written to backing storage
> via mapping->writepage. This can result in very deep call stacks,
> particularly if the target storage or filesystem is complex. Stack
> overflows have already been observed on XFS, but the problem is not
> XFS-specific.
> 
> This patch prevents direct reclaim from writing back filesystem pages
> by checking whether current is kswapd or the page is anonymous before
> writing back. If the dirty pages cannot be written back, they are
> placed back on the LRU lists for either background writing by the BDI
> threads or kswapd. If dirty pages are encountered during direct lumpy
> reclaim, the process stalls to wait for the background flusher before
> trying to reclaim the pages again.
> 
> As the call-chain for writing anonymous pages is not expected to be deep
> and they are not cleaned by flusher threads, anonymous pages are still
> written back in direct reclaim.

This is also a good step towards reducing pageout() calls. For better
IO performance, the flusher threads should take over more of the work
from pageout().

> Signed-off-by: Mel Gorman <mel@xxxxxxxxx>
> Acked-by: Rik van Riel <riel@xxxxxxxxxx>
> ---
>  mm/vmscan.c |   55 +++++++++++++++++++++++++++++++++++++++----------------
>  1 files changed, 39 insertions(+), 16 deletions(-)
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 6587155..45d9934 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -139,6 +139,9 @@ static DECLARE_RWSEM(shrinker_rwsem);
>  #define scanning_global_lru(sc)        (1)
>  #endif
> 
> +/* Direct lumpy reclaim waits up to 5 seconds for background cleaning */
> +#define MAX_SWAP_CLEAN_WAIT 50
> +
>  static struct zone_reclaim_stat *get_reclaim_stat(struct zone *zone,
>                                                   struct scan_control *sc)
>  {
> @@ -644,11 +647,13 @@ static noinline_for_stack void free_page_list(struct list_head *free_pages)
>   */
>  static unsigned long shrink_page_list(struct list_head *page_list,
>                                         struct scan_control *sc,
> -                                       enum pageout_io sync_writeback)
> +                                       enum pageout_io sync_writeback,
> +                                       unsigned long *nr_still_dirty)
>  {
>         LIST_HEAD(ret_pages);
>         LIST_HEAD(free_pages);
>         int pgactivate = 0;
> +       unsigned long nr_dirty = 0;
>         unsigned long nr_reclaimed = 0;
> 
>         cond_resched();
> @@ -742,6 +747,15 @@ static unsigned long shrink_page_list(struct list_head *page_list,
>                 }
> 
>                 if (PageDirty(page)) {
> +                       /*
> +                        * Only kswapd can writeback filesystem pages to
> +                        * avoid risk of stack overflow
> +                        */
> +                       if (page_is_file_cache(page) && !current_is_kswapd()) {
> +                               nr_dirty++;
> +                               goto keep_locked;
> +                       }
> +
>                         if (references == PAGEREF_RECLAIM_CLEAN)
>                                 goto keep_locked;
>                         if (!may_enter_fs)
> @@ -858,7 +872,7 @@ keep:
> 
>         free_page_list(&free_pages);
> 
> -       list_splice(&ret_pages, page_list);
> +       *nr_still_dirty = nr_dirty;
>         count_vm_events(PGACTIVATE, pgactivate);
>         return nr_reclaimed;
>  }
> @@ -1245,6 +1259,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct zone *zone,
>         unsigned long nr_active;
>         unsigned long nr_anon;
>         unsigned long nr_file;
> +       unsigned long nr_dirty;
> 
>         while (unlikely(too_many_isolated(zone, file, sc))) {
>                 congestion_wait(BLK_RW_ASYNC, HZ/10);
> @@ -1293,26 +1308,34 @@ shrink_inactive_list(unsigned long nr_to_scan, struct zone *zone,
> 
>         spin_unlock_irq(&zone->lru_lock);
> 
> -       nr_reclaimed = shrink_page_list(&page_list, sc, PAGEOUT_IO_ASYNC);
> +       nr_reclaimed = shrink_page_list(&page_list, sc, PAGEOUT_IO_ASYNC,
> +                                                               &nr_dirty);
> 
>         /*
> -        * If we are direct reclaiming for contiguous pages and we do
> +        * If specific pages are needed such as with direct reclaiming
> +        * for contiguous pages or for memory containers and we do
>          * not reclaim everything in the list, try again and wait
> -        * for IO to complete. This will stall high-order allocations
> -        * but that should be acceptable to the caller
> +        * for IO to complete. This will stall callers that require
> +        * specific pages but it should be acceptable to the caller
>          */
> -       if (nr_reclaimed < nr_taken && !current_is_kswapd() &&
> -                       sc->lumpy_reclaim_mode) {
> -               congestion_wait(BLK_RW_ASYNC, HZ/10);
> +       if (sc->may_writepage && !current_is_kswapd() &&
> +                       (sc->lumpy_reclaim_mode || sc->mem_cgroup)) {
> +               int dirty_retry = MAX_SWAP_CLEAN_WAIT;
> 
> -               /*
> -                * The attempt at page out may have made some
> -                * of the pages active, mark them inactive again.
> -                */
> -               nr_active = clear_active_flags(&page_list, NULL);
> -               count_vm_events(PGDEACTIVATE, nr_active);
> +               while (nr_reclaimed < nr_taken && nr_dirty && dirty_retry--) {
> +                       wakeup_flusher_threads(laptop_mode ? 0 : nr_dirty);
> +                       congestion_wait(BLK_RW_ASYNC, HZ/10);

It takes a lot of luck for the flusher threads to "happen to" sync
exactly the dirty pages in our page_list. I'd rather read the logic as
"there are too many dirty pages; shrink them to avoid some future
pageout() calls and/or congestion_wait() stalls".

So the loop is likely to repeat MAX_SWAP_CLEAN_WAIT times. Shall we
remove it? (See the sketch after the next hunk.)

> -               nr_reclaimed += shrink_page_list(&page_list, sc, PAGEOUT_IO_SYNC);
> +                       /*
> +                        * The attempt at page out may have made some
> +                        * of the pages active, mark them inactive again.
> +                        */
> +                       nr_active = clear_active_flags(&page_list, NULL);
> +                       count_vm_events(PGDEACTIVATE, nr_active);
> +
> +                       nr_reclaimed += shrink_page_list(&page_list, sc,
> +                                               PAGEOUT_IO_SYNC, &nr_dirty);

Also, with the loop this shrink_page_list() won't be called at all
when nr_dirty == 0, even if pageout() has queued pages for writeback.
That is a change of behavior, and it too would be fixed by removing
the loop.
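
Roughly, a loop-free variant might look like this (an untested sketch
that just reuses the names from this patch, not a drop-in
replacement):

	if (sc->may_writepage && !current_is_kswapd() &&
			nr_reclaimed < nr_taken &&
			(sc->lumpy_reclaim_mode || sc->mem_cgroup)) {
		/* Nudge the flusher threads once instead of polling them */
		if (nr_dirty)
			wakeup_flusher_threads(laptop_mode ? 0 : nr_dirty);
		congestion_wait(BLK_RW_ASYNC, HZ/10);

		/*
		 * The attempt at page out may have made some
		 * of the pages active, mark them inactive again.
		 */
		nr_active = clear_active_flags(&page_list, NULL);
		count_vm_events(PGDEACTIVATE, nr_active);

		/* The sync pass runs even when nr_dirty == 0, as before */
		nr_reclaimed += shrink_page_list(&page_list, sc,
						PAGEOUT_IO_SYNC, &nr_dirty);
	}

That keeps the single congestion_wait() stall of the old code while
still handing the bulk of the dirty pages to the flushers.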

Thanks,
Fengguang
