Re: [PATCH 7/8] writeback: sync old inodes first in background writeback

Hi, Wu.
Thanks for Cc'ing me.

AFAIR, we discussed this in private mail but didn't reach a conclusion.
Let's start from the beginning.

On Thu, Jul 22, 2010 at 05:21:55PM +0800, Wu Fengguang wrote:
> > I guess this new patch is more problem oriented and acceptable:
> > 
> > --- linux-next.orig/mm/vmscan.c	2010-07-22 16:36:58.000000000 +0800
> > +++ linux-next/mm/vmscan.c	2010-07-22 16:39:57.000000000 +0800
> > @@ -1217,7 +1217,8 @@ static unsigned long shrink_inactive_lis
> >  			count_vm_events(PGDEACTIVATE, nr_active);
> >  
> >  			nr_freed += shrink_page_list(&page_list, sc,
> > -							PAGEOUT_IO_SYNC);
> > +					priority < DEF_PRIORITY / 3 ?
> > +					PAGEOUT_IO_SYNC : PAGEOUT_IO_ASYNC);
> >  		}
> >  
> >  		nr_reclaimed += nr_freed;
> 
> This one looks better:
> ---
> vmscan: raise the bar to PAGEOUT_IO_SYNC stalls
> 
> Fix "system goes totally unresponsive with many dirty/writeback pages"
> problem:
> 
> 	http://lkml.org/lkml/2010/4/4/86
> 
> The root cause is that wait_on_page_writeback() is called too early in
> the direct reclaim path, which blocks many random/unrelated processes
> while some slow (USB stick) writeback is under way.
> 
> A simple dd can easily create a big range of dirty pages in the LRU
> list. Therefore priority can easily go below (DEF_PRIORITY - 2) in a
> typical desktop, which triggers the lumpy reclaim mode and hence
> wait_on_page_writeback().

I see an OOM message, and the order is zero.
How would lumpy reclaim work in that case?
For lumpy reclaim to kick in, we have to meet priority < 10 and sc->order > 0.

Please clarify the problem.
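
For reference, here is a minimal userspace sketch of the trigger
condition as I read the 2.6.35-era set_lumpy_reclaim_mode() code (a
paraphrase, not verbatim kernel source); with an order-0 allocation,
lumpy mode never turns on:

#include <stdio.h>
#include <stdbool.h>

#define DEF_PRIORITY		12	/* scan priority starts here */
#define PAGE_ALLOC_COSTLY_ORDER	3

/* Paraphrase of set_lumpy_reclaim_mode() in mm/vmscan.c (2.6.35). */
static bool lumpy_reclaim_mode(int order, int priority)
{
	if (order > PAGE_ALLOC_COSTLY_ORDER)
		return true;		/* costly orders: always lumpy */
	if (order && priority < DEF_PRIORITY - 2)
		return true;		/* low orders: lumpy under pressure */
	return false;			/* order-0: never lumpy */
}

int main(void)
{
	/* Andreas' report is an order-0 OOM, so this prints 0. */
	printf("order=0, priority=9: %d\n", lumpy_reclaim_mode(0, 9));
	/* An order-1 allocation under the same pressure would be lumpy. */
	printf("order=1, priority=9: %d\n", lumpy_reclaim_mode(1, 9));
	return 0;
}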

> 
> In Andreas' case, 512MB/1024 = 512KB, which is way too low compared to
> the 22MB of writeback and 190MB of dirty pages. There can easily be a

What are the 22MB and 190MB figures?
It would be better to explain them in more detail.
I think the description has to stand on its own as a summary of the
problem, without requiring the link above.
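
For what it's worth, my guess at the 512MB/1024 arithmetic (my own
reading, not stated in the mail): get_scan_count() sizes each scan
window as lru_pages >> priority, so at priority 10 a 512MB LRU yields a
512KB window, tiny next to 22MB of pages under writeback:

#include <stdio.h>

int main(void)
{
	unsigned long lru_kb = 512UL * 1024;	/* assume a 512MB LRU list */
	int priority = 10;			/* DEF_PRIORITY - 2 */

	/* per-iteration scan window: LRU size >> priority */
	printf("scan window at priority %d: %luKB\n",
	       priority, lru_kb >> priority);	/* -> 512KB */
	return 0;
}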

Thanks for tackling this problem again. :)
-- 
Kind regards,
Minchan Kim

