Re: How to handle TIF_MEMDIE stalls?

On Mon, Mar 02, 2015 at 06:10:58PM +0100, Michal Hocko wrote:
> On Mon 02-03-15 11:05:37, Johannes Weiner wrote:
> > On Mon, Mar 02, 2015 at 04:18:32PM +0100, Michal Hocko wrote:
> [...]
> > > A typical busy system won't be very far from the high watermark,
> > > so reclaim would be performed while the watermarks are increased
> > > (aka during a reservation), and that might lead to visible
> > > performance degradation. This might be acceptable, but it also
> > > adds a certain level of unpredictability, because performance
> > > characteristics might change suddenly.
> > 
> > There is usually a good deal of clean cache.  As Dave pointed out
> > before, clean cache can be considered re-allocatable from NOFS
> > contexts, and so we'd only have to maintain this invariant:
> > 
> > 	min_wmark + private_reserves < free_pages + clean_cache
> 
> Do I understand you correctly that we do not have to reclaim clean pages
> as per the above invariant?
> 
> If yes, how do you account for overcommit of the clean_cache by
> multiple requestors (who are doing reservations)?
> My point was that if we keep clean pages on the LRU rather than
> forcing them to be reclaimed via increased watermarks, then it might
> happen that different callers with access to reserves wouldn't get
> the promised amount of reserved memory, because clean_cache is
> basically a shared resource.

The sum of all private reservations has to be accounted for globally;
we obviously can't overcommit the available resources in order to solve
problems stemming from overcommitting the available resources.
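
To make that concrete, here is a minimal userspace sketch of the global
accounting.  All names here (mem_reserve(), total_reserved, the page
counts) are invented for illustration and are not the actual kernel
interface; the real state would be per-zone:

#include <stdbool.h>
#include <stdio.h>

/* Illustrative globals, all in pages; one pool is enough to show
 * the accounting rule. */
static unsigned long min_wmark   = 1024;
static unsigned long free_pages  = 4096;
static unsigned long clean_cache = 8192;
static unsigned long total_reserved;  /* sum of all private reservations */

/*
 * Grant a reservation only if
 *     min_wmark + private_reserves < free_pages + clean_cache
 * still holds with the new reservation included; anything more
 * would overcommit the pool the reservations are backed by.
 */
static bool mem_reserve(unsigned long nr_pages)
{
	if (min_wmark + total_reserved + nr_pages >=
	    free_pages + clean_cache)
		return false;
	total_reserved += nr_pages;
	return true;
}

static void mem_unreserve(unsigned long nr_pages)
{
	total_reserved -= nr_pages;
}

int main(void)
{
	printf("reserve  2048: %s\n", mem_reserve(2048)  ? "ok" : "denied");
	printf("reserve 16384: %s\n", mem_reserve(16384) ? "ok" : "denied");
	mem_unreserve(2048);
	return 0;
}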

The page allocator can't hand out free pages, and page reclaim cannot
reclaim clean cache, unless that invariant is met.  Both have to
consider the reserved pages consumed.  It's the same as pre-allocation;
the only thing we save is having to actually reclaim the pages and take
them off the freelist at reservation time - which is a good
optimization, since the filesystem might not actually need them all.
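
And the enforcement side, as a sketch on top of the same invented
globals; both the allocation and the reclaim path would apply the same
check, treating reserved pages as already consumed:

/*
 * Taking a page out of the pool, whether straight off the freelist
 * or by reclaiming a clean cache page for a non-reserving caller,
 * shrinks (free_pages + clean_cache) by one, so allow it only if
 * the invariant still holds afterwards.
 */
static bool pool_may_consume(unsigned long nr_pages)
{
	return min_wmark + total_reserved + nr_pages <
	       free_pages + clean_cache;
}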
