Re: [experimental][PATCH] mm,vmstat: per cpu stat flush too when per cpu page cache flushed

> On Wed, Oct 13, 2010 at 04:10:43PM +0900, KOSAKI Motohiro wrote:
> > Under memory shortage we use drain_pages() to flush the per-cpu
> > page lists. In that case the per-cpu vm stats should be flushed too,
> > because under memory shortage we need to know the exact number of
> > free pages.
> > 
> > Otherwise get_page_from_freelist() may fail even though the pcp
> > lists were flushed.
> > 
> 
> With my patch adjusting the threshold to a small value while kswapd is awake,
> it seems less necessary. 

I agree with this.
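
For reference, the combined flush I had in mind was roughly the
following. This is only a sketch: the wrapper itself is hypothetical,
and refresh_cpu_vm_stats() stands in for whatever fold-back helper
the real patch would use.

/*
 * Sketch: flush the per-cpu page lists and the per-cpu vmstat
 * deltas together, so that NR_FREE_PAGES is accurate for the
 * watermark checks that follow the drain.
 */
static void drain_pages_and_stats(unsigned int cpu)
{
	drain_pages(cpu);		/* flush the per-cpu page lists */
	refresh_cpu_vm_stats(cpu);	/* fold vm_stat_diff into zone counters */
}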

> It's also very hard to predict the performance of
> this. We are certainly going to take a hit to do the flush but we *might*
> gain slightly if an allocation succeeds because a watermark check passed
> when the counters were updated. It's a definite hit for a possible gain,
> though, which is not a great trade-off. It would need some performance testing.
> 
> I still think my patch on adjusting thresholds is our best proposal so
> far on how to reduce Shaohua's performance problems while still being
> safer from livelocks due to memory exhaustion.
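
(As I understand the idea, it is roughly the following sketch, not
your actual patch; calculate_pressure_threshold() is a hypothetical
helper here: clamp the per-cpu drift to a small value while kswapd is
awake, and restore the normal threshold when it goes back to sleep.)

/*
 * Sketch only: while the zone is under pressure, shrink the allowed
 * per-cpu drift so watermark checks see near-exact counters; restore
 * the full threshold once the pressure is gone.
 */
static void set_zone_percpu_threshold(struct zone *zone, bool under_pressure)
{
	int threshold = under_pressure ? calculate_pressure_threshold(zone)
				       : calculate_threshold(zone);
	int cpu;

	for_each_online_cpu(cpu)
		per_cpu_ptr(zone->pageset, cpu)->stat_threshold = threshold;
}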

OK, I will try to explain my worry in detail.

The initial variable-threshold ZVC commit (df9ecaba3f1) says:

>     [PATCH] ZVC: Scale thresholds depending on the size of the system
> 
>     The ZVC counter update threshold is currently set to a fixed value of 32.
>     This patch sets up the threshold depending on the number of processors and
>     the sizes of the zones in the system.
> 
>     With the current threshold of 32, I was able to observe slight contention
>     when more than 130-140 processors concurrently updated the counters.  The
>     contention vanished when I either increased the threshold to 64 or used
>     Andrew's idea of overstepping the interval (see ZVC overstep patch).
> 
>     However, we saw contention again at 220-230 processors.  So we need higher
>     values for larger systems.
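
For context, the scaling that commit introduced looks roughly like
this (a simplified sketch of calculate_threshold() from mm/vmstat.c;
the constants are from memory and may differ between kernel versions):

static int calculate_threshold(struct zone *zone)
{
	int threshold;
	int mem;	/* zone memory in units of 128MB */

	mem = zone->present_pages >> (27 - PAGE_SHIFT);

	/* the allowed drift grows with both CPU count and zone size */
	threshold = 2 * fls(num_online_cpus()) * (1 + fls(mem));

	/* cap it so the s8 per-cpu delta cannot overflow */
	return min(125, threshold);
}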

So, my worry is that your patch reintroduces the old cache-line contention
issue that Christoph observed on 128-256 CPU systems; the sketch below shows
where the contention comes from. May I ask what you think about this issue?
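
To make the concern concrete: the fold-back path (paraphrased below
from mm/vmstat.c of that era) only touches the shared zone->vm_stat[]
cacheline when the local delta crosses stat_threshold. A small
threshold means that atomic add fires far more often, which is
exactly the contention Christoph measured.

void __mod_zone_page_state(struct zone *zone, enum zone_stat_item item,
				int delta)
{
	struct per_cpu_pageset *pcp = this_cpu_ptr(zone->pageset);
	s8 *p = pcp->vm_stat_diff + item;
	long x = delta + *p;

	if (unlikely(x > pcp->stat_threshold || x < -pcp->stat_threshold)) {
		zone_page_state_add(x, zone, item);	/* atomic add on shared line */
		x = 0;
	}
	*p = x;
}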



