Re: [PATCH 0/8] Reduce latencies and improve overall reclaim efficiency v2

On Wed, Nov 03, 2010 at 11:50:35AM +0100, Christian Ehrhardt wrote:
> 
> 
> On 10/18/2010 03:55 PM, Mel Gorman wrote:
> > On Thu, Oct 14, 2010 at 05:28:33PM +0200, Christian Ehrhardt wrote:
> > 
> >> Seeing the patches Mel sent a few weeks ago, I realized that this series
> >> might be at least partially related to my reports from 1Q 2010 - so I ran my
> >> testcase on a few kernels to provide you with some more backing data.
> > 
> > Thanks very much for revisiting this.
> > 
> >> Results are always the average of three iozone runs, as iozone is known to be somewhat noisy - especially when affected by the issue I am trying to show here.
> >> As discussed in detail in older threads, the setup uses 16 disks and scales the number of concurrent iozone processes.
> >> Processes are evenly distributed so that there is always one process per disk.
> >> In the past we reported 40% to 80% degradation for the sequential read case based on 2.6.32, which can still be seen.
> >> What we found was that page cache allocations with the GFP_COLD flag loop for a long time between try_to_free, get_page and reclaim: because reclaim keeps making some progress, the GFP_COLD allocations loop and retry again and again.
> >> In addition, my case had no writes at all, which forced congestion_wait to wait the full timeout every time.
> >>
> >> Kernel (git)             4 procs    8 procs   16 procs   deviation (16p vs base)                comment
> >> linux-2.6.30              902694    1396073    1892624                 base                              base
> >> linux-2.6.32              752008     990425     932938               -50.7%     impact as reported in 1Q 2010
> >> linux-2.6.35               63532      71573      64083               -96.6%                    got even worse
> >> linux-2.6.35.6            176485     174442     212102               -88.8%  fixes useful, but still far away
> >> linux-2.6.36-rc4-trace    119683     188997     187012               -90.1%                         still bad
> >> linux-2.6.36-rc4-fix      884431    1114073    1470659               -22.3%            Mels fixes help a lot!
> >>
> [...]
> > If all goes according to plan,
> > kernel 2.6.37-rc1 will be of interest. Thanks again.
> 
> Here is a measurement with 2.6.37-rc1 as confirmation of progress:
>    linux-2.6.37-rc1          876588    1161876    1643430               -13.1%       even better than 2.6.36-fix
> 

Ok, great. There were a few other changes related to reclaim and
writeback that I expected to help, but I was not certain. It's good to
have confirmation.
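
To make the failure mode concrete for anyone skimming the thread: the
loop you describe can be modelled in userspace. The sketch below is a
toy, not the real mm/page_alloc.c or mm/vmscan.c logic; the function
names mirror the kernel's, but the bodies, signatures, tick rate and
the one-in-five reclaim success rate are made-up stand-ins. The point
is only that when reclaim keeps making a little progress and there is
no write congestion to clear, nearly all of the elapsed time is spent
sleeping out the full congestion_wait() timeout.

/*
 * Toy userspace model of the retry loop described above. Not kernel
 * code: the names mirror kernel functions, everything else is invented.
 */
#include <stdio.h>
#include <stdbool.h>

#define HZ		100		/* assumed tick rate */
#define TIMEOUT_TICKS	(HZ / 10)	/* ~congestion_wait(BLK_RW_ASYNC, HZ/10) */

static int free_pages;			/* the free list starts empty */

static bool get_page_from_freelist(void)
{
	if (free_pages > 0) {
		free_pages--;
		return true;
	}
	return false;
}

/* Pretend reclaim frees a page on every fifth call: "some progress". */
static bool try_to_free_pages(void)
{
	static int calls;

	if (++calls % 5 == 0) {
		free_pages++;
		return true;
	}
	return false;
}

int main(void)
{
	int retries = 0, stalled_ticks = 0;

	/* Allocate a single cold page-cache page. */
	while (!get_page_from_freelist()) {
		retries++;
		if (try_to_free_pages())
			continue;	/* progress made: retry at once */
		/*
		 * No writers means no congestion ever clears, so each
		 * of these sleeps lasts the full timeout.
		 */
		stalled_ticks += TIMEOUT_TICKS;
	}
	printf("retries=%d stalled=%dms\n",
	       retries, stalled_ticks * 1000 / HZ);
	return 0;
}

With these made-up numbers the single allocation retries five times and
spends 400ms asleep, which is the shape of the stall: the allocation
does eventually succeed, but almost all of the wall-clock time goes to
the congestion_wait timeout rather than to useful reclaim work.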

> That means 2.6.37-rc1 really delivers what we hoped for.
> It even turned out a little better than 2.6.36 + your fixes.
> 

Good. I looked over your data and I see we are still losing time, but I
don't yet have new ideas on how to improve it further without falling
into the "special case" hole. I'll keep at it, and hopefully we can
reach parity on read performance while keeping the write improvements.

Thanks a lot for testing this.

-- 
Mel Gorman
Part-time PhD Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab

