Re:

On Fri, 8 May 2009, Brandon Casey wrote:
> 
> Before (cold cache):
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>  98.60    6.365501         111     57432           lstat64
> 
> After (cold cache, no lstat fix, just cache_preload):
> % time     seconds  usecs/call     calls    errors syscall
> ------ ----------- ----------- --------- --------- ----------------
>  90.90   23.717981         413     57432           lstat64

Yes, interesting. It really smells like the total amount of work is fixed 
and there is a single lock around it. That 111us -> 413us increase is very 
consistent with four cores all serializing on the same lock. So it 
parallelizes across all four cores, but then takes exactly as long in 
total.
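
As a rough sanity check of that theory (assuming four threads all piling 
up on one fully serialized lock - that's my guess, not anything measured 
here):

	~111 usec of serialized work per call  x  4 waiters  =~ 444 usec/call
	observed per-call time after parallelizing           =~ 413 usec/call

which is close enough to say that the per-call work is essentially all 
serialized, and the extra threads mostly just end up waiting for each 
other.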

Quite frankly, 2.6.9 is so old that I have absolutely _no_ memory of what 
we used to do back then. Not that I follow NFS all that much even now - I 
did some of the original page cache and dentry work on the Linux NFS 
client way back when, but that was when I actually used NFS (and we were 
converting everything to the page cache).

I've long since forgotten everything I knew, and I'm just as happy about 
that. But clearly something is bad, and equally clearly it worked much 
better for you a couple of months ago. Which does imply that there are 
probably some CentOS issues.

Can you ask your MIS people if it would be possible to at least _test_ a 
new kernel? With 2.6.9, I'm quite frankly inclined to just say "it will 
likely never get fixed unless CentOS knows what it is", but if you test a 
more modern kernel and see similar issues, then I'll be intrigued.

It's kind of sad, but at the same time, NFS was using the BKL up until 
2.6.26 or something like that (about a year ago). And your kernel is 
based on something _much_ older.

That said, even with the BKL, NFS should allow all the actual IO to be 
done in parallel (since the BKL is dropped on scheduling). But it's really 
wasting a _lot_ of CPU time, and that hurts you enormously, even though 
the cold-cache case still seems to win, judging by your other email:

> Best without patch: 6.02 (systime 1.57)
> 
>   0.43user 1.57system 0:06.02elapsed 33%CPU (0avgtext+0avgdata 0maxresident)k
>   5336inputs+0outputs (12major+15472minor)pagefaults 0swaps
> 
> Best with patch (preload_cache,lstat reduction): 2.69 (systime 10.47)
> 
>   0.45user 10.47system 0:02.69elapsed 405%CPU (0avgtext+0avgdata 0maxresident)k
>   5336inputs+0outputs (12major+13985minor)pagefaults 0swaps

so there's a _huge_ increase in system time (again), but the change from 
33% CPU -> 405% CPU makes up for it and you get lower elapsed times.
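
Just to spell out where those percentages come from (assuming time(1) 
computes %CPU as (user+sys)/elapsed, which I believe it does):

	without patch:  (0.43 +  1.57) / 6.02  =~ 0.33  ->  ~33% CPU
	with patch:     (0.45 + 10.47) / 2.69  =~ 4.06  -> ~405% CPU

and the system time itself goes from 1.57s to 10.47s, ie roughly 6.7x.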

But that 7x increase in system time really is sad. I do suspect it's 
likely due to spinning on the BKL. And if so, then a modern kernel should 
fix it.
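
For the curious, the old pattern looks roughly like the sketch below. This 
is a minimal illustration of the generic lock_kernel() semantics, not the 
actual NFS client code, and the two helpers are made-up names standing in 
for "CPU work" and "sleep waiting for the server reply":

	#include <linux/smp_lock.h>

	/* Minimal sketch of the generic BKL pattern, not real NFS code.
	 * do_cpu_heavy_setup() and wait_for_rpc_reply() are made-up helpers.
	 */
	static int bkl_style_operation(void)
	{
		int err;

		/* One global lock, shared by every BKL user in the kernel. */
		lock_kernel();

		/* CPU work here is serialized: only one CPU at a time can be
		 * running any BKL-protected code, no matter how many cores
		 * you have. */
		err = do_cpu_heavy_setup();

		/* Sleeping drops the BKL across the schedule and re-takes it
		 * on wakeup, which is why the actual IO can still overlap. */
		if (!err)
			err = wait_for_rpc_reply();

		unlock_kernel();
		return err;
	}

So the IO itself can go in parallel just fine, but every bit of CPU work 
in the client path gets serialized behind that one lock, which is exactly 
the kind of system-time blowup you're seeing.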

			Linus
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
