Re: [rfc][patch] store-free path walking

On Wed, Oct 07, 2009 at 11:56:57AM +0200, Jens Axboe wrote:
> On Wed, Oct 07 2009, Nick Piggin wrote:
> > Anyway, this is the basics working for now, microbenchmark shows
> > same-cwd lookups scale linearly now too. We can probably slowly
> > tackle more cases if they come up as being important, simply by
> > auditing filesystems etc.
> 
>                                 throughput
> ------------------------------------------------
> 2.6.32-rc3-git          |      561.218 MB/sec
> 2.6.32-rc3-git+patch    |      627.022 MB/sec
> 2.6.32-rc3-git+patch+inc|      969.761 MB/sec
> 
> So better, quite a bit too. Latencies are not listed here, but they are
> also a lot better. Perf top still shows ~95% spinlock time. I did a
> shorter run (the above are full 600 second runs) of 60s with profiling
> and the full 64 clients, this time using -a as well (which generated
> 9.4GB of trace data!). The top is now:

Hey Jens,

Try changing the 'statvfs' syscall in dbench to 'statfs'.
glibc has to do some nasty stuff parsing /proc/mounts to
make statvfs work. On my 2s8c opteron it goes like this:
clients  |  vanilla kernel (MB/s)  |  vfs scale (MB/s)
------------------------------------------------------
   1     |          476            |        447
   2     |         1092            |       1128
   4     |         2027            |       2260
   8     |         2398            |       4200

Single-threaded performance isn't as good, so I need to look
into the reasons for that :(. But it's practically linearly
scalable now. The dropoff at 8 is, I'd say, probably due to
the memory controllers running out of steam rather than
cacheline or lock contention.

Unfortunately this POSIX API wasn't simply implemented
in-kernel, and statfs is Linux-specific. But I think we have
some spare room in the statfs structure to pass back mount
flags for statvfs.

Tridge, Samba people: measuring vfs performance with dbench
in my effort to improve Linux vfs scalability has shown the
statvfs syscall you make to be the final problematic issue
for this workload; in particular, the reading of /proc/mounts
that glibc does to implement it. We could add complexity to
the kernel to try to improve it, or we could extend the
statfs syscall so glibc can avoid the issue (requiring a
glibc upgrade). But first I would like to know whether samba
really uses statvfs() significantly?

Thanks,
Nick
--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
