On 05/03/2014 15:34, Guang Yang wrote:
> Hello all,
Hello,
> Recently I have been working on Ceph performance analysis on our
> cluster; our OSD hardware looks like:
> 11 SATA disks, 4 TB each, 7200 RPM
> 48 GB RAM
> When breaking down the latency, we found that half of it (the average
> latency is around 60 milliseconds via radosgw) comes from file lookup
> and open (there could be a couple of disk seeks there). When looking at
> the file system cache (slabtop), we found that around 5M dentries/inodes
> are cached; however, the host has around 110 million files (and
> directories) in total.
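As an aside, a quick one-shot way to compare the cached dentry/inode count
against the on-disk file count is to dump the slab counters directly; just a
sketch, assuming XFS (hence the xfs_inode cache) and root access to
/proc/slabinfo, and the exact output fields vary a bit between kernels:

  # one-shot slab dump sorted by cache size; dentry and xfs_inode are the
  # caches that matter for lookup/open latency on an XFS OSD
  slabtop -o -s c | grep -E 'dentry|xfs_inode'
  # kernel-wide dentry counters (nr_dentry, nr_unused, age_limit, ...)
  cat /proc/sys/fs/dentry-state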
> I am wondering if there is any good experience within the community
> tuning for the same workload, e.g. changing the inode size, or using the
> mkfs.xfs -n size=64k option [1]?
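For context, both of those knobs are set at mkfs time; a minimal sketch with
a hypothetical device name and illustrative values (this wipes the disk):

  # hypothetical example: 64k directory blocks and 2k inodes on an OSD disk
  # (destroys all data on /dev/sdX1)
  mkfs.xfs -f -n size=64k -i size=2048 /dev/sdX1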
Beware, this particular option (-n size=64k) can trigger weird behaviour; see
Ceph bug #6301 and
http://oss.sgi.com/archives/xfs/2013-12/msg00087.html
Looking at the logs in the kernel git repo, AFAICS the patch has only been
integrated in 3.14-rc1 and has not been backported (commit
b3f03bac8132207a20286d5602eda64500c19724).
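If you want to check whether a given kernel already carries that fix, a quick
sketch, assuming you have a clone of the mainline kernel tree at hand:

  # kernel currently running on the OSD host
  uname -r
  # earliest tag containing the fix (per the log above, 3.14-rc1)
  git describe --contains b3f03bac8132207a20286d5602eda64500c19724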
Cheers,