XFS tuning on OSD

Hello all,
Recently I have been working on Ceph performance analysis on our cluster; our OSD hardware looks like:
  11 SATA disks, 4 TB each, 7200 RPM
  48 GB RAM

When we break down the latency, we found that half of it (average latency is around 60 milliseconds via radosgw) comes from file lookup and open (there could be a couple of disk seeks there). Looking at the file system cache (via slabtop), we found that around 5 million dentries/inodes are cached, while the host has around 110 million files (and directories) in total.
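
For reference, this is roughly how I read those numbers (just a sketch; the exact slab names can vary with the kernel version):

    # dentry and XFS inode object counts in the slab cache
    grep -E 'dentry|xfs_inode' /proc/slabinfo
    slabtop -o | grep -E 'dentry|xfs_inode'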

I am wondering if there is any good experience within the community tuning for the same kind of workload, e.g. changing the inode size, or using the mkfs.xfs -n size=64k option [1]?
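
To be concrete, what I have in mind is something like the following (just a sketch we have not benchmarked yet; the device path and the mount point are placeholders, and the 2048-byte inode size is only a guess aimed at keeping the Ceph xattrs inline):

    # larger directory block size and larger inodes at mkfs time
    mkfs.xfs -f -n size=64k -i size=2048 /dev/sdX
    # typical OSD mount options
    mount -o noatime,inode64 /dev/sdX /var/lib/ceph/osd/ceph-N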


Thanks,
Guang
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
