Re: XFS tuning on OSD

Hi,

We experienced something similar with our OpenStack Swift setup.
You can change the sysctl "vm.vfs_cache_pressure" to make sure more inodes are kept in cache.
(Do not set this to 0, because you will trigger the OOM killer at some point ;)
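For example (a sketch only; the default is 100, the value 10 below is just illustrative, and the right setting depends on your workload and memory headroom):

    # check the current value (default is 100)
    sysctl vm.vfs_cache_pressure

    # prefer keeping dentries/inodes in cache (runtime change)
    sysctl -w vm.vfs_cache_pressure=10

    # make it persistent across reboots
    echo "vm.vfs_cache_pressure = 10" >> /etc/sysctl.conf

    # afterwards, check how many dentries/XFS inodes are actually cached
    slabtop -o | grep -E 'dentry|xfs_inode'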

We also decided to go for nodes with more memory and smaller disks.
You can read about our experiences here:
http://engineering.spilgames.com/openstack-swift-lots-small-files/

Cheers,
Robert

> From: ceph-users-bounces@xxxxxxxxxxxxxx [ceph-users-bounces@xxxxxxxxxxxxxx] on behalf of Guang Yang [yguang11@xxxxxxxxx]
>Hello all,
> Recently I have been working on Ceph performance analysis on our cluster; our OSD hardware looks like:
> 11 SATA disks, 4 TB each, 7200 RPM
> 48 GB RAM
>
> When breaking down the latency, we found that half of it (the average latency is around 60 milliseconds via radosgw) comes from file lookup and open
> (there could be a couple of disk seeks there). When looking at the file system cache (slabtop), we found
> that around 5M dentries / inodes are cached; however, the host has around 110 million files (and directories) in total.
>
> I am wondering if there is any good experience within the community tuning for the same workload, e.g. changing the inode size, or using the mkfs.xfs -n size=64k option [1]?
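
Regarding the options mentioned in the question above, this is roughly what applying them would look like (a sketch only; mkfs destroys existing data, /dev/sdX1 and the mount point are placeholders, and the values should be validated against your own workload):

    # larger inodes so more xattrs fit inline, plus a 64k directory block size
    mkfs.xfs -f -i size=2048 -n size=64k /dev/sdX1

    # after mounting, verify the resulting geometry
    xfs_info /path/to/osd/mountpoint

Note that changing these options requires reformatting (and therefore rebuilding) each OSD.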

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



