Re: IO wait high on XFS

We run Ceph on a 3-server Debian Proxmox cluster, with 4 x 4 TB disks per server in a shared Ceph cluster.  The I/O wait is much too high (around 9%).  The default file system is XFS, and I found a suggestion that adding this line

osd mount options xfs = rw,noatime,inode64

to the [global] section of our ceph.conf should give a significant performance improvement.  Should we have any concerns about adding this to our live ceph.conf file?  Our current config file is below; all suggestions welcome.

[global]
	 auth client required = cephx
	 auth cluster required = cephx
	 auth service required = cephx
	 cluster network = 10.10.10.0/24
	 filestore xattr use omap = true
	 fsid = a1ee9e98-3b8d-4929-816d-ed15576efaa9
	 keyring = /etc/pve/priv/$cluster.$name.keyring
	 osd journal size = 20480
	 osd pool default min size = 1
	 public network = 10.10.10.0/24

mon_pg_warn_max_per_osd = 0

osd_op_threads = 5
osd_op_num_threads_per_shard = 1
osd_op_num_shards = 25
#osd_op_num_sharded_pool_threads = 25
filestore_op_threads = 4

ms_nocrc = true
filestore_fd_cache_size = 64
filestore_fd_cache_shards = 32

[client]
#rbd cache = true
rbd cache size = 67108864 # (64MB)
rbd cache max dirty = 50331648 # (48MB)
rbd cache target dirty = 33554432 # (32MB)
rbd cache max dirty age = 2
rbd cache writethrough until flush = true

[osd]
	 keyring = /var/lib/ceph/osd/ceph-$id/keyring
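
For reference, this is roughly how the [global] section would look with the suggested line added (just a sketch of the change; the position of the option within the section should not matter).  If I understand correctly, the new mount options only take effect when an OSD's data partition is remounted, e.g. when that OSD is restarted:

[global]
	 auth client required = cephx
	 auth cluster required = cephx
	 auth service required = cephx
	 cluster network = 10.10.10.0/24
	 filestore xattr use omap = true
	 fsid = a1ee9e98-3b8d-4929-816d-ed15576efaa9
	 keyring = /etc/pve/priv/$cluster.$name.keyring
	 osd journal size = 20480
	 osd mount options xfs = rw,noatime,inode64
	 osd pool default min size = 1
	 public network = 10.10.10.0/24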

