Hi,
On 2016-05-10 05:48, Geocast wrote:
Hi members,
We have 21 hosts for Ceph OSD servers; each host has 12 SATA disks (4 TB
each) and 64 GB of memory.
ceph version 10.2.0, Ubuntu 16.04 LTS
The whole cluster is a fresh install.
Could you check whether the settings we put in ceph.conf below are
reasonable?
Thanks.
[osd]
osd_data = /var/lib/ceph/osd/ceph-$id
osd_journal_size = 20000
osd_mkfs_type = xfs
osd_mkfs_options_xfs = -f
filestore_xattr_use_omap = true
filestore_min_sync_interval = 10
filestore_max_sync_interval = 15
filestore_queue_max_ops = 25000
filestore_queue_max_bytes = 10485760
filestore_queue_committing_max_ops = 5000
filestore_queue_committing_max_bytes = 10485760000
journal_max_write_bytes = 1073714824
journal_max_write_entries = 10000
journal_queue_max_ops = 50000
journal_queue_max_bytes = 10485760000
osd_max_write_size = 512
osd_client_message_size_cap = 2147483648
osd_deep_scrub_stride = 131072
osd_op_threads = 8
osd_disk_threads = 4
osd_map_cache_size = 1024
osd_map_cache_bl_size = 128
osd_mount_options_xfs = "rw,noexec,nodev,noatime,nodiratime,nobarrier"
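A quick sanity check on the journal size first: the rule of thumb from the
docs is

  osd journal size = 2 * (expected throughput * filestore max sync interval)

Assuming roughly 150 MB/s per SATA disk (my guess, measure yours) and your
15 s max sync interval, that gives about 2 * 150 * 15 = 4500 MB, so
20000 MB is far more than the journal can ever use. Also note that
filestore_min_sync_interval = 10 is three orders of magnitude above the
default (0.01 s); letting that much data accumulate between syncs tends to
produce long write stalls when the flush finally happens.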
I use these settings (to avoid fragmentation):
osd mount options xfs = "rw,noatime,inode64,logbufs=8,logbsize=256k,allocsize=4M"
osd mkfs options xfs = "-f -i size=2048"
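Whether fragmentation is actually a problem can be checked read-only with
xfs_db (example assumes the OSD data partition is /dev/sdb1, adjust to
your layout):

  xfs_db -r -c frag /dev/sdb1

The allocsize=4M mount option matches the default 4 MB RADOS object size,
so the idea is that XFS allocates each object mostly contiguously instead
of in many small delayed-allocation extents.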
Udo