Thanks for a double check on Ceph's config

Hi members,

We have 21 hosts acting as Ceph OSD servers; each host has 12 SATA disks (4TB each) and 64GB of memory.
Ceph version 10.2.0 (Jewel), Ubuntu 16.04 LTS.
The whole cluster is a fresh installation.

Could you help check whether the settings we put in ceph.conf are reasonable?
Thanks.

[osd]
osd_data = /var/lib/ceph/osd/ceph-$id
osd_journal_size = 20000
osd_mkfs_type = xfs
osd_mkfs_options_xfs = -f
filestore_xattr_use_omap = true
filestore_min_sync_interval = 10
filestore_max_sync_interval = 15
filestore_queue_max_ops = 25000
filestore_queue_max_bytes = 10485760
filestore_queue_committing_max_ops = 5000
filestore_queue_committing_max_bytes = 10485760000
journal_max_write_bytes = 1073741824
journal_max_write_entries = 10000
journal_queue_max_ops = 50000
journal_queue_max_bytes = 10485760000
osd_max_write_size = 512
osd_client_message_size_cap = 2147483648
osd_deep_scrub_stride = 131072
osd_op_threads = 8
osd_disk_threads = 4
osd_map_cache_size = 1024
osd_map_cache_bl_size = 128
osd_mount_options_xfs = "rw,noexec,nodev,noatime,nodiratime,nobarrier"
osd_recovery_op_priority = 4
osd_recovery_max_active = 10
osd_max_backfills = 4
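
In case it is useful, this is how we have been checking which values a running OSD has actually picked up (just a minimal example; it assumes the default admin socket path and that osd.0 lives on the host where the command is run):

# dump every option the daemon is currently using
ceph daemon osd.0 config show

# or query individual options, e.g. the journal size and sync intervals
ceph daemon osd.0 config get osd_journal_size
ceph daemon osd.0 config get filestore_max_sync_interval

# limit the full dump to the filestore/journal options listed above
ceph daemon osd.0 config show | grep -E 'filestore|journal'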

[client]
rbd_cache = true
rbd_cache_size = 268435456
rbd_cache_max_dirty = 134217728
rbd_cache_max_dirty_age = 5
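
On the client side we sanity-check that these [client] overrides are being read by letting the ceph tool parse the same /etc/ceph/ceph.conf a client would use (again only a sketch, run from a client host; this shows the parsed config, not a live librbd instance):

# print the parsed value of a single option
ceph --show-config-value rbd_cache
ceph --show-config-value rbd_cache_size

# or dump the whole parsed config and filter for the cache settings
ceph --show-config | grep rbd_cache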

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
