Optimize Ceph cluster (kernel, osd, rbd)


 



Hi everyone,

I have 3 nodes (running MON and MDS)
and 6 data nodes (84 OSDs in total).
Each data node has the following configuration:
  - CPU: 24 cores, Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
  - RAM: 32GB
  - Disk: 14 * 4TB
(14 disks * 4TB * 6 data nodes = 84 OSDs)
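
For reference, the same totals as quick shell arithmetic (the per-OSD RAM figure simply divides node RAM by the number of disks and ignores what the OS itself needs):

    nodes=6; disks_per_node=14; disk_tb=4; ram_gb=32
    echo "OSDs total:  $(( nodes * disks_per_node ))"               # 84
    echo "Raw space:   $(( nodes * disks_per_node * disk_tb )) TB"  # 336 TB
    echo "RAM per OSD: ~$(( ram_gb / disks_per_node )) GB"          # ~2 GB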

To optimize the Ceph cluster, I adjusted some kernel block-device parameters (nr_requests, I/O scheduler, and read-ahead):

#Adjust nr_requests in the queue (requests held in memory - default is 128)
    echo 1024 > /sys/block/sdb/queue/nr_requests
#Switch the I/O scheduler to noop (default: noop deadline [cfq])
    echo noop > /sys/block/sda/queue/scheduler
#Increase read-ahead (default: 128)
    for f in /sys/block/*/queue/read_ahead_kb; do echo 512 > "$f"; done

And I tuned the Ceph configuration options below:

[client]
#Enable the RBD client-side writeback cache
 rbd cache = true
#512MB cache (536870912 bytes); start flushing at 32MB dirty (target),
#cap dirty data at 128MB (max), flush anything dirty for more than 5 seconds
 rbd cache size = 536870912
 rbd cache max dirty = 134217728
 rbd cache target dirty = 33554432
 rbd cache max dirty age = 5
[osd]
    osd data = "">     osd journal = /var/lib/ceph/osd/cloud-$id/journal
    osd journal size = 10000
    osd mkfs type = xfs
    osd mkfs options xfs = "-f -i size=2048"
    osd mount options xfs = "rw,noatime,inode64,logbsize=256k"

    keyring = /var/lib/ceph/osd/cloud-$id/keyring.osd.$id
#increasing the number may increase the request processing rate
    osd op threads = 24
#The number of disk threads, which are used to perform background disk intensive OSD operations such as scrubbing and snap trimming
    osd disk threads = 24
#The number of active recovery requests per OSD at one time. More requests will accelerate recovery, but they place an increased load on the cluster.
    osd recovery max active = 1
#Write directly to the journal (O_DIRECT)
    journal dio = true
#Allow use of libaio to do asynchronous writes to the journal
    journal aio = true
#Synchronization interval:
#The maximum/minimum interval in seconds for synchronizing the filestore.
    filestore max sync interval = 100
    filestore min sync interval = 50
#Defines the maximum number of in progress operations the file store accepts before blocking on queuing new operations.
    filestore queue max ops = 2000
#The maximum number of bytes for an operation
    filestore queue max bytes = 536870912
#The maximum number of operations the filestore can commit (default = 500)
    filestore queue committing max ops = 2000
#The maximum number of bytes the filestore can commit.
    filestore queue committing max bytes = 536870912
#When you add or remove Ceph OSD Daemons to a cluster, the CRUSH algorithm will want to
#rebalance the cluster by moving placement groups to or from Ceph OSD Daemons to restore
#the balance. The process of migrating placement groups and the objects they contain can
#reduce the cluster's operational performance considerably. To maintain operational
#performance, Ceph performs this migration with 'backfilling', which allows Ceph to set
#backfill operations to a lower priority than requests to read or write data.
    osd max backfills = 1
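
Once the cluster is running, it may be worth checking that the OSDs actually picked up these values (the running daemons report option names with underscores); osd.0 and the options grepped for below are just examples:

#Show the values a running OSD is actually using (run on the node hosting osd.0)
    ceph daemon osd.0 config show | egrep 'osd_op_threads|filestore_max_sync_interval|osd_max_backfills'
#Most of these options can also be changed at runtime without restarting the OSDs
    ceph tell osd.* injectargs '--osd_max_backfills 1'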


Tomorrow, I'm going to deploy the Ceph cluster.
I have very little experience in managing Ceph, so I hope someone can give me advice on the settings above and guide me on how best to optimize the cluster.

Thank you so much!
--tuantaba



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
