CephFS poor performance

Hi,

Can someone please help me improve the performance of our CephFS cluster?

The system in use is CentOS 7.5 with Ceph 12.2.7.
The hardware is as follows:
3xMON/MGR:
1xIntel(R) Xeon(R) Bronze 3106
16GB RAM
2xSSD for system
1GbE NIC

2xMDS:
2xIntel(R) Xeon(R) Bronze 3106
64GB RAM
2xSSD for system
10GbE NIC

6xOSD:
1xIntel(R) Xeon(R) Silver 4108
2xSSD for system
6xHGST HUS726060ALE610 SATA HDDs
1xIntel SSDSC2BB150G7 for the OSD DBs (10G partitions); the rest of the SSD is an OSD holding cephfs_metadata
10GbE NIC

Pools (default CRUSH rules are device-class aware):
rbd with 1024 PGs, crush rule replicated_hdd
cephfs_data with 256 PGs, crush rule replicated_hdd
cephfs_metadata with 32 PGs, crush rule replicated_ssd
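For reference, the PG counts above roughly follow the common pre-autoscaler rule of thumb (about 100 PGs per OSD, divided by the replica count, rounded to the nearest power of two, then split across pools by expected data share). A minimal sketch of that arithmetic, assuming 36 HDD OSDs (6 hosts x 6 disks) and size=3 pools:

```shell
# Rule-of-thumb PG sizing for pre-autoscaler (Luminous-era) clusters:
# total PGs ~= (OSDs * 100) / replicas, rounded to the nearest power of two,
# then split across pools by their expected share of the data.
pg_target() {
  local osds=$1 replicas=$2
  local raw=$(( osds * 100 / replicas ))
  local hi=1
  while [ "$hi" -lt "$raw" ]; do hi=$(( hi * 2 )); done
  local lo=$(( hi / 2 ))
  # pick whichever power of two is closer to the raw estimate
  if [ $(( hi - raw )) -lt $(( raw - lo )) ]; then echo "$hi"; else echo "$lo"; fi
}

pg_target 36 3   # 36 HDD OSDs, 3 replicas -> prints 1024
```

So ~1024 PGs total across the HDD pools is in the expected range for this hardware.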

Test done with fio: fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 --size=1G --readwrite=randrw --rwmixread=75

shows IOPS write/read performance as follows:
rbd 3663/1223
cephfs (FUSE) 205/68 (which is a little lower than the raw performance of a single HDD used in the cluster)
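For comparison, the same fio job can be pointed at a kernel-client mount; ceph-fuse is typically much slower than the in-kernel CephFS client for small random I/O, so this is a quick sanity check. A sketch, where the monitor host name and secret-file path are placeholders for this cluster:

```shell
# Mount CephFS with the in-kernel client instead of ceph-fuse.
# "mon1" and /etc/ceph/admin.secret are placeholders.
mkdir -p /mnt/cephfs-kernel
mount -t ceph mon1:6789:/ /mnt/cephfs-kernel \
    -o name=admin,secretfile=/etc/ceph/admin.secret

# Re-run the exact same fio job on the kernel mount for comparison.
cd /mnt/cephfs-kernel
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
    --name=test --filename=random_read_write.fio --bs=4k --iodepth=64 \
    --size=1G --readwrite=randrw --rwmixread=75
```

This only isolates the client side; it needs a live cluster and admin keyring to run.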

Everything is connected to a single Cisco 10GbE switch.
Please help.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
