Optimizing CephFS throughput on an HDD pool

Hey,

I'm running a new Ceph 13 (Mimic) cluster with a single CephFS on a 6+3 erasure-coded pool. Each OSD is a 10 TB HDD, 20 in total, each on its own host. I'm storing mostly large files (~20 GB). The configuration is mostly stock, except that I've tuned for the low-memory (2 GB) hosts based on an old thread's recommendations.
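
For context, here is my rough understanding of what that layout means per object written, assuming the default 4 MiB CephFS object size and my k=6/m=3 profile (please correct me if I have this wrong):

# Back-of-the-envelope layout numbers, assuming defaults; nothing here is measured.
K, M = 6, 3                        # erasure-code data / coding chunks
OSDS, OSD_SIZE_TB = 20, 10

raw_tb = OSDS * OSD_SIZE_TB
usable_tb = raw_tb * K / (K + M)   # ~133 TB usable before any overhead

object_size = 4 * 1024 * 1024      # default CephFS object size (4 MiB)
chunk_kib = object_size // K // 1024   # ~683 KiB written to each of the K+M OSDs

print(f"usable ~{usable_tb:.0f} TB, {K + M} OSD writes of ~{chunk_kib} KiB per 4 MiB object")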

I'm trying to fill it and test various failure scenarios, and by far my biggest bottleneck is IOPS, for both writing and recovery. I'm guessing this is because each write hits the journal as well as the block store (I'm seeing roughly 30 MiB/s at 100 IOPS). SSDs for the journal are not an option.
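
For what it's worth, here is my rough arithmetic on those figures; the 2x journal factor below is just my assumption about a co-located journal, not something I've measured:

observed_mib_s = 30.0
observed_iops = 100
kib_per_io = observed_mib_s * 1024 / observed_iops   # ~307 KiB per I/O

ec_chunk_kib = 4 * 1024 / 6          # ~683 KiB data chunk (k=6, 4 MiB object), assuming defaults
journal_write_factor = 2             # my guess: journal write + block write on the same HDD

print(f"~{kib_per_io:.0f} KiB/IO observed; ~{ec_chunk_kib:.0f} KiB per EC chunk; "
      f"~{ec_chunk_kib / journal_write_factor:.0f} KiB/IO of payload if each chunk costs two writes")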

Am I correct in saying that the only thing I can really reduce or influence is the IOPS-per-MiB of the block write? And is the right way to do that to increase the stripe_unit, say by 3x, to get closer to 100 MiB/s per OSD?
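
If it helps to be concrete, my plan would be to raise the layout on the target directory before writing any new files, roughly like this (the path and sizes are placeholders, and I believe the client cap needs the 'p' flag to change layouts):

import os

# Bump the stripe_unit to 3x the default on a CephFS directory via the layout vxattrs.
# Only files created after this inherit the new layout; existing files keep theirs.
target_dir = "/mnt/cephfs/bigfiles"      # placeholder path on my CephFS mount

stripe_unit = 12 * 1024 * 1024           # 12 MiB, 3x the 4 MiB default
object_size = 12 * 1024 * 1024           # as I understand it, object_size must be >= stripe_unit

# Set object_size first so the larger stripe_unit is accepted.
os.setxattr(target_dir, "ceph.dir.layout.object_size", str(object_size).encode())
os.setxattr(target_dir, "ceph.dir.layout.stripe_unit", str(stripe_unit).encode())

# Read the resulting layout back as a sanity check.
print(os.getxattr(target_dir, "ceph.dir.layout").decode())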

Daniel
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
