CephFS metadata pool to SSDs

I found an older ML entry from 2015 and not much else; it mostly detailed performance testing done to dispel the poor performance numbers presented by the OP.

I currently have the metadata pool on my 24 slow HDDs, and am curious whether I would see any increased CephFS performance by moving the metadata pool onto SSDs.
My thinking is that the SSDs are lower latency, and the move takes those metadata IOPS off the slower spinning disks.
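
For context, the current metadata load can be read straight off the cluster (a sketch, assuming the stock ceph CLI and my pool name fs-metadata):

# ceph osd pool stats fs-metadata

which reports the client read/write rates currently landing on those spinners for that pool.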

My next concern would be write amplification on the SSDs. Would the constant stream of small metadata writes thrash the SSD lifespan, or is the workload light enough not to matter much?
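
If it did matter, I suppose wear is measurable after the fact too (hedged; the endurance attribute names vary by SSD vendor, and /dev/sdX here is just a placeholder):

# smartctl -a /dev/sdX

Watching the vendor's wear/endurance attribute over time would show whether the metadata workload is actually eating into the drives.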

My last question is from the operations standpoint. If I use:
# ceph osd pool set fs-metadata crush_ruleset <ssd ruleset>
will this just start to backfill the metadata pool over to the SSDs until it satisfies the CRUSH requirements for size and failure domain, without skipping a beat?
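
For completeness, the rough sequence I have in mind (a sketch only; this assumes a newer release with CRUSH device classes, where the setting was renamed to crush_rule; on older releases it stays crush_ruleset and the SSD rule has to point at a separate CRUSH hierarchy):

# ceph osd crush rule create-replicated ssd-rule default host ssd
# ceph osd pool set fs-metadata crush_rule ssd-rule
# ceph -s

with ceph -s (or ceph -w) to watch the backfill afterwards.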

Obviously things like enabling dirfrags and multiple MDS ranks are more likely to improve CephFS performance, but the metadata pool uses very little space, and I already have the SSDs, so I figured I would explore this as an option.
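
For reference, the knobs I mean there (hedged: this assumes a filesystem named cephfs and a release where these are still off by default; some versions also want an explicit confirmation flag on the allow_* settings, and newer releases enable dirfrags out of the box):

# ceph fs set cephfs allow_dirfrags true
# ceph fs set cephfs allow_multimds true
# ceph fs set cephfs max_mds 2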

Thanks,

Reed
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
