Re: CephFS metadata pool to SSDs

John covered everything better than I was going to, so I'll just remove that from my reply.

If you aren't using DC (datacenter-grade) SSDs and this is production, then I wouldn't recommend moving to this model. However, you are correct about how to move the pool from the HDDs to the SSDs, and given how simple and quick the move can be on a healthy cluster, you can always let it run for a few weeks and see how it affects the durability of your SSDs before deciding to keep it or go back to your current setup.
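If you do try it, one cheap way to quantify the effect is to snapshot the drives' SMART wear counters before the move and again a few weeks later. A rough sketch, assuming smartmontools is installed and /dev/sdX is one of the SSDs; the relevant attribute name varies by vendor (e.g. Wear_Leveling_Count, Media_Wearout_Indicator, Total_LBAs_Written):

# smartctl -A /dev/sdX | grep -iE 'wear|lifetime|written'

Comparing something like Total_LBAs_Written across the trial period gives you a concrete number to base the keep-or-revert decision on.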

On Thu, Oct 12, 2017 at 4:43 PM Reed Dier <reed.dier@xxxxxxxxxxx> wrote:
I found an older ML entry from 2015 and not much else, mostly detailing performance testing done to dispel the poor performance numbers presented by the OP.

I currently have the metadata pool on my 24 slow HDDs, and am curious whether I should see any increased CephFS performance by moving the metadata pool onto SSDs.
My thought is that the SSDs are lower latency, and the move would take those IOPS off the slower spinning disks.
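For reference, you can confirm which rule the pool is using today and how small the metadata footprint actually is. A sketch, assuming the pool is named fs-metadata as below (on Luminous and later the setting is called crush_rule rather than crush_ruleset):

# ceph osd pool get fs-metadata crush_ruleset
# ceph df detail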

My next concern would be write amplification on the SSDs. Would this thrash the SSDs' lifespan with tons of little writes, or should the workload be light enough not to matter much?
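Before worrying about wear, it may be worth measuring how much write traffic the metadata pool actually sees. A sketch using the same hypothetical pool name; ceph osd pool stats reports live client I/O rates for the pool, and rados df shows cumulative per-pool read/write op counts:

# ceph osd pool stats fs-metadata
# rados df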

My last question from the operations standpoint, if I use:
# ceph osd pool set fs-metadata crush_ruleset <ssd ruleset>
Will this just start backfilling the metadata pool over to the SSDs until it satisfies the CRUSH requirements for size and failure domain, without skipping a beat?
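For illustration, the whole move can be as short as the sequence below. This is a sketch, assuming an SSD-only CRUSH root named ssd (not something from this thread) already exists in the map, and using the pre-Luminous crush_ruleset setting from the question; on Luminous and later the setting is crush_rule and takes the rule name directly:

# ceph osd crush rule create-simple ssd-metadata ssd host
# ceph osd crush rule dump ssd-metadata        (note the ruleset id)
# ceph osd pool set fs-metadata crush_ruleset <ruleset id>
# ceph -s                                      (watch the PGs backfill to the SSDs)

The pool stays online while the PGs backfill to their new placement, so clients should not notice anything beyond the extra recovery traffic.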

Obviously things like enabling dirfrags and multiple MDS ranks are more likely to improve CephFS performance, but the metadata pool uses very little space, and I already have the SSDs, so I figured I would explore this as an option.
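For completeness, on the Jewel/Kraken releases current at the time, those two features were switched on roughly like this. A sketch only; cephfs is a placeholder filesystem name, and both allow_* flags were later retired once the features became stable defaults:

# ceph fs set cephfs allow_dirfrags true
# ceph fs set cephfs allow_multimds true
# ceph fs set cephfs max_mds 2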

Thanks,

Reed
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
