default data pools for cephfs: replicated vs. ec

hi there,

recently, we've come across a lot of advice to use only replicated rados
pools as default (i.e. root) data pools for cephfs¹.
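
for context, the recommended layout at creation time would look something
like this -- pool names and pg counts are made up, and the data pool given
to "ceph fs new" becomes the default one:

ceph osd pool create cephfs_metadata 32 replicated
ceph osd pool create cephfs_data 128 replicated
ceph fs new somefs cephfs_metadata cephfs_data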

unfortunately, we either skipped or blatantly ignored this advice while
creating our cephfs, so our default data pool is an erasure-coded one
with k=2 and m=4, which _should_ be fine availability-wise. could anyone
elaborate on the performance impact this has on the whole setup?
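
for reference, this is roughly how such a setup can be inspected (pool
and profile names are placeholders; afaict the first data pool listed is
the default one):

ceph fs ls
ceph osd pool get cephfs_data erasure_code_profile
ceph osd erasure-code-profile get k2m4_profile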

if a migration to a replicated pool is recommended: would a simple

ceph osd pool set $default_data crush_rule $something_replicated

suffice, or would you recommend a more elaborate approach, something
along the lines of taking the cephfs down, copying the contents of
default_pool to default_new, renaming default_new to default_pool, and
bringing the cephfs back up again?
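
in rough and completely untested commands, that would be something like
(the fs name is a placeholder):

ceph fs fail somefs                               # take the cephfs down
ceph osd pool create default_new 128 replicated
rados cppool default_pool default_new             # copy the objects over
ceph osd pool rename default_pool default_old
ceph osd pool rename default_new default_pool
ceph fs set somefs joinable true                  # bring the cephfs up again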

thank you very much & with kind regards,
t.

¹ - see, for instance, https://tracker.ceph.com/issues/42450 .
