change of pool size

Dear all,

We are running a small cluster with about 80 TB of storage that has evolved over time, growing in complexity and in its number of users. In the beginning we started operating the cluster without any replicas (pool size 1), but given the size of the project we now want to offer redundancy in case of failures.

Can anyone offer insights into the procedure for increasing the number of replicas? From what I gather so far, I imagine it goes something like this:

1. Make sure more than half of the raw disk space is free, since every object will need a second copy

2. ceph osd pool set {poolname} size 2 (if we want two copies of each object, i.e. one replica of the original), and we do so for each pool (data and metadata); see the command sketch after this list
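
For concreteness, this is roughly what I expect to run. The pool names "cephfs_data" and "cephfs_metadata" are only placeholders for whatever our actual data and metadata pools are called:

    # list the pools and check the current replication setting
    ceph osd lspools
    ceph osd pool get cephfs_data size
    ceph osd pool get cephfs_metadata size

    # raise the replica count to two copies per object
    ceph osd pool set cephfs_data size 2
    ceph osd pool set cephfs_metadata size 2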

Can somebody confirm that this is how it works? Is it as simple as changing it on the existing filesystem as described above, or do we need to start from scratch? Is there anything we need to watch out for? Will Ceph then just start moving things around as needed, similar to the normal rebalancing procedures?
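
I assume I would then just watch the recovery with the usual status commands, e.g.:

    ceph -s        # overall health and recovery/backfill progress
    ceph osd df    # per-OSD utilisation while the extra copies are written
    ceph pg stat   # placement group states (everything active+clean when done)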

Thank you for any insights,

Florian (a newbie admin trying to improve his ceph knowledge)

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


