Re: change of pool size

Remember also that this will result in a LOT of replication traffic, so you may wish to throttle down the backfill/recovery knobs to reduce the impact on clients.
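The exact knob names, and whether "ceph config set" is available at all, depend on the release (older clusters would inject the same settings with "ceph tell osd.* injectargs" instead), but as a rough sketch on anything reasonably recent:

    # keep recovery/backfill gentle so client I/O keeps priority
    ceph config set osd osd_max_backfills 1
    ceph config set osd osd_recovery_max_active 1
    ceph config set osd osd_recovery_sleep 0.1

and relax them again once the data movement has finished.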
Which release are you running?
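As for the change itself, it really is just the one command per pool that you describe. Roughly (the pool names below are only examples, substitute your own; "ceph osd pool ls" shows what you actually have):

    ceph df                                   # check you have the free capacity first
    ceph osd pool ls                          # list the pools to change
    ceph osd pool set cephfs_data size 2
    ceph osd pool set cephfs_metadata size 2
    ceph -s                                   # watch the backfill progress afterwards

Ceph then copies the data in the background, much like a normal rebalance.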

> You are correct: just change the size property of each pool. Ceph will
> take care of all the necessary operations on its own; the only thing to
> worry about is the amount of free storage.
> 
> On Fri, 11 Nov 2022 at 10:50, Florian Jonas <florian.jonas@xxxxxxx> wrote:
> 
>> Dear all,
>> 
>> we are running a small cluster with about 80 TB of storage, which has
>> evolved over time and grown in complexity and in its number of users. In
>> the beginning we started operating the cluster without any replicas
>> (pool size 1), but due to the size of the project we now wish to offer
>> redundancy in case of failures.
>> 
>> Can anyone offer insights into the procedure for increasing the number of
>> replicas? From what I gather so far, I imagine it goes something like
>> this:
>> 
>> 1. Make sure more than half of the disk space is free
>> 
>> 2. ceph osd pool set {poolname} size 2 (if we want one extra replica,
>> i.e. two copies in total), and do so for each pool (data and metadata)
>> 
>> Can somebody confirm that this is how it works? Is it as simple as
>> changing it on the existing filesystem as described above, or do we need
>> to start from scratch? Is there anything we need to watch out for? Will
>> Ceph then just start moving things around as needed, similar to the
>> normal rebalancing procedures?
>> 
>> Thank you for any insights,
>> 
>> Florian (a newbie admin trying to improve his ceph knowledge)
>> 

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx