Re: Changing failure domain

Thanks.
For replicated pools, what is the best way to change the CRUSH profile? Is it to create a new replicated rule and set it as the crush rule for the pool (something like ceph osd pool set {pool-name} crush_rule my_new_rule)?
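Something like the following is what I have in mind (rule, pool and device-class names are placeholders for my setup; note that since Luminous the pool property is called crush_rule, the older crush_ruleset name is gone):

```shell
# Create a new replicated rule; arguments are <name> <root> <failure-domain> [<class>]
# (names here are placeholders -- adjust to your CRUSH tree):
ceph osd crush rule create-replicated my_new_rule default host hdd

# Point the existing pool at the new rule; data rebalances accordingly:
ceph osd pool set mypool crush_rule my_new_rule
```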

For erasure coding, I would thus have to change the profile to at least k=3, m=3 (for now I only have 6 OSD servers). But if I understand correctly, the profile of an existing pool cannot be changed, so I would have to create a new pool and migrate all the data from the current one to the new one. Is that correct?
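This is roughly the migration I imagine (profile, pool names and PG counts are placeholders; rados cppool is a naive object copy with known limitations, e.g. it does not preserve snapshots, so treat this only as a sketch and stop clients first):

```shell
# New EC profile: k=3, m=3 means any 3 of the 6 shards can reconstruct the data,
# so losing 3 shards (one of two rooms) is survivable:
ceph osd erasure-code-profile set ec33 k=3 m=3 crush-failure-domain=host

# Create the replacement pool with that profile:
ceph osd pool create mypool_ec33 64 64 erasure ec33

# Naive object-by-object copy from the old pool:
rados cppool mypool_old mypool_ec33
```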

F.


On 28/11/2019 at 17:51, Paul Emmerich wrote:
Use a crush rule like this for replica:

1) root default class XXX
2) choose 2 rooms
3) choose 2 disks

That'll get you 4 OSDs in two rooms; the first 3 of these get data and
the fourth is ignored. That guarantees that losing a room loses you at
most 2 out of 3 copies. This is for disaster recovery only: it
guarantees durability if you lose a room, but not availability.
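In decompiled CRUSH map syntax, a rule implementing those three steps might look like this (the rule id, device class and bucket type names are assumptions; chooseleaf with type host picks one OSD under each of 2 distinct hosts, giving 2 OSDs per room):

```
rule replicated_two_rooms {
    id 10                               # assumed unused rule id
    type replicated
    step take default class hdd         # 1) root default class XXX
    step choose firstn 2 type room      # 2) choose 2 rooms
    step chooseleaf firstn 2 type host  # 3) choose 2 disks (one per host)
    step emit
}
```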

3+2 erasure coding cannot be split across two rooms in this way
because you need 3 out of 5 shards to survive, so you cannot afford to
lose half of them.

Paul

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



