Re: pool min_size

Hi,

If you value your data, you shouldn't set it to k (in your case 6). The docs [1] are pretty clear about that:

 min_size
Sets the minimum number of replicas required for I/O. See Set the Number of Object Replicas for further details. In the case of Erasure Coded pools this should be set to a value greater than ‘k’ since if we allow IO at the value ‘k’ there is no redundancy and data will be lost in the event of a permanent OSD failure.
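
If you want to check or adjust the value, the standard commands are (using "ecpool" as a placeholder for your pool name):

  ceph osd pool get ecpool min_size
  ceph osd pool set ecpool min_size 7

With a 6+2 profile, min_size 7 still allows I/O with one chunk of a pg unavailable, so you keep one chunk of redundancy; at 6, any further permanent OSD failure means data loss.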

Regards,
Eugen

[1] https://docs.ceph.com/en/latest/rados/operations/pools/#set-pool-values

Quoting Christopher Durham <caduceus42@xxxxxxx>:

Hello,
I have an ec pool set up as 6+2. I have noticed that when rebooting servers during system upgrades, I get pgs set to inactive while the osds are down. I then discovered that min_size for the pool is set to 7, which explains it: if I reboot two servers that both host OSDs for a pg, only 6 of its OSDs are available during the reboot cycle, and with a min_size of 7 the pg goes inactive. Is it ok for me to set min_size on the pool to 6 to avoid the inactive problem? I know I could do my reboots sequentially to eliminate multiple-server downtime, but I wanted to be sure that min_size 6 is ok. I know this may increase other risks, but I wanted to know if this min_size change is an option, albeit a riskier one. Thanks.

-Chris
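
PS: to see which pgs go inactive while the OSDs are down, the standard commands are enough (output will of course differ per cluster):

  ceph health detail
  ceph pg dump_stuck inactive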



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



