Re: what's the minimum m to keep cluster functioning when 2 OSDs are down?

Hi,

> Will k=4 and m=2 achieve what I want? I know there's no data loss with m=2
> when 2 OSDs are down, but is the cluster still functioning/writable in that case?

The default min_size for EC pools is k+1, so no: with k=4 and m=2 that means min_size = 5, and with 2 of the 6 shards unavailable your pool would become inactive.
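For reference, you can verify this on your own cluster; a minimal sketch, assuming placeholder names "ec42" for the profile and "ecpool" for the pool:

  # create a 4+2 profile and an EC pool from it
  ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
  ceph osd pool create ecpool erasure ec42
  # prints "min_size: 5", i.e. k+1 -- losing 2 of 6 shards makes PGs inactive
  ceph osd pool get ecpool min_size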

> Or is m=3 required to keep the cluster functioning while 2 OSDs are down?

I would recommend that, yes. You could decrease min_size to k with m=2, of course, but that's a safety risk and should only be considered as a last resort.
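If you do end up in that situation, it's a single pool setting (again assuming the placeholder pool name "ecpool"); just remember to revert it once recovery finishes:

  # emergency only: allow writes with just k=4 shards available
  ceph osd pool set ecpool min_size 4
  # restore the safer default of k+1 afterwards
  ceph osd pool set ecpool min_size 5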

> In case m=3, what would be a reasonable k?
> An overhead of 1.5 or 1.6 is efficient enough for me. So, k=5 or k=6?

I would go with k=6; you get a little more storage efficiency out of it compared to k=5/m=3.
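To spell out the arithmetic: the raw-to-usable overhead of an EC pool is (k+m)/k, so

  k=5, m=3  ->  8/5 = 1.6
  k=6, m=3  ->  9/6 = 1.5

And with m=3 and the default min_size = k+1, two OSDs down still leaves one shard of headroom, so the pool stays active and writable.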

Regards,
Eugen

Quoting Tony Liu <tonyliu0592@xxxxxxxxxxx>:

Hi,

With Reef, I'd like to keep the cluster functioning while 2 OSDs are down.
The cluster has more than 10 nodes, each of which has 64x 2.5GHz threads,
256GB memory, 24x NVMe drives, and 2x 25Gb networking.

Will k=4 and m=2 achieve what I want? I know there's no data loss with m=2
when 2 OSDs are down, but is the cluster still functioning/writable in that case?
Or is m=3 required to keep the cluster functioning while 2 OSDs are down?

In case m=3, what would be a reasonable k?
An overhead of 1.5 or 1.6 is efficient enough for me. So, k=5 or k=6?

Any comments are welcome.


Thanks!
Tony
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx

