Unhappy Cluster

Hi Everyone,
I've been fighting with a Ceph cluster that we recently relocated
physically. We lost 2 OSDs during the power-down and move, and after
powering everything back on we have:
             3   incomplete
             3   remapped+incomplete
And indeed, those are the 2 OSDs that died along the way.
The reason I'm contacting the list is that I'm surprised these PGs are
incomplete. We're running erasure coding with K=4, M=2, which in my
understanding means we should be able to lose 2 OSDs without an issue.
Am I misunderstanding this, or does M=2 mean you can only lose M-1 OSDs?
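
Here's the arithmetic as I understand it, as a minimal sketch (Python).
The min_size line is an assumption on my part -- I believe K+1 is the
default for EC pools, but I haven't checked our actual pool settings:

    k, m = 4, 2
    shards = k + m          # 6 shards per object, one per OSD
    min_size = k + 1        # assumed EC-pool default -- unverified
    surviving = shards - 2  # the 2 OSDs we lost in the move

    print(surviving >= k)         # True: any 4 of 6 shards rebuild the data
    print(surviving >= min_size)  # False: 4 < 5, so a PG that lost two
                                  # shards would sit inactive regardless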

Also, these two OSDs happened to be in the same server (#3 of 8 total servers).
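
If it's relevant, here's a toy sketch of why that detail confuses me
further. It assumes our crush rule uses host as the failure domain,
which I haven't re-verified since the move:

    import random

    # Toy placement under that assumption: each PG spreads its 6 shards
    # across 6 distinct hosts, so server #3 holds at most one shard of
    # any given PG, and its 2 dead OSDs should cost each PG at most 1
    # shard -- well within M=2.
    hosts, k, m = 8, 4, 2
    for pg in range(100):
        placement = random.sample(range(hosts), k + m)
        shards_lost = sum(1 for h in placement if h == 3)
        assert shards_lost <= 1 <= m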

This is an older cluster running Nautilus 14.2.9.

Any thoughts?
Thanks
-Dave