How many nodes/OSDs can fail?

Hello,

I'm still very new to Ceph. I've created a small test cluster:

 

ceph-node1:  osd0, osd1, osd2

ceph-node2:  osd3, osd4, osd5

ceph-node3:  osd6, osd7, osd8
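
For context, a CephFS setup with a replication count of 3 is typically created with commands along these lines (the pool names, PG counts, and filesystem name below are only example values, not necessarily what I used):

# create data and metadata pools (names and PG counts are placeholders)
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 32

# keep 3 replicas of the data
ceph osd pool set cephfs_data size 3

# create the filesystem on top of the two pools
ceph fs new cephfs cephfs_metadata cephfs_data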

 

My pool for CephFS has a replication count of 3. I powered off 2 nodes (6 OSDs went down), the cluster status became critical, and my Ceph clients (CephFS) ran into a timeout. My data (I had only one file on the pool) was still present on one of the active OSDs. Is this the expected behaviour, that the cluster status becomes critical and the clients run into a timeout?
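
In case it helps, these are the commands I would run to provide more detail while the two nodes are down (the pool name cephfs_data is again just a placeholder for my data pool):

# replication settings of the data pool
ceph osd pool get cephfs_data size
ceph osd pool get cephfs_data min_size

# cluster, OSD, and placement-group state after the failure
ceph status
ceph osd tree
ceph health detail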

 

Many thanks for your feedback.

 

Regards - Willi

 


