Hi Matt,

Can you post your ceph config? Once you start up your ceph cluster, do you see that linuscs92 is the standby and linuscs95 is the active MDS? How are you starting your cluster?
`service ceph -a start`, and yes, linuscs95 comes up as active.

[global]
        ; enable secure authentication
        ; auth supported = cephx
        ; log_to_syslog = true
        ; keyring = /etc/ceph/keyring.bin
        journal dio = true
        osd op threads = 24
        osd disk threads = 24
        filestore op threads = 6
        filestore queue max ops = 24
        osd client message size cap = 14000000
        ms dispatch throttle bytes = 17500000

; monitors
;  You need at least one. You need at least three if you want to
;  tolerate any node failures. Always create an odd number.
[mon]
        mon data = /vol/disk2/data/mon$id
        ; some minimal logging (just message traffic) to aid debugging
        ; debug ms = 1

[mon.0]
        host = linuscs92
        mon addr = 10.0.30.10:6789

#[mon.1]
#       host = linuscs93
#       mon addr = 10.0.30.11:6789

[mon.1]
        host = linuscs95
        mon addr = 10.0.30.13:6789

; mds
;  You need at least one. Define two to get a standby.
[mds]
        ; where the mds keeps its secret encryption keys
        ; keyring = /etc/ceph/keyring.$name

[mds.linuscs92]
        host = linuscs92

[mds.linuscs95]
        host = linuscs95
        mds standby replay = true
        mds standby for name = linuscs92

#[mds.linuscs94]
#       host = linuscs94

; osd
;  You need at least one. Two if you want data to be replicated.
;  Define as many as you like.
[osd]
        osd journal size = 1024
        ; keyring = /etc/ceph/keyring.$name

[osd.0]
        host = linuscs92
        osd data = /vol/disk2/data/osd$id
        osd journal = /vol/disk1/data/osd$id/journal

[osd.1]
        host = linuscs93
        osd data = /vol/disk2/data/osd$id
        osd journal = /vol/disk1/data/osd$id/journal

[osd.2]
        host = linuscs94
        osd data = /vol/disk2/data/osd$id
        osd journal = /vol/disk1/data/osd$id/journal

[osd.3]
        host = linuscs95
        osd data = /vol/disk2/data/osd$id
        osd journal = /vol/disk1/data/osd$id/journal

[osd.4]
        host = linuscs96
        osd data = /vol/disk2/data/osd$id
        osd journal = /vol/disk1/data/osd$id/journal

[osd.5]
        host = linuscs97
        osd data = /vol/disk2/data/osd$id
        osd journal = /vol/disk1/data/osd$id/journal
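For anyone following along: a quick way to check which MDS actually came up active versus standby is to query the monitors after startup. A minimal sketch, assuming the standard `ceph` CLI can reach the mons listed in the config above (these commands are the stock ceph tools, not something specific to this thread), run on one of the cluster nodes:

```shell
# Start all daemons across the cluster, as in the reply above
service ceph -a start

# Show the MDS map: lists which mds daemon is active and which is
# standby / standby-replay (e.g. mds.linuscs95 vs mds.linuscs92)
ceph mds stat

# Overall cluster status: mon quorum, OSD up/in counts, MDS state
ceph -s
```

If `mds standby replay` and `mds standby for name` are taking effect, `ceph mds stat` should report one daemon active and the other following it in standby-replay.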