The error keeps coming back: the status eventually changes to OK, then goes back into errors. I thought it looked like a connectivity issue as well, given the "wrongly marked me down" messages, but the firewall rules allow all traffic on the cluster network.

Syslog is being flooded with messages like:

Jul 7 10:52:17 ceph1 bash: 2015-07-07 10:52:17.609870 7f2055192700 -1 osd.21 129936 heartbeat_check: no reply from osd.89 ever on either front or back, first ping sent 2015-07-07 10:51:50.995374 (cutoff 2015-07-07 10:51:57.609817)
Jul 7 10:52:17 ceph1 bash: 2015-07-07 10:52:17.611302 7f203ba5b700 -1 osd.21 129936 heartbeat_check: no reply from osd.50 ever on either front or back, first ping sent 2015-07-07 10:51:44.691270 (cutoff 2015-07-07 10:51:57.611297)
Jul 7 10:52:17 ceph1 bash: 2015-07-07 10:52:17.611309 7f203ba5b700 -1 osd.21 129936 heartbeat_check: no reply from osd.61 ever on either front or back, first ping sent 2015-07-07 10:51:50.995374 (cutoff 2015-07-07 10:51:57.611297)
Jul 7 10:52:17 ceph1 bash: 2015-07-07 10:52:17.611315 7f203ba5b700 -1 osd.21 129936 heartbeat_check: no reply from osd.69 ever on either front or back, first ping sent 2015-07-07 10:51:54.998259 (cutoff 2015-07-07 10:51:57.611297)

That's just a small section; many OSDs are listed, and eventually the log messages are rate limited because they're coming in so fast.

On Tue, Jul 7, 2015 at 10:13 AM, Abhishek L <abhishek.lekshmanan@xxxxxxxxx> wrote:
>
> Steve Dainard writes:
>
>> Hello,
>>
>> Ceph 0.94.1
>> 2 hosts, Centos 7
>>
>> I have two hosts, one of which ran out of / disk space, which crashed all
>> the osd daemons. After cleaning up the OS disk storage and restarting
>> ceph on that node, I'm seeing multiple errors, then health OK, then
>> back into the errors:
>>
>> # ceph -w
>> http://pastebin.com/mSKwNzYp
>
> Is the error still consistently happening? (the last lines show
> active+clean) Wild guess, but is it possible some sort of
> iptables/firewall rules are preventing communication between the osds?
>
>> Any help is appreciated.
>>
>> Thanks,
>> Steve
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> --
> Abhishek
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
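P.S. Since the syslog flood makes it hard to see which OSDs are actually affected, here is a minimal sketch of a parser that tallies the OSDs flagged by heartbeat_check. It assumes only the log format shown above; the function and variable names are my own, not part of Ceph:

```python
import re
from collections import Counter

# Matches the "heartbeat_check: no reply from osd.N" portion of the
# syslog lines quoted above and captures the unresponsive OSD's id.
PATTERN = re.compile(
    r"heartbeat_check: no reply from (osd\.\d+) ever on either front or back"
)

def tally_unresponsive_osds(log_lines):
    """Count how often each OSD is reported as unresponsive."""
    counts = Counter()
    for line in log_lines:
        match = PATTERN.search(line)
        if match:
            counts[match.group(1)] += 1
    return counts

# One of the syslog lines from above, as sample input.
sample = [
    "Jul 7 10:52:17 ceph1 bash: 2015-07-07 10:52:17.611302 7f203ba5b700 -1 "
    "osd.21 129936 heartbeat_check: no reply from osd.50 ever on either "
    "front or back, first ping sent 2015-07-07 10:51:44.691270 "
    "(cutoff 2015-07-07 10:51:57.611297)",
]
print(tally_unresponsive_osds(sample))  # Counter({'osd.50': 1})
```

Feeding it the whole syslog (e.g. `tally_unresponsive_osds(open("/var/log/messages"))`) should show whether the failures cluster on a few OSDs or span the whole host, which might help narrow down whether this is per-daemon or network-wide.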