Wrong crush map after all OSDs down

Hi all,
I deployed a Ceph cluster (Jewel) on four physical machines.
Three of the machines were used for OSDs, with eight OSDs each.
The remaining machine served as the monitor.
At first, everything worked well.
For testing purposes, I stopped all the OSD daemons and double-checked
that no OSD processes were running.
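
For reference, this is roughly how I stopped and verified the daemons on
each OSD host (just a sketch; I am assuming systemd-managed OSDs here,
so the exact unit names and commands may differ on other setups):

    # stop every OSD daemon on this host (systemd target for all OSDs)
    systemctl stop ceph-osd.target
    # confirm no ceph-osd process is left running
    pgrep -a ceph-osd || echo "no ceph-osd processes"
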
After that, I executed ceph -s and got the following output:
 osdmap e164: 24 osds: 7 up, 7 in
No matter how much time elapsed, the output didn't change.

The output I expected was:
osdmap e164: 24 osds: 0 up, 0 in
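
In case it helps with diagnosing, the commands below are roughly what I
used on the monitor node to look at the per-OSD view (again only a
sketch; I have omitted the actual output):

    # summary of OSD count and up/in state
    ceph osd stat
    # per-OSD up/down and in/out state in the CRUSH tree
    ceph osd tree
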

I think this is a matter of synchronisation between the OSDs and the monitor.
Could you please explain this strange phenomenon to me?
Thanks a bunch in advance.