Re: The cluster is not aware that some OSDs have disappeared

On Tue, Jul 31, 2012 at 6:07 PM,  <Eric_YH_Chen@xxxxxxxxxx> wrote:
> Hi, Josh:
>
> I did not assign the crushmap myself; I used the default setting.
> And after I rebooted the server, I cannot reproduce this situation.
> The heartbeat check works fine when one of the servers is not available.

If you don't do anything to your crushmap, all your OSDs sit in a
flat tree, with no understanding of your failure domains. You really
should configure it. (We really should document it better!)

The newer upstart scripts (/etc/init/ceph-osd.conf instead of
/etc/init.d/ceph) at least set the hostname by default, but that still
ignores racks, rooms etc.
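For illustration, here is a sketch of what a decompiled CRUSH map fragment
with host and rack buckets could look like (dump with `ceph osd getcrushmap`
and decompile with `crushtool -d`; the bucket names, IDs, and weights below
are hypothetical, not defaults):

```
# hypothetical host/rack hierarchy in a decompiled CRUSH map
host node1 {
        id -2                    # bucket ids are negative
        alg straw
        hash 0                   # rjenkins1
        item osd.0 weight 1.000
        item osd.1 weight 1.000
}
rack rack1 {
        id -3
        alg straw
        hash 0
        item node1 weight 2.000
}
root default {
        id -1
        alg straw
        hash 0
        item rack1 weight 2.000
}
rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        # spread replicas across racks rather than just OSDs
        step chooseleaf firstn 0 type rack
        step emit
}
```

With a hierarchy like this, the `chooseleaf ... type rack` step makes CRUSH
place each replica under a different rack, so losing a whole rack doesn't
take out every copy of an object.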
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

