Re: Redundancy with Ceph

On 05.07.2010 09:55, Christian Baun wrote:
Hi Thomas,

Thanks a lot for the link and your help!
Now the issue is clear.

The output of "ceph osd dump -o -" showed that there should be replication: max_osd is 3 and the size of all pools is 2. But osd2 was "out down".
The reason: I had forgotten to start the ceph server on osd2.
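(For reference, one way to recheck the cluster state after starting the missing daemon, assuming a 2010-era Ceph install where the "ceph" admin tool is on the PATH; the exact output wording can differ between versions:

  # dump the OSD map again; osd2 should now be reported "up in"
  ceph osd dump -o -

If osd2 still shows as "out down", the osd daemon on that host is most likely not running or cannot reach the monitors.)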


You can also use the --allhosts option with /etc/init.d/ceph.

Example:

/etc/init.d/ceph --allhosts start

This starts Ceph on all configured hosts via ssh.
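For that to work, the init script needs to know which host each daemon runs on. Here is a minimal sketch of the relevant ceph.conf entries, assuming the 2010-era config format and hypothetical hostnames (node0, node1, node2); section and option names may differ in your version:

  [osd.0]
          host = node0
  [osd.1]
          host = node1
  [osd.2]
          host = node2

The init script reads the "host =" entries and sshes to each of those machines to start the matching daemon. "-a" is the short form of "--allhosts", e.g. "/etc/init.d/ceph -a start".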

- Thomas

