Re: Complete failure

On Wed, 1 Feb 2017, Boris Yakovich wrote:
> Hi Guys,
> 
> I have a three-node test environment I have been playing with.
> Each node has two hard drives: one runs the OS, the second holds Ceph data.
> One node had a hard disk failure on the OS disk. I reinstalled the OS (Ubuntu 14)
> and tried to add it back into the cluster. Now the entire cluster is failing.
> I'm pretty new to this and completely lost on how to get it back up.
> ceph -s just shows lines and lines of fault errors. It seems all the monitors are down.

If you reinstalled the OS you probably lost everything in /etc/ceph 
(ceph.conf and the keyring files).
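
If another node still has those files, copying them back is usually enough 
for the node to talk to the cluster again. A minimal sketch, assuming the 
default cluster name and a surviving node reachable as node2 (a hypothetical 
hostname):

  # hypothetical hostname; copy the config and admin keyring back
  # from a node that still has them
  scp node2:/etc/ceph/ceph.conf /etc/ceph/
  scp node2:/etc/ceph/ceph.client.admin.keyring /etc/ceph/

  # then check whether the node can reach the monitors again
  ceph -s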

s