Re: Broken Ceph Cluster when adding new one - Proxmox 5.0 & Ceph Luminous


 



On 16/07/2017 at 17:02, Udo Lembke wrote:
Hi,

On 16.07.2017 15:04, Phil Schwarz wrote:
...
Same result, the OSD is known by the node, but not by the cluster.
...
Firewall? Or a mismatch in /etc/hosts or DNS?

Udo

OK,
- No FW.
- No DNS issue at this point (quick check below).
- Same procedure followed as with the last node, except for a full cluster update before adding the new node and new OSD.
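
For anyone wanting to double-check the same thing, a minimal check (node1..node3 are placeholder names, not the real hosts): every node should resolve every other cluster member to the same IP, and those IPs should match what the monitors advertise.

for h in node1 node2 node3; do
    echo "== $h =="
    getent hosts "$h"        # what /etc/hosts or DNS returns locally
done
ceph mon dump                # addresses the monitors actually use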


The only thing left is the strange behavior of the 'pveceph createosd' command,
which was shown in the previous mail.

...
systemd[1]: ceph-disk@dev-sdc1.service: Main process exited, code=exited, status=1/FAILURE
systemd[1]: Failed to start Ceph disk activation: /dev/sdc1.
systemd[1]: ceph-disk@dev-sdc1.service: Unit entered failed state.
systemd[1]: ceph-disk@dev-sdc1.service: Failed with result 'exit-code'....
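
In case it gives more to go on, the journal usually carries the actual error behind that unit failure, and the activation can be retried by hand; a rough sketch, assuming the OSD partition really is /dev/sdc1:

journalctl -u ceph-disk@dev-sdc1.service --no-pager    # full output of the failed activation
ceph-disk list                                         # how ceph-disk sees the disks/partitions
ceph-disk --verbose activate /dev/sdc1                 # retry activation with verbose output
ceph osd tree                                          # did the OSD register with the cluster?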

What consequences should I expect when switching /etc/hosts from public IPs to private IPs? (apart from a time-travel paradox or a black hole bursting...)
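
As far as I understand it, the addresses Ceph itself uses come from ceph.conf (the public/cluster networks and the mon addresses) rather than from /etc/hosts alone, so whatever the hosts file resolves to has to stay consistent with those entries; a monitor's IP is also recorded in the monmap, so changing it takes more than editing /etc/hosts. A minimal sketch of the relevant ceph.conf section (subnets and addresses are made up, not this cluster's):

[global]
    # placeholder subnets -- replace with the cluster's real ones
    public network  = 192.168.0.0/24      # mon/client traffic
    cluster network = 10.10.10.0/24       # OSD replication traffic
    mon host        = 192.168.0.11,192.168.0.12,192.168.0.13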

Thanks.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



