Re: Adding mon manually

Once I figure out how to get my cluster healthy again after the monitor problem discussed below, I will try ceph-deploy again and send you the output.

I have been trying to re-inject the last healthy monmap into all the nodes; however, this has proved unsuccessful so far.
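
For reference, the re-injection procedure I have been attempting looks roughly like the following. The monitor ID and file paths here are placeholders, and the daemon is stopped before the store is touched:

# stop the surviving monitor before modifying its store
sudo service ceph stop mon.<surviving-mon-id>
# extract the last map it has (or substitute a saved known-good copy)
sudo ceph-mon -i <surviving-mon-id> --extract-monmap /tmp/last-good-monmap
# sanity-check the contents
monmaptool --print /tmp/last-good-monmap
# drop the half-added monitor from the map
monmaptool --rm ldtdsr000000559 /tmp/last-good-monmap
# inject the cleaned map back and restart
sudo ceph-mon -i <surviving-mon-id> --inject-monmap /tmp/last-good-monmap
sudo service ceph start mon.<surviving-mon-id>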

-----Original Message-----
From: Alfredo Deza [mailto:alfredo.deza@xxxxxxxxxxx] 
Sent: Wednesday, January 22, 2014 8:15 PM
To: Whittle, Alistair: Investment Bank (LDN)
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Adding mon manually

On Wed, Jan 22, 2014 at 12:47 PM,  <alistair.whittle@xxxxxxxxxxxx> wrote:
> All,
>
>
>
> Having failed to successfully add new monitors using ceph-deploy, I
> tried the documented manual approach.
>
Would you be able to share why/how it didn't work? Some logs or output would be great so that we can continue to improve the tool.
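
Even just re-running it and capturing everything would be useful, for example (host name taken from your commands below):

# re-run the failing step and keep a full transcript
ceph-deploy mon create ldtdsr000000559 2>&1 | tee mon-create.log

ceph-deploy should also leave a log file in the directory it was run from.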
>
>
> The platform:
>
> OS:  RHEL 6.4
>
> Ceph:  Emperor
>
> Ceph-deploy:  1.3.4-0
>
>
>
> When following the procedure on an existing node in a working cluster
> that has a single monitor (configured as part of the quick start
> procedure through ceph-deploy), I get the following error when running
> this set of commands on a second node:
>
>
>
> # sudo mkdir /var/lib/ceph/mon/ceph-ldtdsr000000559
>
> # ceph auth get mon. -o /home/ceph/tmp/monkey
>
> # ceph mon getmap -o /home/ceph/tmp/monmap
>
> # sudo ceph-mon -i ldtdsr000000559 --mkfs --monmap /home/ceph/tmp/monmap --keyring /home/ceph/tmp/monkey
>
> ceph-mon: set fsid to 74fdf8eb-fa3a-47ea-afde-593894c86cac
>
> ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ldtdsr000000559 for mon.ldtdsr000000559
>
> # ceph mon add ldtdsr000000559 10.123.4.72:6789
>
> 2014-01-22 17:27:41.626839 7f388f5fe700  0 monclient: hunting for new mon
>
>
>
> And there it sits. It sat for many minutes with no further output;
> eventually I had to kill the process manually.
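>
> As an aside, the manual procedure in the docs also has a step for
> starting the new daemon so it can join, which would look something
> like this (same address as used in "ceph mon add" above):
>
> # sudo ceph-mon -i ldtdsr000000559 --public-addr 10.123.4.72:6789
>
> I mention it in case the order of operations matters here.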
>
>
>
> While this command was being run, and subsequently after killing the
> process, the remaining nodes began timing out when running "ceph health"
> or "ceph status" commands. This tells me that the previous command got
> stuck somewhere and has left the remaining monitor unavailable.
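>
> For what it's worth, the surviving monitor can still be queried
> directly over its admin socket even while "ceph health" hangs, e.g.
> (default socket path; substitute the local mon ID):
>
> # sudo ceph --admin-daemon /var/run/ceph/ceph-mon.<id>.asok mon_status
>
> That should show the monmap the daemon currently holds and whether it
> is stuck outside quorum.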
>
>
>
> Any ideas what is going on here?
>
>
>
> Thanks
>
> Alistair
>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



