Re: OSD activation issue

Hello Alistair

If I recall my problem correctly, I added my monitor manually at this stage and things started working for me.

You should follow http://ceph.com/docs/master/rados/operations/add-or-rm-mons/ and that should crack this problem.

If you need my help, come to #ceph on IRC (my id is ksingh) and we will figure out together where the heck the problem is.
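
For reference, the manual steps from that doc look roughly like this; the mon id, hostname and IP below are placeholders for your own values, so please double-check against the doc for your version:

    # on the new monitor host
    mkdir -p /var/lib/ceph/mon/ceph-{mon-id}
    ceph auth get mon. -o /tmp/mon.keyring          # fetch the monitor keyring
    ceph mon getmap -o /tmp/monmap                  # fetch the current monmap
    ceph-mon -i {mon-id} --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
    ceph mon add {mon-id} {ip}:6789                 # register the new mon in the monmap
    ceph-mon -i {mon-id} --public-addr {ip}:6789    # start the new monitor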

Regards
Karan Singh


From: "Jurvis LaSalle" <Jurvis.Lasalle@xxxxxxxxxxxxxxxxxxxxx>
To: "alistair whittle" <alistair.whittle@xxxxxxxxxxxx>, ceph-users@xxxxxxxxxxxxxx
Sent: Tuesday, 29 October, 2013 7:01:07 PM
Subject: Re: OSD activation issue

I was able to add a public_network line to the config on the admin host and push the config to the nodes with a "ceph-deploy --overwrite-conf config push rc-ceph-node1 rc-ceph-node2 rc-ceph-node3".  I was able to follow the quickstart after that without further incident.  Rzk had to take additional steps.  Search the list for his fix if mine doesn't help.
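
For what it's worth, the change itself was just one line in ceph.conf on the admin host before the push, something along these lines (the subnet is whatever network your monitors actually sit on, not this example value):

    [global]
    public_network = 192.168.1.0/24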
 

From: "alistair.whittle@xxxxxxxxxxxx" <alistair.whittle@xxxxxxxxxxxx>
Date: Tuesday, October 29, 2013 11:27 AM
To: "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: OSD activation issue

Thanks.   It does seem to be working OK, and I can create and remove objects without issues.

 

I am, however, having another problem.   In trying to add additional monitors to my cluster I am getting the following errors (note I did not see this when deploying the first and currently only running monitor).   It seems to set the monitor up fine, but then has problems starting it.

 

[ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy mon create ldtdsr02se20

[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ldtdsr02se20

[ceph_deploy.mon][DEBUG ] detecting platform for host ldtdsr02se20 ...

[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo

[ceph_deploy.mon][INFO  ] distro info: RedHatEnterpriseServer 6.4 Santiago

[ldtdsr02se20][DEBUG ] determining if provided host has same hostname in remote

[ldtdsr02se20][DEBUG ] deploying mon to ldtdsr02se20

[ldtdsr02se20][DEBUG ] remote hostname: ldtdsr02se20

[ldtdsr02se20][INFO  ] write cluster configuration to /etc/ceph/{cluster}.conf

[ldtdsr02se20][INFO  ] creating path: /var/lib/ceph/mon/ceph-ldtdsr02se20

[ldtdsr02se20][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ldtdsr02se20/done

[ldtdsr02se20][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ldtdsr02se20/done

[ldtdsr02se20][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-ldtdsr02se20.mon.keyring

[ldtdsr02se20][INFO  ] create the monitor keyring file

[ldtdsr02se20][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i ldtdsr02se20 --keyring /var/lib/ceph/tmp/ceph-ldtdsr02se20.mon.keyring

[ldtdsr02se20][INFO  ] ceph-mon: set fsid to 148d95d1-a069-491d-8780-1bcbbefe624a

[ldtdsr02se20][INFO  ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-ldtdsr02se20 for mon.ldtdsr02se20

[ldtdsr02se20][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-ldtdsr02se20.mon.keyring

[ldtdsr02se20][INFO  ] create a done file to avoid re-doing the mon deployment

[ldtdsr02se20][INFO  ] create the init path if it does not exist

[ldtdsr02se20][INFO  ] locating `service` executable...

[ldtdsr02se20][INFO  ] found `service` executable: /sbin/service

[ldtdsr02se20][INFO  ] Running command: sudo /sbin/service ceph -c /etc/ceph/ceph.conf start mon.ldtdsr02se20

[ldtdsr02se20][DEBUG ] === mon.ldtdsr02se20 ===

[ldtdsr02se20][DEBUG ] Starting Ceph mon.ldtdsr02se20 on ldtdsr02se20...

[ldtdsr02se20][DEBUG ] failed: 'ulimit -n 32768;  /usr/bin/ceph-mon -i ldtdsr02se20 --pid-file /var/run/ceph/mon.ldtdsr02se20.pid -c /etc/ceph/ceph.conf'

[ldtdsr02se20][DEBUG ] Starting ceph-create-keys on ldtdsr02se20...

[ldtdsr02se20][WARNIN] No data was received after 7 seconds, disconnecting...

[ldtdsr02se20][INFO  ] Running command: sudo ceph --admin-daemon /var/run/ceph/ceph-mon.ldtdsr02se20.asok mon_status

[ldtdsr02se20][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory

[ldtdsr02se20][WARNIN] monitor: mon.ldtdsr02se20, might not be running yet

[ldtdsr02se20][INFO  ] Running command: sudo ceph --admin-daemon /var/run/ceph/ceph-mon.ldtd

[ldtdsr02se20][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory

[ldtdsr02se20][WARNIN] ldtdsr02se20 is not defined in `mon initial members`

[ldtdsr02se20][WARNIN] monitor ldtdsr02se20 does not exist in monmap

[ldtdsr02se20][WARNIN] neither `public_addr` nor `public_network` keys are defined for monit

[ldtdsr02se20][WARNIN] monitors may not be able to form quorum
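
Reading the last two warnings, I suspect ceph.conf needs the new monitor listed in `mon initial members` and a `public_network` defined before retrying, roughly along these lines (the existing mon name and the subnet are placeholders; I have not confirmed this yet):

    [global]
    mon initial members = {existing-mon}, ldtdsr02se20
    public_network = {monitor-subnet}

followed by pushing the updated config and re-running mon create:

    ceph-deploy --overwrite-conf config push ldtdsr02se20
    ceph-deploy mon create ldtdsr02se20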

 

From: Karan Singh [mailto:ksingh@xxxxxx]
Sent: Tuesday, October 29, 2013 12:13 PM
To: Whittle, Alistair: Investment Bank (LDN)
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: OSD activation issue

 

Hello Alistair

 

I also faced exactly the same issue with one of my OSDs: after "osd activate", progress hung, but the OSD was finally added to the cluster with no problem.

 

My cluster is running without known issues as of now. If this is a test setup you can ignore this, but keep an eye on it.

 

Regards

Karan Singh

 


From: "alistair whittle" <alistair.whittle@xxxxxxxxxxxx>
To: ceph-users@xxxxxxxxxxxxxx
Sent: Tuesday, 29 October, 2013 1:10:00 PM
Subject: OSD activation issue

 

Hello all,

 

I am getting some issues when activating OSDs on my Red Hat 6.4 Ceph cluster.  I am using the quick start mechanism, so I mounted a new xfs filesystem and ran the “osd prepare” command.
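
For reference, the steps were roughly as follows (the device name is a placeholder for the actual disk); the filesystem was created and mounted on the OSD host and the prepare run from the admin node:

    mkfs.xfs -f /dev/{device}
    mkdir -p /osd2
    mount /dev/{device} /osd2
    ceph-deploy osd prepare ldtdsr02se19:/osd2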

 

The prepare seemed to be successful as per the log output below:

 

[ceph_deploy.cli][INFO  ] Invoked (1.2.7): /usr/bin/ceph-deploy osd prepare ldtdsr02se19:/osd2

[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ldtdsr02se19:/osd2:

[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo

[ceph_deploy.osd][INFO  ] Distro info: RedHatEnterpriseServer 6.4 Santiago

[ceph_deploy.osd][DEBUG ] Deploying osd to ldtdsr02se19

[ldtdsr02se19][INFO  ] write cluster configuration to /etc/ceph/{cluster}.conf

[ldtdsr02se19][INFO  ] keyring file does not exist, creating one at: /var/lib/ceph/bootstrap-osd/ceph.keyring

[ldtdsr02se19][INFO  ] create mon keyring file

[ldtdsr02se19][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add

[ceph_deploy.osd][DEBUG ] Preparing host ldtdsr02se19 disk /osd2 journal None activate False

[ldtdsr02se19][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /osd2

[ceph_deploy.osd][DEBUG ] Host ldtdsr02se19 is now ready for osd use.

 

When I ran the activate, however, I got the following:

 

[ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo

[ceph_deploy.osd][INFO  ] Distro info: RedHatEnterpriseServer 6.4 Santiago

[ceph_deploy.osd][DEBUG ] activating host ldtdsr02se19 disk /osd2

[ceph_deploy.osd][DEBUG ] will use init type: sysvinit

[ldtdsr02se19][INFO  ] Running command: ceph-disk-activate --mark-init sysvinit --mount /osd2

[ldtdsr02se19][INFO  ] === osd.1 ===

[ldtdsr02se19][INFO  ] Starting Ceph osd.1 on ldtdsr02se19...

[ldtdsr02se19][INFO  ] starting osd.1 at :/0 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal

[ldtdsr02se19][ERROR ] got latest monmap

[ldtdsr02se19][ERROR ] 2013-10-29 10:45:26.373347 7fbaa1b597a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway

[ldtdsr02se19][ERROR ] 2013-10-29 10:45:26.403218 7fbaa1b597a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway

[ldtdsr02se19][ERROR ] 2013-10-29 10:45:26.405363 7fbaa1b597a0 -1 filestore(/osd2) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory

[ldtdsr02se19][ERROR ] 2013-10-29 10:45:26.461080 7fbaa1b597a0 -1 created object store /osd2 journal /osd2/journal for osd.1 fsid 148d95d1-a069-491d-8780-1bcbbefe624a

[ldtdsr02se19][ERROR ] 2013-10-29 10:45:26.461177 7fbaa1b597a0 -1 auth: error reading file: /osd2/keyring: can't open /osd2/keyring: (2) No such file or directory

[ldtdsr02se19][ERROR ] 2013-10-29 10:45:26.461306 7fbaa1b597a0 -1 created new key in keyring /osd2/keyring

[ldtdsr02se19][ERROR ] added key for osd.1

[ldtdsr02se19][ERROR ] create-or-move updating item name 'osd.1' weight 0.1 at location {host=ldtdsr02se19,root=default} to crush map

 

At this point, ceph-deploy just hung.  After waiting for many minutes, I had to kill it manually.   Looking at the OSD directory, though, a lot of information has been added there, and when I run “ceph health” it comes back as “HEALTH_OK”.
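
Other than “ceph health”, I am not sure what else to verify; presumably something like the following standard commands from the admin node:

    ceph osd tree    # confirm osd.1 is listed, up and in under host ldtdsr02se19
    ceph osd stat    # count of OSDs known/up/in
    ceph -s          # overall cluster and PG status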

 

Is it really OK?

 

Thanks

 

 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
