Re: Can someone please help me here?

On Fri, Nov 8, 2013 at 11:04 AM, Trivedi, Narendra
<Narendra.Trivedi@xxxxxxxxxx> wrote:
> Hi Alfredo,
>
>
>
> See the steps I executed below and the weird error I get when trying to
> activate the OSDs. The last series of error messages loops indefinitely;
> it has been printing for more than 2 days now. FYI, /etc/ceph existed on
> all nodes after ceph-deploy install; I checked after running it. Do you
> need the ceph.log file?

This looks good enough. I see at least two different ceph-deploy
versions in use (1.2.7 and 1.3). Have you tried
running this with 1.3.1?

I believe that you *might* be hitting a small issue in 1.3 that just
got fixed and released.
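To confirm which ceph-deploy the admin node is actually picking up (your log shows both "Invoked (1.2.7)" and "Invoked (1.3)", which suggests two copies are installed), something like the following should tell you. The yum lines assume you installed from the ceph-noarch repo; adjust if you used pip instead:

```shell
# Show the version of the ceph-deploy that runs by default.
if command -v ceph-deploy >/dev/null 2>&1; then
    ceph-deploy --version
else
    echo "ceph-deploy not found on PATH"
fi

# Upgrading to 1.3.1 on CentOS, assuming the ceph-noarch yum repo is configured:
#   sudo yum clean metadata
#   sudo yum update ceph-deploy
```

If both an RPM and a pip install are present, remove one of them so every invocation uses the same version.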
>
>
>
> [ceph@ceph-admin-node-centos-6-4 ~]# mkdir my-cluster
>
> [ceph@ceph-admin-node-centos-6-4 ~]# cd my-cluster/
>
>
>
> [ceph@ceph-admin-node-centos-6-4 ~]# ceph-deploy new
> ceph-node1-mon-centos-6-4
>
>
>
> The above command creates a ceph.conf file with the cluster information in
> it. A log file named ceph.log will also be created.
>
>
>
> [ceph@ceph-admin-node-centos-6-4 my-cluster]# ceph-deploy install
> ceph-node1-mon-centos-6-4 ceph-node2-osd0-centos-6-4
> ceph-node3-osd1-centos-6-4
>
>
>
> This will install Ceph on all the nodes.
>
>
>
> I added a ceph monitor node:
>
>
>
> [ceph@ceph-admin-node-centos-6-4 mycluster]$ ceph-deploy mon create
> ceph-node1-mon-centos-6-4
>
>
>
> Gather keys:
>
>
>
> [ceph@ceph-admin-node-centos-6-4 mycluster]$ ceph-deploy gatherkeys
> ceph-node1-mon-centos-6-4
>
>
>
> After gathering keys, I made sure the directory contains the monitor,
> admin, OSD, and MDS keyrings:
>
>
>
> [ceph@ceph-admin-node-centos-6-4 mycluster]$ ls
>
> ceph.bootstrap-mds.keyring ceph.bootstrap-osd.keyring
> ceph.client.admin.keyring ceph.conf ceph.log ceph.mon.keyring
>
>
>
> Added two OSDs:
>
>
>
> SSH'ed to both OSD nodes, i.e. ceph-node2-osd0-centos-6-4 and
> ceph-node3-osd1-centos-6-4, and created two directories to be used by the
> Ceph OSD daemons:
>
>
>
>
>
> [ceph@ceph-admin-node-centos-6-4 mycluster]$ ssh ceph-node2-osd0-centos-6-4
>
> Last login: Wed Oct 30 10:51:11 2013 from ceph-admin-node-centos-6-4
>
> [ceph@ceph-node2-osd0-centos-6-4 ~]$ sudo mkdir -p /ceph/osd0
>
> [ceph@ceph-node2-osd0-centos-6-4 ~]$ exit
>
> logout
>
> Connection to ceph-node2-osd0-centos-6-4 closed.
>
>
>
> [ceph@ceph-admin-node-centos-6-4 mycluster]$ ssh ceph-node3-osd1-centos-6-4
>
> Last login: Wed Oct 30 10:51:20 2013 from ceph-admin-node-centos-6-4
>
> [ceph@ceph-node3-osd1-centos-6-4 ~]$ sudo mkdir -p /ceph/osd1
>
> [ceph@ceph-node3-osd1-centos-6-4 ~]$ exit
>
> logout
>
> Connection to ceph-node3-osd1-centos-6-4 closed.
>
> [ceph@ceph-admin-node-centos-6-4 mycluster]$
>
>
>
> Used ceph-deploy to prepare the OSDs:
>
>
>
> [ceph@ceph-admin-node-centos-6-4 mycluster]$ ceph-deploy osd prepare
> ceph-node2-osd0-centos-6-4:/ceph/osd0 ceph-node3-osd1-centos-6-4:/ceph/osd1
>
> [ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy osd prepare
> ceph-node2-osd0-centos-6-4:/ceph/osd0 ceph-node3-osd1-centos-6-4:/ceph/osd1
>
>
>
> .
>
> .
>
> .
>
>
>
> [ceph-node3-osd1-centos-6-4][INFO ] create mon keyring file
>
> [ceph-node3-osd1-centos-6-4][INFO ] Running command: udevadm trigger
> --subsystem-match=block --action=add
>
> [ceph_deploy.osd][DEBUG ] Preparing host ceph-node3-osd1-centos-6-4 disk
> /ceph/osd1 journal None activate False
>
> [ceph-node3-osd1-centos-6-4][INFO ] Running command: ceph-disk-prepare
> --fs-type xfs --cluster ceph -- /ceph/osd1
>
> [ceph_deploy.osd][DEBUG ] Host ceph-node3-osd1-centos-6-4 is now ready for
> osd use.
>
> [ceph@ceph-admin-node-centos-6-4 mycluster]$
>
>
>
> Finally, I activated the OSDs:
>
>
>
> [ceph@ceph-admin-node-centos-6-4 mycluster]$ ceph-deploy osd activate
> ceph-node2-osd0-centos-6-4:/ceph/osd0 ceph-node3-osd1-centos-6-4:/ceph/osd1
>
> 2013-11-06 14:26:53,373 [ceph_deploy.cli][INFO  ] Invoked (1.3):
> /usr/bin/ceph-deploy osd activate ceph-node2-osd0-centos-6-4:/ceph/osd0
> ceph-node3-osd1-centos-6-4:/ceph/osd1
>
> 2013-11-06 14:26:53,373 [ceph_deploy.osd][DEBUG ] Activating cluster ceph
> disks ceph-node2-osd0-centos-6-4:/ceph/osd0:
> ceph-node3-osd1-centos-6-4:/ceph/osd1:
>
> 2013-11-06 14:26:53,646 [ceph-node2-osd0-centos-6-4][DEBUG ] connected to
> host: ceph-node2-osd0-centos-6-4
>
> 2013-11-06 14:26:53,646 [ceph-node2-osd0-centos-6-4][DEBUG ] detect platform
> information from remote host
>
> 2013-11-06 14:26:53,662 [ceph-node2-osd0-centos-6-4][DEBUG ] detect machine
> type
>
> 2013-11-06 14:26:53,670 [ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4
> Final
>
> 2013-11-06 14:26:53,670 [ceph_deploy.osd][DEBUG ] activating host
> ceph-node2-osd0-centos-6-4 disk /ceph/osd0
>
> 2013-11-06 14:26:53,670 [ceph_deploy.osd][DEBUG ] will use init type:
> sysvinit
>
> 2013-11-06 14:26:53,670 [ceph-node2-osd0-centos-6-4][INFO  ] Running
> command: sudo ceph-disk-activate --mark-init sysvinit --mount /ceph/osd0
>
> 2013-11-06 14:26:53,891 [ceph-node2-osd0-centos-6-4][ERROR ] 2013-11-06
> 14:26:54.835529 7f589c9b7700  0 -- :/1019489 >> 10.12.0.70:6789/0
> pipe(0x7f5898024480 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f58980246e0).fault
>
> 2013-11-06 14:26:56,914 [ceph-node2-osd0-centos-6-4][ERROR ] 2013-11-06
> 14:26:57.830775 7f589c8b6700  0 -- :/1019489 >> 10.12.0.70:6789/0
> pipe(0x7f588c000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f588c000e60).fault
>
> 2013-11-06 14:26:59,886 [ceph-node2-osd0-centos-6-4][ERROR ] 2013-11-06
> 14:27:00.831031 7f589c9b7700  0 -- :/1019489 >> 10.12.0.70:6789/0
> pipe(0x7f588c003010 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f588c003270).fault
>
> 2013-11-06 14:27:03,914 [ceph-node2-osd0-centos-6-4][ERROR ] 2013-11-06
> 14:27:04.831257 7f589c8b6700  0 -- :/1019489 >> 10.12.0.70:6789/0
> pipe(0x7f588c003a70 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f588c003cd0).fault
>
> 2013-11-06 14:27:06,886 [ceph-node2-osd0-centos-6-4][ERROR ] 2013-11-06
> 14:27:07.831298 7f589c9b7700  0 -- :/1019489 >> 10.12.0.70:6789/0
> pipe(0x7f588c005550 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f588c0057b0).fault
>
> 2013-11-06 14:27:09,909 [ceph-node2-osd0-centos-6-4][ERROR ] 2013-11-06
> 14:27:10.831239 7f589c8b6700  0 -- :/1019489 >> 10.12.0.70:6789/0 pipe
>
>
>
> -----Original Message-----
> From: Alfredo Deza [mailto:alfredo.deza@xxxxxxxxxxx]
> Sent: Thursday, November 07, 2013 4:05 PM
> To: Trivedi, Narendra
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  Can someone please help me here?
>
>
>
> On Thu, Nov 7, 2013 at 3:25 PM, Trivedi, Narendra
> <Narendra.Trivedi@xxxxxxxxxx> wrote:
>
>> I can't install Ubuntu... I am not sure why it would do this on a new
>> install of CentOS. I wanted to try this to see if I can use it as the
>> RBD/Radosgw backend for OpenStack in production, but I can't believe it has
>> taken this long to get it running and I am not there yet!
>
>>
>
>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



