Re: trouble with ceph-deploy


> Can you share the output of the commands that do not work for you? How
> did `create` not work? What did you see in the logs?

 

In the logs everything looks good. After

ceph-deploy disk zap ceph001:sdaa ceph001:sda1

and

ceph-deploy osd create ceph001:sdaa:/dev/sda1

where:

HOST: ceph001

DISK: sdaa

JOURNAL: /dev/sda1

the log shows:

==============================================

cat ceph.log

2013-08-30 13:06:42,030 [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph001:/dev/sdaa:/dev/sda1

2013-08-30 13:06:42,590 [ceph_deploy.osd][DEBUG ] Deploying osd to ceph001

2013-08-30 13:06:42,627 [ceph_deploy.osd][DEBUG ] Host ceph001 is now ready for osd use.

2013-08-30 13:06:42,627 [ceph_deploy.osd][DEBUG ] Preparing host ceph001 disk /dev/sdaa journal /dev/sda1 activate True

+++++++++++++++++++++++++++++++++++++++++++++++
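The log ends right after the "Preparing host" line, with no remote ceph-disk output captured. If this ceph-deploy version supports the global -v flag, a more verbose run might show what happens on the host (a sketch, not verified against 1.2.1):

    ceph-deploy -v osd create ceph001:sdaa:/dev/sda1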

But:

+++++++++++++++++++++++++++++++++++++++++++++++

ceph -k ceph.client.admin.keyring -s

  cluster 0a2e18d2-fd53-4f01-b63a-84851576c076

   health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds

   monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 2, quorum 0 ceph001

   osdmap e1: 0 osds: 0 up, 0 in

    pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail

   mdsmap e1: 0/0/1 up

+++++++++++++++++++++++++++++++++++++++++++++++

And

+++++++++++++++++++++++++++++++++++++++++++++++

ceph -k ceph.client.admin.keyring osd tree

# id    weight  type name       up/down reweight

-1      0       root default

+++++++++++++++++++++++++++++++++++++++++++++++

OSD not created :(
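For reference, checking directly on the OSD host should show whether prepare wrote anything to the disk at all (a diagnostic sketch; it assumes this ceph release already ships the ceph-disk list subcommand, and sgdisk comes from the gdisk package):

    # on ceph001: list disks and any ceph data/journal roles ceph-disk recognizes
    ceph-disk list
    # inspect the GPT partition table a successful prepare should have written
    sgdisk -p /dev/sdaa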

 

From: Alfredo Deza [mailto:alfredo.deza@xxxxxxxxxxx]
Sent: Thursday, August 29, 2013 5:41 PM
To: Pavel Timoschenkov
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: trouble with ceph-deploy

 

 

 

On Thu, Aug 29, 2013 at 10:23 AM, Pavel Timoschenkov <Pavel@xxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:

Hi.

If I use the example from the docs: http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/#create-osds

ceph-deploy osd prepare ceph001:sdaa:/dev/sda1
ceph-deploy osd activate ceph001:sdaa:/dev/sda1
or
ceph-deploy osd prepare ceph001:/dev/sdaa1:/dev/sda1
ceph-deploy osd activate ceph001:/dev/sdaa:/dev/sda1

or

ceph-deploy osd create ceph001:sdaa:/dev/sda1

the OSD is not created. No errors, but when I execute

ceph -k ceph.client.admin.keyring -s

I see the following:

cluster 4b91a9e9-0e6c-4570-98c6-1398c6900a9e
   health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
   monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 2, quorum 0 ceph001
   osdmap e1: 0 osds: 0 up, 0 in
    pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail
   mdsmap e1: 0/0/1 up

 

0 OSD.

 

But if I use a local folder (/var/lib/ceph/osd/osd001) as the DISK argument, it works, though only with the prepare + activate construction:

ceph-deploy osd prepare ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
ceph-deploy osd activate ceph001:/var/lib/ceph/osd/osd001:/dev/sda1

If I use `create`, the OSD is not created either.

 

 

From: Alfredo Deza [mailto:alfredo.deza@xxxxxxxxxxx]
Sent: Thursday, August 29, 2013 4:36 PM
To: Pavel Timoschenkov
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: trouble with ceph-deploy

 

 

 

On Thu, Aug 29, 2013 at 8:00 AM, Pavel Timoschenkov <Pavel@xxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:

Hi.
New trouble with ceph-deploy. When I'm executing:

ceph-deploy osd prepare ceph001:sdaa:/dev/sda1
ceph-deploy osd activate ceph001:sdaa:/dev/sda1
or
ceph-deploy osd prepare ceph001:/dev/sdaa1:/dev/sda1
ceph-deploy osd activate ceph001:/dev/sdaa:/dev/sda1

 

Have you tried with

    ceph-deploy osd create ceph001:sdaa:/dev/sda1

?

`create` should do `prepare` and `activate` for you. Also be mindful that the arguments need to be passed in the form:

    HOST:DISK[:JOURNAL]

Where JOURNAL is completely optional; this is also detailed here: http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/#create-osds
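For example, both of these forms should be accepted (following the patterns on that page):

    ceph-deploy osd create ceph001:sdaa                # journal colocated on the data disk
    ceph-deploy osd create ceph001:sdaa:/dev/sda1      # explicit journal partition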

Have you followed those instructions to deploy your OSDs?

 


OSD not created:

ceph -k ceph.client.admin.keyring -s
  cluster 4b91a9e9-0e6c-4570-98c6-1398c6900a9e
   health HEALTH_ERR 192 pgs stuck inactive; 192 pgs stuck unclean; no osds
   monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 2, quorum 0 ceph001
   osdmap e1: 0 osds: 0 up, 0 in
    pgmap v2: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail
   mdsmap e1: 0/0/1 up

ceph -k ceph.client.admin.keyring osd tree
# id    weight  type name       up/down reweight
-1      0       root default

but if I create a folder for the ceph data and execute:

ceph-deploy osd prepare ceph001:/var/lib/ceph/osd/osd001:/dev/sda1
ceph-deploy osd activate ceph001:/var/lib/ceph/osd/osd001:/dev/sda1

Those do not look right to me.
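With a plain directory, the OSD data lands on whatever filesystem backs that path (here, presumably the root disk). If the intent is for sdaa to hold the data, the usual pattern is to mount it at that path first (a sketch, assuming an XFS data disk; device names taken from this thread):

    # on ceph001: put a filesystem on the data disk and mount it where the OSD will live
    mkfs.xfs -f /dev/sdaa
    mkdir -p /var/lib/ceph/osd/osd001
    mount /dev/sdaa /var/lib/ceph/osd/osd001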

 

OSD created:

ceph -k ceph.client.admin.keyring -s
  cluster 4b91a9e9-0e6c-4570-98c6-1398c6900a9e
   health HEALTH_WARN 192 pgs stuck inactive; 192 pgs stuck unclean
   monmap e1: 1 mons at {ceph001=172.16.4.32:6789/0}, election epoch 2, quorum 0 ceph001
   osdmap e5: 1 osds: 1 up, 1 in
    pgmap v6: 192 pgs: 192 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail
   mdsmap e1: 0/0/1 up

ceph -k ceph.client.admin.keyring osd tree                   
# id    weight  type name       up/down reweight
-1      0.03999 root default
-2      0.03999         host ceph001
0       0.03999                 osd.0   up      1

Is this a bug, or should I mount the data disks to some directory?


and more:
The 'ceph-deploy osd create' construction doesn't work for me. Only 'prepare' + 'activate' does.

 

When you say `create` didn't work for you, how so? What output did you see? Can you share some logs/output?

 

Can you share the output of the commands that do not work for you? How did `create` not work? What did you see in the logs?
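If the ceph-deploy log has nothing useful, you could also replay the two steps that `create` wraps directly on the OSD host (a sketch; it assumes the prepared data partition shows up as /dev/sdaa1):

    # on ceph001: prepare the data disk with its journal partition, then activate
    ceph-disk prepare /dev/sdaa /dev/sda1
    ceph-disk activate /dev/sdaa1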


dpkg -s ceph-deploy
Version: 1.2.1-1precise


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
