Re: ceph deploy - new osds - do not mount

On Sat, Feb 8, 2014 at 11:37 AM, Manuel Lanazca <manuel81961@xxxxxxxxxxx> wrote:
> Hello Team,
>
> I am building a new cluster with ceph-deploy (Emperor). I successfully
> added 24 OSDs from one host, but when I tried to add more OSDs from the
> next host, they do not mount. The new OSDs are created, but their state
> is down. The "ceph-deploy osd create" command does not show any related
> error. I have also tried "ceph-deploy osd activate". Can someone help
> me, please?
>
> # id    weight  type name       up/down reweight
> -1      6.48    root default
> -2      6.48            host ceph01
> 0       0.27                    osd.0   up      1
> 1       0.27                    osd.1   up      1
> 2       0.27                    osd.2   up      1
> 3       0.27                    osd.3   up      1
> 4       0.27                    osd.4   up      1
> 5       0.27                    osd.5   up      1
> 6       0.27                    osd.6   up      1
> 7       0.27                    osd.7   up      1
> 8       0.27                    osd.8   up      1
> 9       0.27                    osd.9   up      1
> 10      0.27                    osd.10  up      1
> 11      0.27                    osd.11  up      1
> 12      0.27                    osd.12  up      1
> 13      0.27                    osd.13  up      1
> 14      0.27                    osd.14  up      1
> 15      0.27                    osd.15  up      1
> 16      0.27                    osd.16  up      1
> 17      0.27                    osd.17  up      1
> 18      0.27                    osd.18  up      1
> 19      0.27                    osd.19  up      1
> 20      0.27                    osd.20  up      1
> 21      0.27                    osd.21  up      1
> 22      0.27                    osd.22  up      1
> 23      0.27                    osd.23  up      1
> -3      0               host ceph02
>
> 24      0       osd.24  down    0
> 25      0       osd.25  down    0
> 26      0       osd.26  down    0
>
>
>
> admin@ceph01:~/ceph$ ceph-deploy -v osd activate    ceph02:sdd
> [ceph_deploy.cli][INFO  ] Invoked (1.3.4): /usr/bin/ceph-deploy -v osd
> activate ceph02:sdd
> [ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph02:/dev/sdd:
> [ceph02][DEBUG ] connected to host: ceph02
> [ceph02][DEBUG ] detect platform information from remote host
> [ceph02][DEBUG ] detect machine type
> [ceph_deploy.osd][INFO  ] Distro info: Ubuntu 12.04 precise
> [ceph_deploy.osd][DEBUG ] activating host ceph02 disk /dev/sdd
> [ceph_deploy.osd][DEBUG ] will use init type: upstart
> [ceph02][INFO  ] Running command: sudo ceph-disk-activate --mark-init
> upstart --mount /dev/sdd
>
>
This looks like only a small portion of the ceph-deploy logs, but if you
say there were no errors in them, the next thing I would do is check the
Ceph logs on the host that is having trouble getting those OSDs up.
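For example, a quick check on ceph02 might look like this (a sketch only; the paths assume the default Ceph layout, and osd.24 is just the first down OSD from the tree above — adjust for your cluster):

```shell
# Sketch: run on ceph02. The OSD id and paths are assumptions based on
# the default Ceph layout, not taken from the poster's cluster.
OSD_ID=24
LOG=/var/log/ceph/ceph-osd.${OSD_ID}.log
MNT=/var/lib/ceph/osd/ceph-${OSD_ID}

# Was the OSD data partition actually mounted?
mount | grep "ceph-${OSD_ID}" || echo "osd.${OSD_ID} data dir not mounted at ${MNT}"

# Look for mount/activation errors in the OSD's own log, if one exists.
[ -f "${LOG}" ] && tail -n 50 "${LOG}" || echo "no OSD log at ${LOG}"
```

If nothing was mounted at all, re-running the activation step by hand on ceph02 (rather than through ceph-deploy) often surfaces the underlying error directly on stderr.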
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



