Re: ceph-osd fail to be started

Hi Ronny,

Once the disks were activated, the OSDs came back up. Thanks for your info.
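
In case anyone else hits this, the activation step looked roughly like the
following (the device path is only an example, not my actual partition):

sudo ceph-disk-activate /dev/sdb1   # repeat for each 'down' OSD's data partition
ceph osd tree                       # the OSDs should report 'up' again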

Thanks,
Strony


On Tuesday, September 13, 2016 1:00 AM, Ronny Aasen <ronny+ceph-users@xxxxxxxx> wrote:


On 13. sep. 2016 07:10, strony zhang wrote:

> Hi,
>
> My ceph cluster includes 5 OSDs: 3 are installed on the host
> 'strony-tc' and 2 on the host 'strony-pc'. Recently, both hosts
> were rebooted due to power cycles. After all of the disks were mounted
> again, the OSDs are in the 'down' status. I tried the command "sudo
> start ceph-osd id=x" to start the OSDs, but they fail to start, with
> the error below reported in the 'dmesg' output. Any suggestions on how
> to get the OSDs running again? Any comments are appreciated.
>
> "
> [6595400.895147] init: ceph-osd (ceph/1) main process ended, respawning
> [6595400.969346] init: ceph-osd (ceph/1) main process (21990) terminated
> with status 1
> [6595400.969352] init: ceph-osd (ceph/1) respawning too fast, stopped
> "
>
> :~$ ceph osd tree
> ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1 1.09477 root default
> -2 0.61818    host strony-tc
>  0 0.20000        osd.0        down        0          1.00000
>  1 0.21819        osd.1        down        0          1.00000
>  4 0.20000        osd.4          up  1.00000          1.00000
> -3 0.47659    host strony-pc
>  2 0.23830        osd.2        down        0          1.00000
>  3 0.23830        osd.3        down        0          1.00000
>
> :~$ cat /etc/ceph/ceph.conf
> [global]
> fsid = 60638bfd-1eea-46d5-900d-36224475d8aa
> mon_initial_members = strony-tc
> mon_host = 10.132.141.122
> auth_cluster_required = cephx
> auth_service_required = cephx
> auth_client_required = cephx
> osd_pool_default_size = 2
>
> Thanks,
> Strony



Greetings.

I have a somewhat similar problem: OSDs that are just a single disk start
on boot, but OSDs that are software RAID md devices do not start
automatically on boot.

In order to mount and start them I have to run

ceph-disk-activate /dev/md127p1

where /dev/md127p1 is the XFS partition for the OSD.
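
As a workaround (a rough sketch only, assuming the md partition name stays
stable across reboots and that your system still runs /etc/rc.local at boot),
the same command can be run automatically, e.g.:

# in /etc/rc.local -- activate md-backed OSD partitions that are not
# picked up automatically at boot (adjust the device path to your setup)
ceph-disk-activate /dev/md127p1
exit 0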

good luck
Ronny Aasen



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
