Re: how to recover the osd.

Earlier it was created properly. After rebooting the host the mount points were gone, which is why the earlier ls command showed nothing. I have now mounted them again and can see the same folder structure on both OSDs:

sadhu@ubuntu3:/var/lib/ceph$ ls /var/lib/ceph/osd/ceph-1
activate.monmap  active  ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  upstart  whoami
sadhu@ubuntu3:/var/lib/ceph$ ls /var/lib/ceph/osd/ceph-0
activate.monmap  active  ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  upstart  whoami
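
Since the mounts did not survive the reboot, adding them to /etc/fstab would keep this from happening again. A minimal sketch, assuming the OSD data partitions are /dev/sdb1 and /dev/sdc1 on xfs (both assumptions; check your devices with blkid, and UUID= entries are safer than raw device names):

# /etc/fstab entries (device names and xfs are assumptions)
/dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  defaults,noatime  0  0
/dev/sdc1  /var/lib/ceph/osd/ceph-1  xfs  defaults,noatime  0  0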
sadhu@ubuntu3:/var/lib/ceph$ mount


sadhu@ubuntu3:/var/lib/ceph$ ceph osd stat
e31: 2 osds: 2 up, 2 in

However, ceph health still shows a warning:

sadhu@ubuntu3:/var/lib/ceph$ ceph health
HEALTH_WARN 225 pgs degraded; 676 pgs stuck unclean; recovery 21/124 degraded (16.935%); mds ceph@ubuntu3 is laggy
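
To see exactly which PGs are degraded or stuck and to watch recovery progress, the usual next steps are the ones below (a sketch; the restart line assumes the upstart-managed packages that ceph-deploy installs on Ubuntu):

ceph health detail          # lists each degraded/stuck pg and the reason
ceph -w                     # stream cluster events while recovery runs
sudo restart ceph-mds-all   # restart the laggy mds daemon(s)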

Thanks
sadhu



-----Original Message-----
From: Mike Dawson [mailto:mike.dawson@xxxxxxxxxxxx] 
Sent: 08 August 2013 22:08
To: Suresh Sadhu
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  how to recover the osd.


On 8/8/2013 12:30 PM, Suresh Sadhu wrote:
> Thanks Mike, please find the output of the two commands:
>
> sadhu@ubuntu3:~$ ls /var/lib/ceph/osd/ceph-0

^^^ that is a problem. It appears that osd.0 didn't get deployed properly. To see an example of what structure should be there, do:

ls /var/lib/ceph/osd/ceph-1

ceph-0 should be similar to the apparently working ceph-1 on your cluster.

It should look similar to:

#ls /var/lib/ceph/osd/ceph-0
ceph_fsid
current
fsid
keyring
magic
ready
store_version
whoami

- Mike
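
For reference, if osd.0's data directory really were empty, a common way out would be to remove the broken OSD and recreate it. A sketch, assuming a ceph-deploy install and that the disk (/dev/sdb is a placeholder) holds nothing you need:

ceph osd out 0                      # let the cluster rebalance away from it
ceph osd crush remove osd.0         # drop it from the CRUSH map
ceph auth del osd.0                 # delete its authentication key
ceph osd rm 0                       # remove the osd id itself
ceph-deploy disk zap ubuntu3:sdb    # wipe the disk (destroys its contents)
ceph-deploy osd create ubuntu3:sdb  # prepare and activate a fresh osd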

> sadhu@ubuntu3:~$ cat /etc/ceph/ceph.conf
> [global]
> fsid = 593dac9e-ce55-4803-acb4-2d32b4e0d3be
> mon_initial_members = ubuntu3
> mon_host = 10.147.41.3
> #auth_supported = cephx
> auth cluster required = cephx
> auth service required = cephx
> auth client required = cephx
> osd_journal_size = 1024
> filestore_xattr_use_omap = true
>
> -----Original Message-----
> From: Mike Dawson [mailto:mike.dawson@xxxxxxxxxxxx]
> Sent: 08 August 2013 18:50
> To: Suresh Sadhu
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  how to recover the osd.
>
> Looks like you didn't get osd.0 deployed properly. Can you show:
>
> - ls /var/lib/ceph/osd/ceph-0
> - cat /etc/ceph/ceph.conf
>
>
> Thanks,
>
> Mike Dawson
> Co-Founder & Director of Cloud Architecture
> Cloudapt LLC
> 6330 East 75th Street, Suite 170
> Indianapolis, IN 46250
>
> On 8/8/2013 9:13 AM, Suresh Sadhu wrote:
>> HI,
>>
>> My storage cluster's health is in a warning state. One of the OSDs is
>> down, and even when I try to start it, it fails to start:
>>
>> sadhu@ubuntu3:~$ ceph osd stat
>>
>> e22: 2 osds: 1 up, 1 in
>>
>> sadhu@ubuntu3:~$ ls /var/lib/ceph/osd/
>>
>> ceph-0  ceph-1
>>
>> sadhu@ubuntu3:~$ ceph osd tree
>>
>> # id    weight  type name       up/down reweight
>>
>> -1      0.14    root default
>>
>> -2      0.14            host ubuntu3
>>
>> 0       0.06999                 osd.0   down    0
>>
>> 1       0.06999                 osd.1   up      1
>>
>> sadhu@ubuntu3:~$ sudo /etc/init.d/ceph -a start 0
>>
>> /etc/init.d/ceph: 0. not found (/etc/ceph/ceph.conf defines , 
>> /var/lib/ceph defines )
>>
>> sadhu@ubuntu3:~$ sudo /etc/init.d/ceph -a start osd.0
>>
>> /etc/init.d/ceph: osd.0 not found (/etc/ceph/ceph.conf defines , 
>> /var/lib/ceph defines )
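
The sysvinit script only knows about daemons that have sections in /etc/ceph/ceph.conf; OSDs deployed with ceph-deploy on Ubuntu are managed by upstart instead (hence the upstart marker file in the OSD data directory), so the start command would likely be:

sudo start ceph-osd id=0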
>>
>> Ceph health status is in warning mode:
>>
>> pg 4.10 is active+degraded, acting [1]
>>
>> pg 3.17 is active+degraded, acting [1]
>>
>> pg 5.16 is active+degraded, acting [1]
>>
>> pg 4.17 is active+degraded, acting [1]
>>
>> pg 3.10 is active+degraded, acting [1]
>>
>> recovery 62/124 degraded (50.000%)
>>
>> mds.ceph@ubuntu3 at 10.147.41.3:6803/2148 is laggy/unresponsive
>>
>> regards
>>
>> sadhu
>>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



