Re: How to recover degraded objects?

On Fri, Nov 4, 2011 at 4:23 PM, Henry C Chang <henry.cy.chang@xxxxxxxxx> wrote:
> Hi Atish,
>
> The default replication number in ceph is 2. Thus, if you have only
> one node (osd) in your cluster, all pgs/objects will surely be in a
> degraded state.
>
> As to the problem that you cannot put/get objects, I guess it's
> because of the re-mkcephfs issue Tommi mentioned.
>
> Henry
>
Hello Henry

Thank you for providing further insights. To address the point you
raised regarding the default replica count, I have now run the
following on my single-node ceph cluster:
ceph osd pool set data size 1
ceph osd pool set metadata size 1

This should keep these pools from expecting any additional replicas, I
believe. Correct me if I am mistaken.
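For what it's worth, something along these lines should confirm whether
the change took effect (treat it as a sketch; the exact flags and the
field names in the dump output may differ by version):
===
# dump the osd map and check the per-pool replication size
ceph osd dump -o - | grep 'rep size'
===
Any pool other than data and metadata would presumably still be at its
default size of 2.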

Further, while my cluster seems to be up and running and I am able to
do get/put/setattr operations, etc., I still see some issues with my
ceph health:
===
root@atish-virtual-machine:/etc/ceph# ceph health
2011-11-04 17:46:39.196118 mon <- [health]
2011-11-04 17:46:39.197207 mon0 -> 'HEALTH_WARN 198 pgs degraded' (0)
===
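For context, the kinds of operations I have been testing look roughly
like this (the pool, object name, and file paths are just examples):
===
rados -p data put test-obj /etc/hosts
rados -p data get test-obj /tmp/test-obj
rados -p data ls
===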

Any insights on this HEALTH_WARN? I am curious why it still reports 198
pgs degraded and how I could correct it.
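If it helps, I believe something like the following would show which
pgs are in the degraded state (the exact sub-commands and flags may
differ with the version, so this is just a sketch):
===
ceph -s
# dump per-pg state and pick out the degraded ones
ceph pg dump -o - | grep degraded
===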

Regards
Atish

> 2011/11/4 Atish Kathpal <atish.kathpal@xxxxxxxxx>:
>> On Thu, Nov 3, 2011 at 10:38 PM, Tommi Virtanen
>> <tommi.virtanen@xxxxxxxxxxxxx> wrote:
>>> On Wed, Nov 2, 2011 at 23:58, Atish Kathpal <atish.kathpal@xxxxxxxxx> wrote:
>>>> Moreover, I am also unable to create new objects and/or get/put the
>>>> degraded objects. I re-ran mkcephfs after my reboot.
>>>
>>> Well, if you re-ran mkcephfs, that wiped out your old data, so your
>>> earlier question is now moot. Did you shut down all the daemons
>>> before running mkcephfs? If not, expect them to be broken now. It's hard to
>>> guess what the state of your system is now; perhaps the easiest path
>>> out is to shut down all the daemons, remove all the ceph data ("osd
>>> data" and "mon data" dirs in ceph.conf), re-run mkcephfs, see that
>>> "ceph health" says ok, and then try the "rados" command again.
>>>
>>
>> Thanks for the insights. Yes, I guess I did too many things after the
>> reboot, including restarting the daemons and re-running mkcephfs.
>> So from your reply I understand that a system reboot would have done
>> nothing to my RADOS objects; it was the re-running of mkcephfs that
>> degraded my objects. Right?
>>
>> I am able to use the "rados" command again; I independently performed
>> some of the steps you mentioned. I have understandably lost all my old
>> objects, though.
>>
>> Thanks
>> Atish

