Re: How to recover degraded objects?

On Fri, Nov 4, 2011 at 10:00 PM, Tommi Virtanen
<tommi.virtanen@xxxxxxxxxxxxx> wrote:
> On Fri, Nov 4, 2011 at 05:18, Atish Kathpal <atish.kathpal@xxxxxxxxx> wrote:
>> Thank you for providing further insights. To address what you pointed
>> out regarding the default replica counts, I have now run the
>> following on my single-node ceph cluster:
>> ceph osd pool set data size 1
>> ceph osd pool set metadata size 1
>>
>> This should keep the replica count for those pools at 1, I believe.
>> Correct me if I am mistaken.
>>
>> Further, while my cluster seems to be up and running and I am able to
>> do all get/put/setattr etc, I still see some issues with my ceph
>> health.
>> ===
>> root@atish-virtual-machine:/etc/ceph# ceph health
>> 2011-11-04 17:46:39.196118 mon <- [health]
>> 2011-11-04 17:46:39.197207 mon0 -> 'HEALTH_WARN 198 pgs degraded' (0)
>> ===
>>
>> Any insights on this? I am curious about the warning it throws and how
>> I could correct it.
>
> That is exactly the message you see when Ceph can't reach the desired
> number of copies for some Placement Groups.
>
> You can verify what the desired number of copies is set to (whether
> your "ceph osd pool set ... size 1" worked) by running
>
> ceph osd dump -o -|grep pg_size
>
> Please share that output with the list for further help.
>
> For more, see http://ceph.newdream.net/wiki/Adjusting_replication_level
>

The output you asked for is as follows:

root@atish-virtual-machine:/etc/ceph# ceph osd dump -o -|grep pg_size
 wrote 716 byte payload to -
pg_pool 0 'data' pg_pool(rep pg_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 lpg_num 2 lpgp_num 2 last_change 4 owner 0)
pg_pool 1 'metadata' pg_pool(rep pg_size 1 crush_ruleset 1 object_hash rjenkins pg_num 64 pgp_num 64 lpg_num 2 lpgp_num 2 last_change 5 owner 0)
pg_pool 2 'rbd' pg_pool(rep pg_size 2 crush_ruleset 2 object_hash rjenkins pg_num 64 pgp_num 64 lpg_num 2 lpgp_num 2 last_change 1 owner 0)
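
For what it is worth, the dump shows 'data' and 'metadata' at pg_size 1
now, while the 'rbd' pool is still at pg_size 2, which I assume accounts
for at least part of the degraded warning on a single-node cluster. If a
single copy is acceptable for that pool as well, the same adjustment
should apply (a sketch along the lines of the earlier commands, untested
here):

ceph osd pool set rbd size 1
ceph health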

Do let me know.

Thanks
Atish
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

