Re: How to recover degraded objects?

On Fri, Nov 4, 2011 at 05:18, Atish Kathpal <atish.kathpal@xxxxxxxxx> wrote:
> Thank you for providing further insights. To correct what you pointed
> out regarding the default replica counts, I have now done the
> following on my single-node ceph cluster:
> ceph osd pool set data size 1
> ceph osd pool set metadata size 1
>
> This should keep the number of extra replicas at 0 (a single copy), I believe.
> Correct me if I am mistaken.
>
> Further, while my cluster seems to be up and running and I am able to
> do all get/put/setattr etc, I still see some issues with my ceph
> health.
> ===
> root@atish-virtual-machine:/etc/ceph# ceph health
> 2011-11-04 17:46:39.196118 mon <- [health]
> 2011-11-04 17:46:39.197207 mon0 -> 'HEALTH_WARN 198 pgs degraded' (0)
> ===
>
> Any insights on this? I am curious about the warning it throws and how
> I could correct it.

That is exactly the message you see when Ceph cannot reach the desired
number of copies for some placement groups (PGs).
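
If it helps narrow things down, something like the following should show
the overall PG state and list the degraded PGs; this is only a rough
sketch, and the exact output fields can differ between Ceph versions:

ceph -s
ceph pg dump -o - | grep degraded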

You can verify what the desired number of copies is set to (whether
your "ceph osd pool set ... size 1" worked) by running

ceph osd dump -o - | grep pg_size

Please share that output with the list for further help.
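
In case the size change did not take effect on both pools, a rough
sequence to re-apply it and watch the cluster converge could look like
this (assuming the same 'data' and 'metadata' pool names as above; flag
syntax and output may vary between versions):

ceph osd pool set data size 1
ceph osd pool set metadata size 1
ceph osd dump -o - | grep pg_size
ceph -w    # watch until the degraded PG count drops and health clears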

For more, see http://ceph.newdream.net/wiki/Adjusting_replication_level

