Re: [ceph-commit] HEALTH_WARN 192 pgs degraded

On 25/10/12 17:55, Mark Nelson wrote:
On Wed, Oct 24, 2012 at 10:58 PM, Dan Mick <dan.mick@xxxxxxxxxxx> wrote:

  HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean; recovery 21/42
The other alternative is to just set the pool(s) replication size to 1,
if you only want a single osd for (say) testing:

$ ceph osd pool set <your pool(s)> size 1

I find I need to restart ceph after doing the above; it then sorts
itself out to a nice healthy status!
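
For a fresh setup there is more than one pool to do, of course - the defaults
created at install time are (I think) data, metadata and rbd, so something
like this should catch them all (pool names assumed, check with rados lspools
first):

$ rados lspools                                            # see which pools actually exist
$ for p in data metadata rbd; do ceph osd pool set $p size 1; done   # drop each to 1x replication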



I was actually just talking to Greg and Sam about this earlier today.  If
you rely on ceph health as part of an automated process to determine
whether or not tests should start running, having degraded PGs due to some
of the pools expecting 2x replication (when there is 1 OSD) is annoying.
It will go away if whatever default pools are created get manually set to
1x replication, but it's not something that is immediately obvious.  I
don't know that changing the defaults is necessarily the right answer.
Instead perhaps we just haven't done a good enough job of explaining what
pools get created, how they are used, and when/if they should be modified
in some way. Maybe this belongs in a FAQ?
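
For instance, the kind of gate being described is roughly (just a sketch -
the run_tests.sh entry point is made up):

$ # wait for the cluster to report healthy before kicking off any tests
$ while ! ceph health | grep -q HEALTH_OK; do sleep 5; done
$ ./run_tests.sh        # hypothetical test entry point, only reached once HEALTH_OK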




Ah yes - you are quite right, it is *not* required to restart ceph to make it sort out those stuck pgs after changing the size. I believe at some point (maybe < 0.50) it was, and I had gotten into the habit!
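
These days just changing the size and watching is enough, e.g. (using the rbd
pool as an example):

$ ceph osd pool set rbd size 1
$ ceph -w        # watch the degraded/stuck pgs clear by themselves - no restart needed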

+1 for adding a FAQ about the default pools and replication levels etc.

Cheers

Mark

