inconsistent pgs

On Mon, Jul 7, 2014 at 4:39 PM, James Harper <james at ejbdigital.com.au> wrote:
>>
>> You can look at which OSDs the PGs map to. If the PGs have
>> insufficient replica counts they'll report as degraded in "ceph -s" or
>> "ceph -w".
>
> I meant in a general sense. If I have a pg that I suspect might be insufficiently redundant I can look that up, but I'd like to know in advance any pgs that do not have the required spread across osds and nodes.

Any PG that is not replicated according to the dictates of the CRUSH
map will also be marked as "degraded". If you have PGs that aren't
placed the way you expect and they aren't marked degraded, then your
CRUSH map isn't set up the way you think it is.

I recommend going over ceph.com/docs and looking at all the pages about CRUSH.
-Greg

>
> Ideally the CRUSH map would ensure the highest level of redundancy, right? No PG should have two replicas on the same OSD, and if there are OSDs with sufficient capacity on other nodes, no PG should have two replicas on OSDs in the same node. The same presumably applies at the other levels of the hierarchy (rack, etc.) too. Is there a health check I can run that will tell me my cluster is laid out as it should be?
>
> Thanks
>
> James
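For an offline sanity check along the lines James is asking about, one option is to dump the PG-to-OSD mappings and compare each acting set against your own OSD-to-host table. Below is a minimal sketch of that idea; the `pg_map` and `osd_host` dictionaries are hypothetical sample data standing in for whatever you'd parse out of "ceph pg dump" and "ceph osd tree" on a real cluster:

```python
# Sketch: flag PGs whose replicas don't span distinct OSDs/hosts.
# pg_map and osd_host below are made-up sample inputs; in practice
# you would build them from "ceph pg dump" and "ceph osd tree".

def find_bad_pgs(pg_map, osd_host):
    """Return {pg: [problems]} for PGs whose acting set repeats an OSD or a host."""
    bad = {}
    for pg, osds in pg_map.items():
        problems = []
        if len(set(osds)) < len(osds):
            problems.append("duplicate OSD")
        hosts = [osd_host[o] for o in osds]
        if len(set(hosts)) < len(hosts):
            problems.append("replicas share a host")
        if problems:
            bad[pg] = problems
    return bad

# Hypothetical cluster: four OSDs across three hosts.
osd_host = {0: "node-a", 1: "node-a", 2: "node-b", 3: "node-c"}
pg_map = {
    "1.0": [0, 2, 3],   # OK: three OSDs on three distinct hosts
    "1.1": [0, 1, 2],   # OSDs 0 and 1 live on the same host
    "1.2": [2, 2, 3],   # same OSD listed twice
}

for pg, problems in sorted(find_bad_pgs(pg_map, osd_host).items()):
    print(pg, problems)
```

As Greg notes, a correctly written CRUSH map should make the second and third cases impossible in the first place (and flag them as degraded if they occur), so this kind of script is mostly a belt-and-braces check that the map does what you think it does.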

