inconsistent PGs

> 
> You can look at which OSDs the PGs map to. If the PGs have
> insufficient replica counts they'll report as degraded in "ceph -s" or
> "ceph -w".

I meant in a general sense. If I have a PG that I suspect might be insufficiently redundant I can look it up, but I'd like to know in advance about any PGs that do not have the required spread across OSDs and nodes.

Ideally the CRUSH map would ensure the highest level of redundancy, right? No PG should have two replicas on the same OSD, and if there are OSDs on other nodes with sufficient capacity, then no PG should have two replicas on OSDs in the same node. Presumably the same holds for other levels in the hierarchy (rack, etc.) too. Is there a health check I can run that will tell me my cluster is all as it should be?
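In the meantime, something along these lines could approximate that check. It's only a minimal sketch in Python: it assumes the JSON layouts I've seen from "ceph pg dump" and "ceph osd tree" (pg_stats entries with pgid/acting, and a flat nodes list with host/children), which can differ between releases, so verify the field names against your own cluster first.

#!/usr/bin/env python3
# Sketch: flag PGs whose acting set repeats an OSD, or lands two
# replicas on OSDs under the same host. Field names are assumptions
# based on recent ceph releases; check them against your cluster.
import json
import subprocess
from collections import Counter


def ceph_json(*args):
    # Run a ceph subcommand and parse its JSON output.
    out = subprocess.check_output(["ceph"] + list(args) + ["--format", "json"])
    return json.loads(out)


def osd_to_host():
    # Walk the flat CRUSH node list and map each OSD id to its host.
    mapping = {}
    for node in ceph_json("osd", "tree")["nodes"]:
        if node.get("type") == "host":
            for child in node.get("children", []):
                mapping[child] = node["name"]
    return mapping


def main():
    hosts = osd_to_host()
    dump = ceph_json("pg", "dump")
    # Older releases expose pg_stats at the top level; newer ones
    # nest it under pg_map.
    stats = dump.get("pg_stats") or dump.get("pg_map", {}).get("pg_stats", [])
    for pg in stats:
        acting = pg["acting"]
        # Same OSD appearing twice in the acting set.
        if len(set(acting)) < len(acting):
            print("%s: duplicate OSDs in acting set %s" % (pg["pgid"], acting))
        # Two replicas on OSDs that share a host.
        per_host = Counter(hosts.get(osd, "unknown") for osd in acting)
        shared = [h for h, n in per_host.items() if n > 1]
        if shared:
            print("%s: replicas share host(s) %s (acting set %s)"
                  % (pg["pgid"], shared, acting))


if __name__ == "__main__":
    main()

Extending it to racks (or any other CRUSH level) would just mean walking one more level of the tree when building the mapping. But a built-in health check would obviously be preferable.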

Thanks

James

