Check the states of the PGs with "ceph pg dump"; for every PG that is not "active+clean", run "ceph pg map <pg_num>" to find the OSDs it maps to, then check the state of those OSDs by inspecting their logs under /var/log/ceph/.
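For example, a minimal sketch (the PG id 2.1f and osd.12 are placeholders for illustration; substitute the ids your own cluster reports):

# ceph health detail                      # lists each inconsistent PG by id
# ceph pg dump | grep -v 'active+clean'   # quick filter; header lines also pass through
# ceph pg map 2.1f                        # prints the up/acting OSD sets for that PG
# less /var/log/ceph/ceph-osd.12.log      # inspect the log of each acting OSD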
Regards,
Anand
On Mon, May 23, 2016 at 6:53 AM, Ken Peng <ken@xxxxxxxxxx> wrote:
Hi,
The cluster reports HEALTH_ERR, as shown below; how can we fix these errors? Thanks.
# ceph -s
cluster 82c855ce-b450-4fba-bcdf-df2e0c958a41
health HEALTH_ERR
5 pgs inconsistent
7 scrub errors
too many PGs per OSD (318 > max 300)
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
----------------------------------------------------------------------------
Never say never.