Re: ceph -s output

Check the states of the PGs with "ceph pg dump"; for every PG that is not "active+clean", run "ceph pg map <pgid>" to get the OSDs it maps to, then check the state of those OSDs by looking at their logs under /var/log/ceph/.
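A rough sketch of that loop (untested; the awk column positions assume the two leading PG_STAT/STATE columns of "pgs_brief" output, which may differ between Ceph releases, and osd.3 below is just an example OSD id):

    # list every PG that is not active+clean, together with its state
    ceph pg dump pgs_brief | grep -v 'active+clean'

    # map each problem PG to the OSDs it lives on
    for pg in $(ceph pg dump pgs_brief 2>/dev/null | awk '$2 != "active+clean" && $1 ~ /^[0-9]+\./ {print $1}'); do
        ceph pg map "$pg"
    done

    # then read the logs of the OSDs that "ceph pg map" reports, e.g. for osd.3:
    less /var/log/ceph/ceph-osd.3.log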

Regards,
Anand

On Mon, May 23, 2016 at 6:53 AM, Ken Peng <ken@xxxxxxxxxx> wrote:
Hi,

# ceph -s
    cluster 82c855ce-b450-4fba-bcdf-df2e0c958a41
     health HEALTH_ERR
            5 pgs inconsistent
            7 scrub errors
            too many PGs per OSD (318 > max 300)


The cluster reports HEALTH_ERR above. How can I fix these errors? Thanks.





