active+recovering+degraded after cluster reboot

Hi All

I have what feels like a bit of a rookie question.

I shut down a Luminous 12.2.1 cluster with noout, nobackfill, and norecover set.
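
For reference, I set the flags beforehand with the usual commands:

    ceph osd set noout
    ceph osd set nobackfill
    ceph osd set norecover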

Before shutting down, all PGs were active+clean.

I brought the cluster back up; all daemons started, and all but two PGs are active+clean.

The two remaining PGs are showing "active+recovering+degraded".

It's been reporting this for about an hour with no sign of clearing on its own.
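
(I can pull the IDs of the two PGs with something like:

    ceph pg dump_stuck degraded

and post a ceph pg <pgid> query for each if that would help.)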

ceph health detail shows:

    PG_DEGRADED Degraded data redundancy: 2/131709267 objects degraded (0.000%), 2 pgs unclean, 2 pgs degraded

I've tried restarting MONs and all OSDs in the cluster.
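
(Nothing fancy there; just systemd restarts on each host, along the lines of:

    systemctl restart ceph-mon.target
    systemctl restart ceph-osd.target

assuming the stock unit names from the Luminous packages.)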

How would you recommend I proceed at this point?

Thanks
David




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
