Re: PGs and OSDs unknown

Den fre 1 apr. 2022 kl 11:15 skrev Konold, Martin <martin.konold@xxxxxxxxxx>:
> Hi,
> running Ceph 16.2.7 on a pure NVME Cluster with 9 nodes I am
> experiencing "Reduced data availability: 448 pgs inactive".
>
> I cannot see any statistics or pool information with "ceph -s".

Since the cluster itself seems operational, chances are high that the
MGR(s) are just stuck. Try failing over to a standby and/or restarting
the mgr daemon and see if that fixes it.
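For reference, a mgr failover/restart can be sketched roughly like this
(the daemon and host names below are placeholders; adjust them to your
deployment):

```shell
# Fail the active mgr so a standby takes over. On Pacific (16.x) the
# active daemon is inferred; on older releases pass its name explicitly.
ceph mgr fail

# Check which mgr is active now and whether standbys are available.
ceph mgr stat

# If failover alone doesn't help, restart the daemon itself.
# cephadm deployment ("mgr.host1.abcdef" is an example daemon name):
ceph orch daemon restart mgr.host1.abcdef

# Package/systemd deployment ("host1" is an example mgr instance name):
systemctl restart ceph-mgr@host1
```

Once a fresh mgr is active, "ceph -s" should start reporting PG states
and pool statistics again within a minute or so.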

> The RBDs are still operational and "ceph report" shows the osds as
> expected.
>


-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


