Re: help

In addition to ceph -s, could you provide the output of
ceph osd tree
and specify what your failure domain is?
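
For reference, the failure domain can usually be read from the pool's CRUSH
rule, and the stuck PGs can be listed directly; the pool name "cephfs_data"
below is only a placeholder for whatever your data pool is actually called:

  ceph osd pool get cephfs_data crush_rule   # which CRUSH rule the pool uses
  ceph osd crush rule dump                   # the "type" in the chooseleaf step is the failure domain
  ceph pg dump_stuck undersized              # list the PGs that are currently stuck undersized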

/Heðin


On Thu, 2019-08-29 at 13:55 +0200, Janne Johansson wrote:
> 
> 
> Den tors 29 aug. 2019 kl 13:50 skrev Amudhan P <amudhan83@xxxxxxxxx>:
> > Hi,
> > 
> > I am using ceph version 13.2.6 (mimic) on a test setup, trying out
> > CephFS.
> > My ceph health status is showing a warning.
> > 
> > "ceph health"
> > HEALTH_WARN Degraded data redundancy: 1197023/7723191 objects
> > degraded (15.499%)
> > 
> > "ceph health detail"
> > HEALTH_WARN Degraded data redundancy: 1197128/7723191 objects
> > degraded (15.500%)
> > PG_DEGRADED Degraded data redundancy: 1197128/7723191 objects
> > degraded (15.500%)
> >     pg 2.0 is stuck undersized for 1076.454929, current state
> > active+undersized+
> >     pg 2.2 is stuck undersized for 1076.456639, current state
> > active+undersized+
> > 
> 
> How does "ceph -s" look?
> It should have more info on what else is wrong.
>  
> -- 
> May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx