Re: PG not getting clean

Hi Karan,

Have you tried to first identify which PGs are in these states? For example: ceph pg dump | grep -E 'peering|down\+peering|remapped\+peering'

This might point you to a single OSD common to all of them, or to a few specific ones. If that’s the case, restart the OSDs holding those PGs one after the other, depending on how many OSDs are involved.
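For reference, here is a minimal sketch of how that PG-to-OSD mapping and the rolling restart might look; the PG id 3.7f and osd.4 below are just placeholders, and the restart line assumes a sysvinit-style install, so adjust it to your init system:

  # list the PGs that are stuck inactive / unclean
  ceph pg dump_stuck inactive
  ceph pg dump_stuck unclean

  # for one stuck PG (placeholder id 3.7f), show which OSDs hold it
  ceph pg map 3.7f
  ceph pg 3.7f query     # "up"/"acting" list the OSDs, "recovery_state" shows why it is stuck

  # restart the affected OSDs one at a time (sysvinit example, placeholder osd.4)
  service ceph restart osd.4

  # watch the PG states settle before touching the next OSD
  ceph -w

Waiting for the cluster to settle between restarts avoids kicking off peering on many PGs at once.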

One other question: are these physical nodes or VMs you are using for a test? I have sometimes seen this behavior after hibernating a VM and resuming it.

JC



On Feb 14, 2014, at 08:58, Karan Singh <karan.singh@xxxxxx> wrote:

> Hello Cephers
> 
> I am struggling with my Ceph cluster health: the PGs are not getting clean. I waited for the recovery process to finish, hoping the PGs would become clean afterwards, but they didn’t. Can you please share your suggestions?
> 
>    cluster 0ff473d9-0670-42a3-89ff-81bbfb2e676a
>     health HEALTH_WARN 119 pgs down; 303 pgs peering; 303 pgs stuck inactive; 303 pgs stuck unclean; mds cluster is degraded; crush map has non-optimal tunables
>     monmap e3: 3 mons at {ceph-mon1=192.168.1.38:6789/0,ceph-mon2=192.168.1.33:6789/0,ceph-mon3=192.168.1.31:6789/0}, election epoch 4226, quorum 0,1,2 ceph-mon1,ceph-mon2,ceph-mon3
>     mdsmap e8465: 1/1/1 up {0=ceph-mon1=up:replay}
>     osdmap e250466: 10 osds: 10 up, 10 in
>      pgmap v585809: 576 pgs, 6 pools, 101933 MB data, 25453 objects
>            343 GB used, 5423 GB / 5767 GB avail
>                 273 active+clean
>                 108 peering
>                 119 down+peering
>                  76 remapped+peering
> 
> 
> # id	weight	type name	up/down	reweight
> -1	5.65	root default
> -2	0		host ceph-node1
> -3	1.72		host ceph-node2
> 4	0.43			osd.4	up	1
> 5	0.43			osd.5	up	1
> 6	0.43			osd.6	up	1
> 7	0.43			osd.7	up	1
> -4	1.31		host ceph-node4
> 8	0.88			osd.8	up	1
> 1	0.43			osd.1	up	1
> -5	1.31		host ceph-node5
> 9	0.88			osd.9	up	1
> 2	0.43			osd.2	up	1
> -6	0.88		host ceph-node6
> 10	0.88			osd.10	up	1
> -7	0.43		host ceph-node3
> 0	0.43			osd.0	up	1
> 
> 
> 
> Regards
> karan
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




