Re: osd out

If you are using the default pool configuration (3 replicas), then after marking one OSD out only 2 OSDs remain in. CRUSH cannot find enough OSDs (it needs at least 3) to map each PG, so the PGs get stuck unclean; that is why all 128 PGs show active+remapped in your output.
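
For example, to let the PGs go clean on the two remaining OSDs, you can lower the pool's replica count. A minimal sketch, assuming the default pool name "rbd" (check the real name with "ceph osd lspools" and substitute it):

# ceph osd lspools
# ceph osd pool set rbd size 2

After that CRUSH can map all 128 PGs onto the two in OSDs and they should return to active+clean. Alternatively, add a fourth OSD so that three replicas can still be placed. A sketch of the full osd.0 removal sequence follows the quoted message below.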


Thanks,
Guang


----------------------------------------
> From: chmind@xxxxxxxxx
> Date: Wed, 12 Aug 2015 19:46:01 +0300
> To: ceph-users@xxxxxxxxxxxxxx
> Subject: osd out
>
> Hello.
> Could you please help me remove an OSD from the cluster?
>
> # ceph osd tree
> ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1 0.02998 root default
> -2 0.00999     host ceph1
>  0 0.00999         osd.0       up  1.00000          1.00000
> -3 0.00999     host ceph2
>  1 0.00999         osd.1       up  1.00000          1.00000
> -4 0.00999     host ceph3
>  2 0.00999         osd.2       up  1.00000          1.00000
>
>
> # ceph -s
> cluster 64f87255-d56e-499d-8ebc-65e0f577e0aa
> health HEALTH_OK
> monmap e1: 3 mons at {ceph1=10.0.0.101:6789/0,ceph2=10.0.0.102:6789/0,ceph3=10.0.0.103:6789/0}
> election epoch 10, quorum 0,1,2 ceph1,ceph2,ceph3
> osdmap e76: 3 osds: 3 up, 3 in
> pgmap v328: 128 pgs, 1 pools, 10 bytes data, 1 objects
> 120 MB used, 45926 MB / 46046 MB avail
> 128 active+clean
>
>
> # ceph osd out 0
> marked out osd.0.
>
> # ceph -w
> cluster 64f87255-d56e-499d-8ebc-65e0f577e0aa
> health HEALTH_WARN
> 128 pgs stuck unclean
> recovery 1/3 objects misplaced (33.333%)
> monmap e1: 3 mons at {ceph1=10.0.0.101:6789/0,ceph2=10.0.0.102:6789/0,ceph3=10.0.0.103:6789/0}
> election epoch 10, quorum 0,1,2 ceph1,ceph2,ceph3
> osdmap e79: 3 osds: 3 up, 2 in; 128 remapped pgs
> pgmap v332: 128 pgs, 1 pools, 10 bytes data, 1 objects
> 89120 kB used, 30610 MB / 30697 MB avail
> 1/3 objects misplaced (33.333%)
> 128 active+remapped
>
> 2015-08-12 18:43:12.412286 mon.0 [INF] pgmap v332: 128 pgs: 128 active+remapped; 10 bytes data, 89120 kB used, 30610 MB / 30697 MB avail; 1/3 objects misplaced (33.333%)
> 2015-08-12 18:43:20.362337 mon.0 [INF] HEALTH_WARN; 128 pgs stuck unclean; recovery 1/3 objects misplaced (33.333%)
> 2015-08-12 18:44:15.055825 mon.0 [INF] pgmap v333: 128 pgs: 128 active+remapped; 10 bytes data, 89120 kB used, 30610 MB / 30697 MB avail; 1/3 objects misplaced (33.333%)
>
>
> and it never becomes active+clean.
> What am I doing wrong?
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
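
For completeness, once the cluster can satisfy the replica count again (pool size lowered to 2, or a replacement OSD added), the usual sequence to remove osd.0 permanently is roughly the following sketch of the standard procedure (the stop command depends on the init system; on systemd hosts it would be "systemctl stop ceph-osd@0"):

# ceph osd out 0
# service ceph stop osd.0 (run on the host ceph1)
# ceph osd crush remove osd.0
# ceph auth del osd.0
# ceph osd rm 0

The last step removes the OSD from the osdmap, after which it no longer appears in "ceph osd tree".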
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



