OSD removal rebalancing again

Hi All,

I just removed an OSD from our cluster following the steps at http://ceph.com/docs/master/rados/operations/add-or-rm-osds/.

First I marked the OSD out:

ceph osd out osd.0
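
While the OSD drains, progress can be watched with the usual status commands, something like:

ceph -w
ceph health detail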

This emptied the OSD, and the cluster health eventually came back to normal/OK with the OSD up and out; the whole process took about 2-3 hours. Before being marked out, osd.0 was using roughly 900 GB; after the rebalance its usage was down to about 150 MB.
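
(The per-OSD usage figures above come from checking the OSD's data directory; assuming the default data path, something like:

df -h /var/lib/ceph/osd/ceph-0

shows the space actually consumed on that OSD.)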

Once this was all OK, I stopped the OSD:

service ceph stop osd.0
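
At this point osd.0 should show as down and out in the output of:

ceph osd tree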

I checked the cluster health and all looked OK, then removed the OSD using the following commands:

ceph osd crush remove osd.0

ceph auth del osd.0

ceph osd rm 0
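
After these commands osd.0 no longer appears in the CRUSH map or the key list, which can be double-checked with something like:

ceph osd tree
ceph auth list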

Now our cluster reports:

health HEALTH_WARN 414 pgs backfill; 12 pgs backfilling; 19 pgs recovering; 344 pgs recovery_wait; 789 pgs stuck unclean; recovery 390967/10986568 objects degraded (3.559%)
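
The affected placement groups can be inspected with something like:

ceph pg dump_stuck unclean
ceph health detail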

Before running the removal commands, everything was "OK" and osd.0 had been emptied and seemingly rebalanced.

Any ideas why it's rebalancing again? Does removing the entry from the CRUSH map change the map enough to trigger a second remap, even though the OSD had already been drained?

We're using Ubuntu 12.04 with Ceph 0.80.8 and kernel 3.13.0-43-generic #72~precise1-Ubuntu SMP Tue Dec 9 12:14:18 UTC 2014 x86_64 GNU/Linux.

Regards,

Quenten Grasso

