Re: Reducing cluster size

To answer my own question: the OSD must first be reweighted before setting it out. So first 'ceph osd crush reweight osd.X 0', then 'ceph osd out X', and then proceed with removing the OSD from the crushmap and the cluster.
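For reference, the full sequence I ended up with looks roughly like this (a sketch only; replace X with the OSD id, and the command for stopping the daemon depends on your init system):

    # drain the OSD so its data migrates elsewhere
    ceph osd crush reweight osd.X 0
    ceph osd out X

    # wait for the cluster to return to active+clean, then stop the
    # OSD daemon on its host (e.g. 'systemctl stop ceph-osd@X' or
    # 'service ceph stop osd.X', depending on the distro)

    # remove it from the CRUSH map, delete its auth key, remove the OSD
    ceph osd crush remove osd.X
    ceph auth del osd.X
    ceph osd rm X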

I don't know if this is the normal behaviour, but shouldn't the reweighting be done automatically when setting an OSD out?
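(In case it helps: as far as I understand it, 'ceph osd out X' only sets the OSD's reweight to 0 in the OSD map, while its CRUSH weight stays unchanged; 'ceph osd crush reweight osd.X 0' changes the CRUSH weight itself, which is what drained the data cleanly here. You can see both values side by side with:

    ceph osd tree    # WEIGHT is the CRUSH weight, REWEIGHT is the in/out value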

2016-01-27 12:40 GMT+02:00 Mihai Gheorghe <mcapsali@xxxxxxxxx>:
Hi,

I have a ceph cluster consisting of 4 hosts. Two of them have 3 SSD OSDs each and the other two have 8 HDD OSDs each. I have different CRUSH rules for SSD and HDD.

When I first made the cluster, I gave only one SSD for journaling to all 8 HDD OSDs on the host. The host has 10 SATA ports: one is used for the OS, one for journaling and 8 for OSDs. Now I want to add another journal SSD to each of the two hosts, so I need to remove one HDD OSD from each host.

Following the docs, I set an OSD as out and the cluster starts rebalancing data. My problem is that it never reaches the active+clean state; I always end up with some PGs stuck unclean. If I bring the OSD back in, the cluster returns to active+clean (with a 'too many PGs per OSD (431 > max 300)' warning).
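For anyone hitting the same thing, these are standard ceph commands (nothing cluster-specific assumed) for seeing exactly which PGs are stuck and where:

    ceph -s                       # overall cluster and recovery status
    ceph health detail            # details behind the warnings, including stuck PGs
    ceph pg dump_stuck unclean    # list PGs stuck unclean and the OSDs they map to
    ceph osd tree                 # CRUSH weights and which OSDs are in/out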

I run an MDS server as well, and radosgw.

What could be the problem? How can I shrink the cluster to add 2 more journals?

Should I restart the mons and OSDs after rebalancing?

Thank you!

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
