backfilling after OSD marked out _and_ OSD removed

Hi,
I’m slightly confused about something we are observing at the moment. We’re testing the shutdown and removal of OSD servers and noticed twice as much backfilling as expected. This is what we did (a rough command sketch follows the list):

1. service ceph stop on some OSD servers.
2. ceph osd out for the above OSDs (to avoid waiting for the down-to-out timeout)
— at this point, backfilling begins and finishes successfully after some time.
3. ceph osd rm for all of the above OSDs (this leaves the OSDs in the CRUSH map, marked DNE)
4. ceph osd crush rm for each of the above OSDs 
— step 4 triggers another round of rebalancing, even though those OSDs hold no data and all PGs were healthy beforehand.
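
For concreteness, the sequence looked roughly like this (just a sketch; the OSD IDs are placeholders, not the real ones):

  # 1. on each OSD host being retired:
  service ceph stop

  # 2. mark those OSDs out, so backfilling starts immediately instead of
  #    waiting for the down-to-out timeout:
  for id in 10 11 12; do ceph osd out $id; done

  # ... wait for backfilling to finish and the cluster to report HEALTH_OK ...

  # 3. remove the OSDs; they stay in the CRUSH map, shown as DNE:
  for id in 10 11 12; do ceph osd rm $id; done

  # 4. remove them from the CRUSH map too -- this is the step that triggers
  #    the second, unexpected round of rebalancing:
  for id in 10 11 12; do ceph osd crush rm osd.$id; done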

Is this expected? Is there a way to avoid the 2nd rebalance?
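
For example, would zeroing the CRUSH weights up front change anything, so that all of the data movement happens in a single pass? Roughly (again with placeholder OSD IDs):

  # set the CRUSH weight to 0 first, so the data moves off the OSDs once:
  for id in 10 11 12; do ceph osd crush reweight osd.$id 0; done

  # ... wait for backfilling to finish ...

  # then out/rm/crush rm the (now empty, zero-weight) OSDs:
  for id in 10 11 12; do
      ceph osd out $id
      ceph osd rm $id
      ceph osd crush rm osd.$id
  done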

Best Regards,
Dan van der Ster
CERN IT
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




