Re: ceph OSD down+out => health ok => remove => PGs backfilling... ?

Hi,

On 04/26/2016 12:32 PM, SCHAER Frederic wrote:

Hi,

 

One simple/quick question.

In my ceph cluster, I had a disk which was in predicted failure. It was so much in predicted failure that the ceph OSD daemon crashed.

 

After the OSD crashed, ceph moved data correctly (or at least that's what I thought), and a "ceph -s" was giving a "HEALTH_OK".

Perfect.

I tried to tell ceph to mark the OSD down: it told me the OSD was already down… fine.

 

Then I ran this :

ID=43 ; ceph osd down $ID ; ceph auth del osd.$ID ; ceph osd rm $ID ; ceph osd crush remove osd.$ID


*snipsnap*

Removing the dead OSD entry changed the CRUSH weight for the host, resulting in a second redistribution of the data.

The easiest way to prevent this is to set the OSD's CRUSH weight to 0.0 before removing it. That way backfilling is triggered only once.
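
A possible sequence along those lines (just a sketch; osd.43 from your mail is used as the example ID, and the systemctl line assumes a systemd-managed OSD):

ID=43
ceph osd crush reweight osd.$ID 0.0    # drain the OSD first; this is the only backfill
# wait until "ceph -s" reports HEALTH_OK again
ceph osd out $ID
systemctl stop ceph-osd@$ID            # only needed if the daemon is still running
ceph osd crush remove osd.$ID          # weight is already 0.0, so no further data movement
ceph auth del osd.$ID
ceph osd rm $ID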

Regards,
Burkhard.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
