Re: Removing OSD - double rebalance?

1) If you still have the original, working drive and just want to replace it, you can "dd" it over to the new drive and then extend the partition if the new one is larger. This avoids double backfilling (see the sketch below).
2) If the old drive is dead, you should "out" it and add the new drive at the same time, so the data only moves once.
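
For case 1, here is a minimal sketch. The device names /dev/sdX (old drive) and /dev/sdY (new drive) are placeholders, and the stop command assumes a systemd host; double-check everything before running a destructive copy:

    # stop the OSD so the old drive is quiescent before copying
    systemctl stop ceph-osd@<id>
    # clone the old drive onto the new one, block for block
    dd if=/dev/sdX of=/dev/sdY bs=4M conv=noerror,sync
    # if the new drive is larger, grow the partition and filesystem
    # afterwards (e.g. parted + xfs_growfs, depending on your setup)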

If you reweight the drive to zero, you shuffle all of its data onto the rest of the drives on that host (with the default CRUSH rules, at least), so you need enough free space there to do that safely.
Also, Ceph is not smart enough to backfill the data only to the new drive locally (even though it could), and the CRUSH placement algorithm doesn't really guarantee that no other data moves when you swap drives like that.
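
For reference, the single-rebalance removal that Burkhard describes below looks roughly like this (a sketch only; osd.<id> stands for the OSD you are retiring):

    # drop the CRUSH weight to zero first - this triggers the one and only data movement
    ceph osd crush reweight osd.<id> 0.0
    # wait for backfilling to finish and the cluster to report HEALTH_OK, then:
    ceph osd out <id>
    systemctl stop ceph-osd@<id>    # or your init system's equivalent
    # the weight is already 0, so removing the OSD from CRUSH moves no data
    ceph osd crush remove osd.<id>
    ceph auth del osd.<id>
    ceph osd rm <id>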

TL;DR - if you can, deal with the additional load

Jan

> On 02 Dec 2015, at 11:59, Andy Allan <gravitystorm@xxxxxxxxx> wrote:
> 
> On 30 November 2015 at 09:34, Burkhard Linke
> <Burkhard.Linke@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:
>> On 11/30/2015 10:08 AM, Carsten Schmitt wrote:
> 
>>> But after entering the last command, the cluster starts rebalancing again.
>>> 
>>> And that I don't understand: Shouldn't be one rebalancing process enough
>>> or am I missing something?
>> 
>> Removing the OSD changes the weight for the host, thus a second rebalance is
>> necessary.
>> 
>> The best practice to remove an OSD involves changing the crush weight to 0.0
>> as first step.
> 
> I found this out the hard way too. It's unfortunate that the
> documentation is, in my mind, not helpful on the order of commands to
> run.
> 
> http://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-osds-manual
> 
> Is there any good reason why the documentation recommends this
> double-rebalance approach? Or conversely, any reason not to change the
> documentation so that rebalances only happen once?
> 
> Thanks,
> Andy

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


