Re: [URGENT-HELP] - Ceph rebalancing again after taking OSD out of CRUSH map

OK, thanks Wido.

Then can we at least update the documentation to say that MAJOR data rebalancing will happen AGAIN, and not 3% but 37% in my case?
Because I would never run this during work hours, while clients are hammering the VMs...

This reminds me of those tunables changes a couple of months ago, when my cluster completely collapsed during data rebalancing...

I don't see any option to contribute to the documentation?

Best




On 2 March 2015 at 16:07, Wido den Hollander <wido@xxxxxxxx> wrote:
On 03/02/2015 03:56 PM, Andrija Panic wrote:
> Hi people,
>
> I had one OSD crash, so the rebalancing happened - all fine (some 3% of the
> data was moved around and rebalanced), my previous recovery/backfill
> throttling was applied fine, and we didn't have an unusable cluster.
>
> Now I used the procedure to remove this crashed OSD completely from Ceph:
> http://ceph.com/docs/master/rados/operations/add-or-rm-osds/#removing-the-osd
>
> and when I used the "ceph osd crush remove osd.0" command, all of a sudden
> Ceph started to rebalance once again, this time with 37% of the objects
> "misplaced" - and based on the experience inside the VMs and the recovery
> rate in MB/s, I can tell that my throttling of backfill and recovery is
> not being taken into consideration.
>
> Why are 37% of all objects being moved around again? Any help, hint, or
> explanation greatly appreciated.
>

This has been discussed a couple of times on the list. If you remove an
item from the CRUSH map, even though it has a weight of 0, a rebalance
still happens, since the CRUSH map itself changes.
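
For anyone finding this in the archives: a commonly suggested sequence is
to drive the CRUSH weight to 0 first, let the cluster settle, and only then
remove the item, so that the bulk of the movement happens while your
throttles are in effect. A minimal sketch, assuming the dead OSD is osd.0
(as noted above, with pre-straw2 buckets the final removal can still
shuffle some data):

    # Take the OSD out and zero its CRUSH weight; most of the data
    # movement happens now, under the recovery/backfill throttles.
    ceph osd out 0
    ceph osd crush reweight osd.0 0

    # Once the cluster is back to active+clean, remove it for good.
    ceph osd crush remove osd.0
    ceph auth del osd.0
    ceph osd rm 0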

> This is Ceph 0.87.0 from the Ceph repo, of course. 42 OSDs in total after
> the crash etc.
>
> The throttling that I had applied beforehand is the following:
>
> ceph tell osd.* injectargs '--osd_recovery_max_active 1'
> ceph tell osd.* injectargs '--osd_recovery_op_priority 1'
> ceph tell osd.* injectargs '--osd_max_backfills 1'
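
Side note for the archive: injectargs only changes the running daemons, so
to make these throttles survive OSD restarts they can also be pinned in
ceph.conf. A minimal sketch, mirroring the values above:

    [osd]
    osd recovery max active = 1
    osd recovery op priority = 1
    osd max backfills = 1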
>
> Please advise...
> Thanks


--
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on



--

Andrija Panić
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
