Re: Question about ceph-balancer and OSD reweights

A while ago - before the ceph balancer existed, probably on Jewel -
we had a bunch of disks with different reweights to help control PG placement.
Then we upgraded to Luminous.
All our disks are the same, so we set them all back to 1.0 and let them fill accordingly.
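
In case it helps, this was roughly the procedure (a sketch from memory;
the osd id below is just an example, substitute your own):

  # eyeball the REWEIGHT column for anything that is not 1.00000
  ceph osd df tree

  # set each stray OSD back to full weight (id 12 is a made-up example)
  ceph osd reweight 12 1.0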
 
Then we ran the balancer about 4-5 times, letting each run finish before starting the next. It worked great, though it took a while.
Note that when the balancer kicks off it can move a lot of data and involve a lot of objects.
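
If it helps, the sequence was roughly this (a sketch; assumes Luminous
or newer and that all clients can speak upmap):

  # upmap requires luminous-or-newer clients
  ceph osd set-require-min-compat-client luminous

  ceph balancer mode upmap
  ceph balancer on

  # watch progress - 'ceph -s' shows the misplaced object percentage
  ceph balancer status
  ceph -s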
 
We're currently using it to help evacuate and redeploy hosts.
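
For a host, that pattern is roughly (again a sketch - the osd ids are
examples, use the ids actually on the host):

  # mark the host's OSDs out so their PGs drain elsewhere
  ceph osd out 20 21 22 23

  # once backfill settles, confirm the OSDs hold no data that is
  # still needed before tearing the host down
  ceph osd safe-to-destroy 20 21 22 23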
 
HTH  Joe

>>> shubjero <shubjero@xxxxxxxxx> 2/28/2020 11:43 AM >>>
I talked to some guys on IRC about going over the OSDs with non-1
reweights and setting them back to 1.

I went from a standard deviation of 2+ to 0.5.
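
(Those numbers are the STDDEV summary that 'ceph osd df' prints at
the bottom.)

  # per-OSD utilization, with MIN/MAX VAR and STDDEV at the end
  ceph osd df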

Awesome.

On Wed, Feb 26, 2020 at 10:08 AM shubjero <shubjero@xxxxxxxxx> wrote:
>
> Right, but should I be proactively returning any reweighted OSDs that
> are not 1.0000 back to 1.0000?
>
> On Wed, Feb 26, 2020 at 3:36 AM Konstantin Shalygin <k0ste@xxxxxxxx> wrote:
> >
> > On 2/26/20 3:40 AM, shubjero wrote:
> > > I'm running a Ceph Mimic cluster 13.2.6 and we use the ceph-balancer
> > > in upmap mode. This cluster is fairly old and pre-Mimic we used to set
> > > osd reweights to balance the standard deviation of the cluster. Since
> > > moving to Mimic about 9 months ago I enabled the ceph-balancer with
> > > upmap mode and let it do its thing but I did not think about setting
> > > the previously modified reweights back to 1.00000 (not sure if this is
> > > fine or would have been a best practice?)
> > >
> > > Does the ceph-balancer in upmap mode manage the osd reweight
> > > dynamically? Just wondering if I need to proactively go back and set
> > > all non-1.00000 reweights back to 1.00000.
> >
> > The balancer in upmap mode should always operate on non-reweighted
> > (i.e. 1.0000) OSDs.
> >
> >
> >
> > k
> >
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
