Re: 2-Layer CRUSH Map Rule?

> > ceph osd setcrushmap -i /tmp/crush.new
> >
> > Note: If you are overwriting your current rule, your data will need to
> > rebalance as soon as you set the crushmap; close to 100% of your
> > objects will move. If you create a new rule, you can set your pool to
> > use the new rule whenever you are ready.
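(For completeness, the new-rule path might look roughly like the below;
the pool and rule names are only placeholders:

  # dump and decompile the current crushmap
  ceph osd getcrushmap -o /tmp/crush.bin
  crushtool -d /tmp/crush.bin -o /tmp/crush.txt
  # ... edit /tmp/crush.txt and add the new rule ...
  crushtool -c /tmp/crush.txt -o /tmp/crush.new
  ceph osd setcrushmap -i /tmp/crush.new
  # later, whenever you are ready, point the pool at the new rule
  ceph osd pool set mypool crush_rule mynewrule
)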

And if you set the "norebalance" flag just before this, on a Ceph
cluster that uses the upmap balancer, you can use one of the
pg-remapping tools *) to temporarily tell the cluster that all the
misplaced PGs are at the "correct" place for now. Then unset
norebalance, and the balancer will pick up roughly ~8 PGs at a time and
correct their placement until the whole pool is placed according to the
new rule. Good if you don't want the movement to impact client IO, but
if your cluster is new and not yet full of data, this might not be
needed.
Still, it is a good way to slowly move things around without impact,
also when adding or removing OSDs.
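
Roughly, assuming the balancer is already enabled in upmap mode and
upmap-remapped.py is available locally (paths are just examples):

  # pause rebalancing before injecting the new crushmap
  ceph osd set norebalance
  ceph osd setcrushmap -i /tmp/crush.new
  # print the proposed pg-upmap-items commands, review them, then apply
  ./upmap-remapped.py
  ./upmap-remapped.py | sh
  # let the balancer gradually remove the temporary upmaps again
  ceph osd unset norebalance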

*) https://github.com/HeinleinSupport/cern-ceph-scripts/blob/master/tools/upmap/upmap-remapped.py (python)
   or https://github.com/digitalocean/pgremapper (golang)
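
The pgremapper equivalent is, as far as I remember, its cancel-backfill
subcommand (check its --help for the exact flags before applying):

  # dry run first; I believe --yes is what actually writes the upmap entries
  pgremapper cancel-backfill
  pgremapper cancel-backfill --yes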


-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
