Re: Rebalancing after modifying CRUSH map

This is done automatically: every time the CRUSH map changes, objects are moved around to match the new map.

Therefore, a typical procedure is:

- make sure ceph is HEALTH_OK
- ceph osd set noout
- ceph osd set norebalance
- edit crush map
- wait for peering to finish; all PGs must be active+clean
  (many PGs will also show as remapped)
- ceph osd unset norebalance
- ceph osd unset noout

Before doing the last two steps, verify that no PGs are incomplete and no objects are degraded. If they are, fix that first.
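The steps above can be sketched as a sequence of shell commands. This is only a sketch: run each step by hand and check cluster state in between rather than scripting it, and note that the decompile/edit/recompile cycle via crushtool is just one common way to edit the map (the filenames here are placeholders).

```shell
# Pre-flight: cluster must be HEALTH_OK before starting
ceph health

ceph osd set noout          # prevent OSDs from being marked out
ceph osd set norebalance    # hold back data migration while editing

# Edit the CRUSH map (decompile, edit, recompile, inject back)
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
"$EDITOR" crushmap.txt
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new

# Wait for peering to finish: PGs should be active+clean
# (many will also show as remapped)
ceph -s
ceph pg stat

# Only after verifying no incomplete PGs and no degraded objects:
ceph osd unset norebalance  # data starts migrating to new locations
ceph osd unset noout
```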

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Brett Randall <brett.randall@xxxxxxxxx>
Sent: 09 June 2020 07:42:33
To: ceph-users@xxxxxxx
Subject:  Rebalancing after modifying CRUSH map

Hi all


We are looking at implementing Ceph/CephFS for a project. Over time, we may wish to add additional replicas to our cluster. If we modify a CRUSH map, is there a way of then requesting Ceph to re-evaluate the placement of objects across the cluster according to the modified CRUSH map?

Brett
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



