Hi,

>>> Wido den Hollander <wido@xxxxxxxx> wrote on Tuesday, 9 August 2016 at 10:05:
>> On 8 August 2016 at 16:45, Martin Palma <martin@xxxxxxxx> wrote:
>>
>> Hi all,
>>
>> we are in the process of expanding our cluster and would like to
>> know if there are any best practices for doing so.
>>
>> Our current cluster is composed as follows:
>> - 195 OSDs (14 storage nodes)
>> - 3 monitors
>> - Total capacity: 620 TB
>> - Used: 360 TB
>>
>> We will expand the cluster by another 14 storage nodes and 2 monitor
>> nodes, so we are doubling the current deployment:
>>
>> - OSDs: 195 --> 390
>> - Total capacity: 620 TB --> 1250 TB
>>
>> During the expansion we would like to minimize client impact and
>> data movement. Any suggestions?
>>
>
> There are a few routes you can take; I would suggest that you:
>
> - set max backfills to 1
> - set max recovery to 1
>
> Now add the OSDs to the cluster, but NOT to the CRUSH map.
>
> When all the OSDs are online, inject a new CRUSH map in which the new
> OSDs are added to the data placement:
>
> $ ceph osd setcrushmap -i <new crushmap>
>
> The OSDs will now start to migrate data, but this is throttled by the
> max recovery and max backfill settings.
>

Would the cluster behave differently if all new OSDs were first added
with an initial weight of 0, and the weights of all new OSDs were then
raised to their final values in a single step?

Steffen

> Wido
>
>> Best,
>> Martin

--
Klinik-Service Neubrandenburg GmbH
Allendestr. 30, 17036 Neubrandenburg
Amtsgericht Neubrandenburg, HRB 2457
Managing Director: Gudrun Kappich

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
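
[Editor's note] For reference, a rough sketch of the two approaches discussed above, assuming a Ceph release of that era (Hammer/Jewel) where these commands are available; the host name, OSD id, weight and file names are placeholders, adjust them to your cluster:

# Throttle recovery/backfill on all running OSDs first (persist the same
# values in ceph.conf so restarted OSDs keep them):
$ ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

# Route 1 (Wido): bring the new OSDs up without touching data placement,
# then edit the CRUSH map offline and inject it in one go.
$ ceph osd getcrushmap -o crushmap.bin
$ crushtool -d crushmap.bin -o crushmap.txt
# ... add the new hosts/OSDs with their target weights in crushmap.txt ...
$ crushtool -c crushmap.txt -o crushmap.new
$ ceph osd setcrushmap -i crushmap.new

# Route 2 (Steffen's question): add each new OSD with CRUSH weight 0 so it
# receives no data yet, then raise the weights to their final values.
$ ceph osd crush add osd.200 0 host=node15     # repeat per new OSD
$ ceph osd crush reweight osd.200 7.27         # per-OSD; each reweight triggers
                                               # its own remapping, so for a true
                                               # single step edit and re-inject
                                               # the CRUSH map as in route 1

# Watch the backfill progress:
$ ceph -s
$ ceph osd df

Both routes end up applying one large CRUSH change; the main practical difference with the weight-0 variant seems to be that you can verify every new OSD is up and in before any data starts moving.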