Re: Antw: Re: Best practices for extending a Ceph cluster with minimal client impact and data movement

> On 25 August 2016 at 12:14, Steffen Weißgerber <WeissgerberS@xxxxxxx> wrote:
> 
> Hi,
> 
> 
> >>> Wido den Hollander <wido@xxxxxxxx> wrote on Tuesday, 9 August 2016 at 10:05:
> 
> >> On 8 August 2016 at 16:45, Martin Palma <martin@xxxxxxxx> wrote:
> >> 
> >> 
> >> Hi all,
> >> 
> >> we are in the process of expanding our cluster and I would like to
> >> know if there are any best practices for doing so.
> >> 
> >> Our current cluster is composed as follows:
> >> - 195 OSDs (14 Storage Nodes)
> >> - 3 Monitors
> >> - Total capacity 620 TB
> >> - Used 360 TB
> >> 
> >> We will expand the cluster by another 14 Storage Nodes and 2 Monitor
> >> nodes, so we are doubling the current deployment:
> >> 
> >> - OSDs: 195 --> 390
> >> - Total capacity: 620 TB --> 1250 TB
> >> 
> >> During the expansion we would like to minimize the client impact and
> >> data movement. Any suggestions?
> >> 
> > 
> > There are a few routes you can take; I would suggest that you:
> > 
> > - set max backfills to 1
> > - set max recovery to 1
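> > 
> > For example, as a rough sketch (double-check the option names against your
> > release), these can be injected at runtime with:
> > 
> > $ ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'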
> > 
> > Now, add the OSDs to the cluster, but NOT to the CRUSHMap.
> > 
> > When all the OSDs are online, inject a new CRUSHMap where you add the new 
> > OSDs to the data placement.
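> > 
> > As a sketch (file names here are just placeholders), the new map is typically
> > prepared like this and then injected with the command below:
> > 
> > $ ceph osd getcrushmap -o crushmap.bin
> > $ crushtool -d crushmap.bin -o crushmap.txt
> >   (edit crushmap.txt and add the new hosts/OSDs to the desired buckets)
> > $ crushtool -c crushmap.txt -o crushmap.new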
> > 
> > $ ceph osd setcrushmap -i <new crushmap>
> > 
> > The OSDs will now start to migrate data, but this is throttled by the max 
> > recovery and backfill settings.
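> > 
> > The progress of that migration can be followed with the usual tools, e.g.:
> > 
> > $ ceph -s
> > $ ceph -w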
> > 
> 
> Would the cluster behave differently if all new OSDs were first added with an
> initial weight of 0 and the weight of all new OSDs was then set to its final
> value in one step?
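> 
> As a sketch (placeholder IDs, hostnames and weights), that would be something
> like:
> 
> $ ceph osd crush add osd.<id> 0 host=<hostname>
> 
> followed later by raising all weights to their final values, either per OSD with
> "ceph osd crush reweight osd.<id> <weight>" or in one go via an edited CRUSH map.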
> 

I think so, since it will still change the CRUSH topology and might trigger a rebalance.

I haven't tried that route before, so I'm not 100% sure.

Wido

> Steffen
> 
> > Wido
> > 
> >> Best,
> >> Martin
> 
> 
> -- 
> Klinik-Service Neubrandenburg GmbH
> Allendestr. 30, 17036 Neubrandenburg
> Amtsgericht Neubrandenburg, HRB 2457
> Managing Director: Gudrun Kappich
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



