Re: Best practices for extending a Ceph cluster with minimal client impact and data movement


 



> On 8 August 2016 at 16:45, Martin Palma <martin@xxxxxxxx> wrote:
> 
> 
> Hi all,
> 
> we are in the process of expanding our cluster and I would like to
> know if there are some best practices in doing so.
> 
> Our current cluster is composed as follows:
> - 195 OSDs (14 Storage Nodes)
> - 3 Monitors
> - Total capacity 620 TB
> - Used 360 TB
> 
> We will expand the cluster by another 14 Storage Nodes and 2 Monitor
> nodes, so we are doubling the current deployment:
> 
> - OSDs: 195 --> 390
> - Total capacity: 620 TB --> 1250 TB
> 
> During the expansion we would like to minimize the client impact and
> data movement. Any suggestions?
> 

There are a few routes you can take; I would suggest that you:

- set max backfills (osd_max_backfills) to 1
- set max recovery (osd_recovery_max_active) to 1, see the example below
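
For example (a quick sketch, assuming a running cluster and the usual option names; adjust if your release differs):

$ ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'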

Now, add the OSDs to the cluster, but NOT to the CRUSHMap.
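
One way to keep the new OSDs out of the CRUSHMap while they come online (assuming your release honours this option) is to set, in ceph.conf on the new nodes, before starting them:

[osd]
# do not let new OSDs add themselves to the CRUSH map on startup
osd crush update on start = false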

When all the OSDs are online, inject a new CRUSHMap where you add the new OSDs to the data placement.
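
If you do not already have an edited map ready, the usual way to build one (a sketch, assuming the standard crushtool workflow) is:

$ ceph osd getcrushmap -o crushmap.bin
$ crushtool -d crushmap.bin -o crushmap.txt
  (edit crushmap.txt: add the new hosts and OSDs under the right buckets, with their weights)
$ crushtool -c crushmap.txt -o crushmap.new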

$ ceph osd setcrushmap -i <new crushmap>

The OSDs will now start to migrate data, but this is throttled by the max recovery and backfill settings.
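
You can keep an eye on the backfill while it runs, for example with:

$ ceph -s
$ ceph -w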

Wido

> Best,
> Martin


