Re: Adding node efficient data move.

> When adding a node and incrementing the CRUSH weights like this, will I
> get the most efficient data transfer to the 4th node?
> 
> sudo -u ceph ceph osd crush reweight osd.23 1 
> sudo -u ceph ceph osd crush reweight osd.24 1 
> sudo -u ceph ceph osd crush reweight osd.25 1 
> sudo -u ceph ceph osd crush reweight osd.26 1 
> sudo -u ceph ceph osd crush reweight osd.27 1 
> sudo -u ceph ceph osd crush reweight osd.28 1 
> sudo -u ceph ceph osd crush reweight osd.29 1 
> 
> And then after recovery
> 
> sudo -u ceph ceph osd crush reweight osd.23 2

I'm not sure if you're asking for the most *efficient* way to add capacity, or the least *impactful*.

The most *efficient* approach would be to have the new OSDs start out at their full CRUSH weight, so that data only has to move once.  However, the overhead of that much movement can be quite significant, especially if I read correctly that you're expanding the size of the cluster by 33%.
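
For illustration only, a minimal sketch of that approach, assuming the full per-OSD CRUSH weight is 3.48169 (the -t value I use below) and the new OSDs are the ones from your example:

for id in $(seq 23 29); do
    sudo -u ceph ceph osd crush reweight osd.${id} 3.48169
done

With the weights set in one step, CRUSH computes the final placement immediately and each PG moves straight to its eventual home, rather than shuffling through intermediate placements.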

What I prefer to do (on replicated clusters) is to use this script:

https://github.com/cernceph/ceph-scripts/blob/master/tools/ceph-gentle-reweight

I set the CRUSH weights of the new OSDs to 0, then run the script like:

ceph-gentle-reweight -o <list of OSDs> -b 10 -d 0.01 -t 3.48169 -i 10 -r | tee -a /var/tmp/upweight.log
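
To be concrete about the zero-weight step, a sketch (again using the OSD IDs from your example):

for id in $(seq 23 29); do
    sudo -u ceph ceph osd crush reweight osd.${id} 0
done

As I read its options, the script then walks the CRUSH weights up from 0 toward the -t target in small -d sized steps, throttling itself while backfill is in flight; check the script itself for the exact semantics.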

Note that I disable measure_latency() out of paranoia.  This is less *efficient* in that some data ends up being moved more than once and the elapsed time to completion is longer, but it has the advantage of lower impact.  It also lets you quickly stop data movement if a drive/HBA/server/network issue causes trouble, and the small steps mean that each one completes quickly.
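
If something does go sideways mid-rebalance, the quickest brake is the standard cluster flags (stock Ceph, nothing to do with the script):

ceph osd set norebalance
ceph osd set nobackfill
# investigate, then clear them:
ceph osd unset nobackfill
ceph osd unset norebalance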

I also set

osd_max_backfills = 1
osd_recovery_max_active = 1
osd_recovery_op_priority = 1
osd_recovery_max_single_start = 1
osd_scrub_during_recovery = false

to additionally limit the impact of data movement on client operations.
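
In case it helps, one way to apply those at runtime without restarting OSDs (exact syntax varies by release; newer releases also have "ceph config set osd ..."):

ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1 --osd_recovery_op_priority 1 --osd_recovery_max_single_start 1 --osd_scrub_during_recovery false'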

YMMV. 


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


