Re: correct way to increase the weight of all OSDs from 1 to 3.64

The goal should be to increase the weights in unison, which should prevent 
any actual data movement (modulo some rounding error, perhaps).  At the 
moment that can't be done via the CLI, but you can:

 ceph osd getcrushmap -o /tmp/cm
 crushtool -i /tmp/cm --reweight-item osd.0 3.5 --reweight-item osd.1 3.5 \
   ... -o /tmp/cm2
 crushtool -d /tmp/cm2  # make sure it looks right
 ceph osd setcrushmap -i /tmp/cm2
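
For 52 OSDs, typing all the --reweight-item flags by hand is tedious; a shell 
loop can build the same single command.  This is an untested sketch that 
assumes the OSDs are numbered osd.0 through osd.51 and that the target 
weight is 3.64:

 # grab the current crushmap
 ceph osd getcrushmap -o /tmp/cm
 # build one --reweight-item flag per OSD (assumed IDs 0..51, weight 3.64)
 args=""
 for i in $(seq 0 51); do
   args="$args --reweight-item osd.$i 3.64"
 done
 # apply all the reweights in a single crushtool invocation
 crushtool -i /tmp/cm $args -o /tmp/cm2
 # decompile and sanity-check the new weights before injecting
 crushtool -d /tmp/cm2
 ceph osd setcrushmap -i /tmp/cm2

Doing it in one crushtool run is what keeps the weights moving in unison, so 
the relative weights (and hence the data placement) stay the same.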


On Tue, 4 Mar 2014, Udo Lembke wrote:

> Hi all,
> I started the ceph cluster with a weight of 1 for all OSD disks (4 TB).
> Later I switched to ceph-deploy, and ceph-deploy normally uses a weight of
> 3.64 for these disks, which makes much more sense!
> 
> Now I want to change the weight of all 52 OSDs (on 4 nodes) to 3.64, and
> the question is how to proceed on a production cluster.
> 
> Increase all 52 weights in 0.1 steps, wait for the system to settle, and
> repeat until 3.64 is reached?
> Or modify the crushmap (export, decompile, change, compile, load)
> directly from 1 to 3.64? Is the data still accessible in that case?
> Are OSDs at 76% capacity a problem in this scenario? I have learned that
> ceph really doesn't like full disks ;-)
> 
> Any hints?
> 
> 
> Udo
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



