Re: Best way to change bucket hierarchy

Yes, that makes total sense.

Thanks,

George


> On Jun 4, 2020, at 2:17 AM, Frank Schilder <frans@xxxxxx> wrote:
> 
>> Yes and No. This will cause many CRUSH map updates, whereas a manual edit
>> is only a single change.
>> 
>> I would do:
>> 
>> $ ceph osd getcrushmap -o crushmap
> 
> Well, that's a yes and a no as well.
> 
> If you are experienced and edit crush maps on a regular basis, you can go that way. I would still enclose the change in a norebalance setting. If you are not experienced, you are likely to shoot your cluster in the foot. In particular, adding and moving buckets is no fun this way: you need to be careful about which IDs you assign, and there are many options to choose from, with documentation targeted at experienced cephers.
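> 
> For illustration, a bucket entry in the decompiled map looks roughly like this (names and weights here are made up); every "id" line must stay unique across the whole map, which is easy to get wrong by hand:
> 
>     host node1 {
>             id -3            # do not change unnecessarily
>             id -4 class hdd  # do not change unnecessarily
>             alg straw2
>             hash 0  # rjenkins1
>             item osd.0 weight 3.638
>             item osd.1 weight 3.638
>     }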
> 
> CLI commands will prevent a lot of stupid typos, errors and forgotten mandatory lines. I learned that the hard way and decided to use a direct edit only when absolutely necessary. A couple of extra peerings is a low-cost operation compared with trying to find the stupid typo that just killed all your pools while angry users stand next to you.
> 
> My recommendation would be to save the original crush map, apply the commands, and look at the changes these commands make. That's a great way to learn how to do it right. And in general, better safe than sorry.
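> 
> A minimal sketch of that save-and-compare loop (file names are arbitrary, and the add-bucket command is just an example change):
> 
> $ ceph osd getcrushmap -o before.bin
> $ crushtool -d before.bin -o before.txt
> $ ceph osd crush add-bucket chassis1 chassis
> $ ceph osd getcrushmap -o after.bin
> $ crushtool -d after.bin -o after.txt
> $ diff before.txt after.txt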
> 
> Best regards,
> =================
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
> 
> ________________________________________
> From: Wido den Hollander <wido@xxxxxxxx>
> Sent: 04 June 2020 08:50:16
> To: Frank Schilder; Kyriazis, George; ceph-users
> Subject: Re: Re: Best way to change bucket hierarchy
> 
> On 6/4/20 12:24 AM, Frank Schilder wrote:
>> You can use the command line without editing the crush map by hand. Look at the documentation for commands like
>> 
>> ceph osd crush add-bucket ...
>> ceph osd crush move ...
>> 
>> Before starting, run "ceph osd set norebalance" and unset it after you are happy with the crush tree. Let everything peer. You should see misplaced objects and remapped PGs, but no degraded objects or PGs.
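>> 
>> A minimal sketch of the whole sequence (bucket and host names are placeholders for your own):
>> 
>> $ ceph osd set norebalance
>> $ ceph osd crush add-bucket chassis1 chassis
>> $ ceph osd crush move chassis1 root=default
>> $ ceph osd crush move host1 chassis=chassis1
>> $ ceph osd unset norebalance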
>> 
>> Do this only when the cluster is HEALTH_OK, otherwise things can get really complicated.
>> 
> 
> Yes and No. This will cause many CRUSH map updates, whereas a manual edit
> is only a single change.
> 
> I would do:
> 
> $ ceph osd getcrushmap -o crushmap
> $ cp crushmap crushmap.backup
> $ crushtool -d crushmap -o crushmap.txt
> $ vi crushmap.txt (now make your changes)
> $ crushtool -c crushmap.txt -o crushmap.new
> $ crushtool -i crushmap.new --tree (check if all OK)
> $ crushtool -i crushmap.new --test --rule 0 --num-rep 3 --show-mappings (verify the PG mappings look sane)
> 
> If all is good:
> 
> $ ceph osd setcrushmap -i crushmap.new
> 
> If all goes bad, simply revert to your old crushmap.
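> 
> That is, a sketch of the revert using the backup taken above:
> 
> $ ceph osd setcrushmap -i crushmap.backup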
> 
> Wido
> 
>> Best regards,
>> =================
>> Frank Schilder
>> AIT Risø Campus
>> Bygning 109, rum S14
>> 
>> ________________________________________
>> From: Kyriazis, George <george.kyriazis@xxxxxxxxx>
>> Sent: 03 June 2020 22:45:11
>> To: ceph-users
>> Subject: Best way to change bucket hierarchy
>> 
>> Hello,
>> 
>> I have a live Ceph cluster, and I need to modify its bucket hierarchy.  I am currently using the default crush rule (i.e. keep each replica on a different host).  My need is to add a “chassis” level and keep replicas separated at the chassis level.
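>> 
>> (If I read the documentation right, the end state would be a rule created with something like the command below; the rule name is my own invention and I have not tried this:)
>> 
>> $ ceph osd crush rule create-replicated replicated_chassis default chassis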
>> 
>> From what I read in the documentation, I would have to edit the crush file manually.  However, this sounds kinda scary for a live cluster.
>> 
>> Are there any “best known methods” to achieve that goal without messing things up?
>> 
>> In my current scenario, I have one host per chassis, and I am planning on later adding nodes where there would be more than one host per chassis.  It looks like, “in theory”, there wouldn’t be any need for data movement after the crush map changes.  Will reality match theory?  Anything else I need to watch out for?
>> 
>> Thank you!
>> 
>> George
>> 
>> 

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx