Re: Advice on meaty CRUSH map update

On Tue, 12 Jul 2016 12:50:26 +0200 (CEST) Wido den Hollander wrote:

> 
> > On 12 July 2016 at 12:35, Simon Murray <simon.murray@xxxxxxxxxxxxxxxxx> wrote:
> > 
> > 
> > Hi all.
> > 
> > I'm about to perform a rather large reorganization of our cluster and
> > thought I'd get some insight from the community before going any further.
> > 
> > The current state we have (logically) is two trees, one for spinning rust,
> > one for SSD.  Chassis are the current failure domain, and all chassis
> > types are organized under a single rack.
> > 
> > The machines have been physically relocated across 4 racks and I have
> > re-written the crush map to organize it so the chassis are located in the
> > correct racks.  I intend to also change the rules so that
> > the failure domain is now at the rack level so we can tolerate more severe
> > power and switching failure.
> > 
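For reference, a rack-level rule like you describe would look roughly like
this in the decompiled CRUSH map (the root bucket name and ruleset number
are placeholders for your own tree):

  rule replicated_rack_leaf {
          ruleset 3
          type replicated
          min_size 1
          max_size 10
          # assumes the spinning-rust tree is rooted at "default"
          step take default
          # pick one leaf (OSD) from each of N distinct racks
          step chooseleaf firstn 0 type rack
          step emit
  }
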
It might be just me, but all my production gear always has 2 PSUs, hanging
off 2 independent PDUs and of course 2 switches.

The notion of a rack for me is just something that happens to other
people. ^o^

> > Question is, what is the best way to do this?
> > 
> > 1) The pragmatist in me says commit the new crush map, let things
> > rebalance, then apply a new rule set to each pool and again let things
> > rebalance.
> > 
> 
> Yes, that would be easy to do. Also, you can always revert to the old situation if you need to.
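
A rough sketch of that staged approach (file names and the ruleset number
here are just examples); keeping the old compiled map around is what makes
the revert trivial:

  # grab and decompile the current map, keep the binary for a quick revert
  ceph osd getcrushmap -o crushmap.old
  crushtool -d crushmap.old -o crushmap.txt

  # edit crushmap.txt: move the chassis under their racks and add the
  # rack-level rules alongside the existing ones

  # compile and sanity-check the new map before injecting it
  crushtool -c crushmap.txt -o crushmap.new
  crushtool -i crushmap.new --test --rule 3 --num-rep 3 --show-bad-mappings

  # inject it and let the rebalance run
  ceph osd setcrushmap -i crushmap.new

  # if things go sideways: ceph osd setcrushmap -i crushmap.old
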
> 
> > 2) It would however mean much less scheduled maintenance and keep customers
> > happier if I could just do everything as a big bang, e.g. rename the
> > existing rule sets to replicated_rack_leaf(_ssd), change the chooseleaf
> > option to type rack, and hope for the best :)
> > 
> > Is the latter safe or just plain crazy?
> 
> A big change can be easier for Ceph. You simply limit the backfill and recovery to one at a time and let it run.
> 
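Something along these lines (runtime settings via injectargs, so they are
lost again when an OSD restarts):

  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
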
> You could also create new rulesets which do the mapping in a different way and switch per pool to the new rulesets.
> 
That's what I would most likely do as well.
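
I.e. leave the old rules in the map, add the new replicated_rack_leaf(_ssd)
rules next to them, and move the pools over one at a time, roughly (pool
names and ruleset numbers made up here):

  ceph osd pool set rbd crush_ruleset 3
  ceph osd pool set rbd-ssd crush_ruleset 4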

Christian

> Wido
> 
> > 
> > Si
> > 
> > -- 
> > DataCentred Limited registered in England and Wales no. 05611763
> 


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


