Re: Adding OSD Nodes and Changing Crushmap

That is the correct modification to change the failure domain from osd to host.  You can make the change from osd to host in your crush map any time after you add the 2 new storage nodes (it is important to have at least as many hosts as your cluster's replica size before changing the failure domain to host).
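For reference, applying that edit looks roughly like the following; this is a minimal sketch that assumes you edit the decompiled map by hand, and the file names (crushmap.bin, crushmap.txt, crushmap-new.bin) are just illustrative:

# export and decompile the current crush map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# edit crushmap.txt (osd -> host in the rule), then recompile and inject it;
# injecting the new map is what triggers the data movement
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin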

Since both actions will cause backfilling and data movement, doing them close together means you only really move the data once.  I would probably set nobackfill and norecover, add all of the hosts, update the crush map, and only then unset the flags.
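In command form that sequence is roughly (a sketch using the standard cluster flags):

# pause backfill and recovery while both changes are made
ceph osd set nobackfill
ceph osd set norecover

# ... add the new OSD nodes/disks and inject the updated crush map here ...

# then let the single round of data movement run
ceph osd unset nobackfill
ceph osd unset norecover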


David Turner | Cloud Operations Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2760 | Mobile: 385.224.2943


If you are not the intended recipient of this message or received it erroneously, please notify the sender and delete it, together with any attachments, and be advised that any dissemination or copying of this message is prohibited.



From: ceph-users [ceph-users-bounces@xxxxxxxxxxxxxx] on behalf of Mike Jacobacci [mikej@xxxxxxxxxx]
Sent: Wednesday, October 05, 2016 11:37 AM
To: ceph-users@xxxxxxxx
Subject: [ceph-users] Adding OSD Nodes and Changing Crushmap

Hi,

I just wanted to get a sanity check if possible. I apologize if my questions are stupid; I am still new to Ceph and feeling uneasy about adding new nodes.

 Right now we have one OSD node with 10 OSD disks (plus 2 disks for caching) and this week we are going to add two more nodes with the same hardware.

I want to change the replication from OSD to Host. Do I just need to change the crushmap to the following?

OLD:
# rules
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step choose firstn 0 type osd
    step emit
}

NEW:
# rules
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
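One way to sanity-check the edited rule before injecting it (assuming it has been recompiled to a file such as crushmap-new.bin, as in the sketch above) is crushtool's test mode, which prints the OSDs each sample PG would map to so you can confirm replicas land on different hosts; the rule and replica numbers below are examples:

# map sample PGs through rule 0 with 3 replicas and print the resulting OSD sets
crushtool -i crushmap-new.bin --test --rule 0 --num-rep 3 --show-mappings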

My last question: after adding the new nodes/disks to the cluster, I assume re-balancing will start as soon as they are added... Do I need to wait for the data to rebalance before changing the crushmap to replicate across hosts?
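Either way, the data movement can be watched with the standard status commands (nothing cluster-specific assumed here):

# overall cluster health, including recovery/backfill progress
ceph -s

# follow the cluster log live while PGs backfill
ceph -w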

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
