CRUSH Map Adjustment for Node Replication

Hi all!

I had a Ceph cluster with 10 OSDs, all of them on a single node.

Since the cluster was built from the beginning with just one OSD node, the CRUSH map defaulted to replicating across OSDs.

Here is the relevant part of my CRUSH map:


# rules
rule replicated_ruleset {
	ruleset 0
	type replicated
	min_size 1
	max_size 10
	step take default
	step chooseleaf firstn 0 type osd
	step emit
}

# end crush map


I have now added a second node with 10 more identical OSDs, so the cluster has two OSD nodes in total.

I have changed the replication factor (size) to 2 on all pools, and I would like to make sure that each copy is always kept on a different node.

In order to do so, do I have to change the CRUSH map?

Which part should I change?
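
My guess is that only the chooseleaf step needs to change, so that copies are placed on distinct hosts instead of distinct OSDs. Something like the following (untested, just my reading of the docs, and assuming the new node shows up as a host bucket under the default root in "ceph osd tree"):

# rules
rule replicated_ruleset {
	ruleset 0
	type replicated
	min_size 1
	max_size 10
	step take default
	# pick leaves from distinct hosts instead of distinct OSDs
	step chooseleaf firstn 0 type host
	step emit
}

Is that the right change, or is there more to it?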


After modifying the CRUSH map what procedure will take place before the cluster is ready again?

Is it going to start re-balancing and moving data around? Will a deep-scrub follow?

Does the duration of the procedure depend on anything other than the amount of data and the available bandwidth?
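
For reference, the procedure I was planning to follow to edit and re-inject the map is roughly the standard one from the documentation (file names are arbitrary); please tell me if I am missing a step:

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt (change the chooseleaf step as above)
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin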


Looking forward to your answers!


All the best,


George




