You just need to change your rule from

    step chooseleaf firstn 0 type osd

to

    step chooseleaf firstn 0 type host

There will be data movement, as CRUSH will want to move about half of the
objects to the new host. There will also be new data written, since going
from size 1 to size 2 creates a second copy of every object. As far as I
know, a deep scrub won't happen until the next scheduled time. The time to
do all of this depends on your disk speed, network speed, CPU and RAM
capacity, as well as the number of backfill processes configured, the
priority of the backfill process, how active your disks are, and how much
data you have stored in the cluster. In short... it depends. (Sketches of
the full rule change and the workflow to apply it follow the quoted
message below.)

On Mon, Mar 23, 2015 at 4:30 PM, Georgios Dimitrakakis <giorgis@xxxxxxxxxxxx> wrote:
> Hi all!
>
> I had a Ceph cluster with 10 OSDs, all of them in one node.
>
> Since the cluster was built from the beginning with just one OSD node,
> the CRUSH map defaulted to replicating across OSDs.
>
> Here is the relevant part of my CRUSH map:
>
> # rules
> rule replicated_ruleset {
>         ruleset 0
>         type replicated
>         min_size 1
>         max_size 10
>         step take default
>         step chooseleaf firstn 0 type osd
>         step emit
> }
>
> # end crush map
>
> I have added a new node with 10 more identical OSDs, so there are now
> two OSD nodes in total.
>
> I have changed the replication factor to 2 on all pools, and I would
> like to make sure that each copy is always kept on a different node.
>
> In order to do so, do I have to change the CRUSH map?
>
> Which part should I change?
>
> After modifying the CRUSH map, what procedure will take place before
> the cluster is ready again?
>
> Is it going to start re-balancing and moving data around? Will a
> deep scrub follow?
>
> Does the time of the procedure depend on anything other than the
> amount of data and the available connection (bandwidth)?
>
> Looking forward to your answers!
>
> All the best,
>
> George
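For reference, here is the rule from the quoted map with that single change
applied; only the chooseleaf line differs:

    # rules
    rule replicated_ruleset {
            ruleset 0
            type replicated
            min_size 1
            max_size 10
            step take default
            # pick each replica from a different host bucket, so the
            # two copies can never land on OSDs in the same node
            step chooseleaf firstn 0 type host
            step emit
    }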
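To actually apply the edit, the usual round trip is to pull the compiled
map, decompile it, edit the text, then recompile and inject it. A minimal
sketch, assuming the placeholder file names crush.bin, crush.txt and
crush.new:

    # pull the compiled CRUSH map from the cluster
    ceph osd getcrushmap -o crush.bin

    # decompile it to editable text
    crushtool -d crush.bin -o crush.txt

    # edit crush.txt: change "type osd" to "type host" in the rule,
    # then recompile and inject the new map
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new

As soon as the new map is injected the rebalance starts; you can watch its
progress with "ceph -s" or "ceph -w".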
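On the backfill knobs mentioned above: the number and priority of backfill
operations can be adjusted at runtime. The values below are only examples
to illustrate the mechanism, not recommendations for your cluster:

    # limit concurrent backfills per OSD to reduce client impact
    ceph tell osd.* injectargs '--osd-max-backfills 1'

    # lower the priority of recovery/backfill ops relative to client I/O
    ceph tell osd.* injectargs '--osd-recovery-op-priority 1'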