CRUSH map ruleset for rack-aware PG placement

Hi Amit,

On Mon, 15 Sep 2014, Amit Vijairania wrote:
> Hello!
> 
> In a two (2) rack Ceph cluster, with 15 hosts per rack (10 OSDs per
> host / 150 OSDs per rack), is it possible to create a ruleset for a
> pool such that the Primary and Secondary PGs/replicas are placed in
> one rack and the Tertiary PG/replica is placed in the other rack?
> 
> root standard {
>   id -1 # do not change unnecessarily
>   # weight 734.400
>   alg straw
>   hash 0 # rjenkins1
>   item rack1 weight 367.200
>   item rack2 weight 367.200
> }
> 
> Given there are only two (2) buckets, but three (3) replicas, is it
> even possible?

Yes:

rule myrule {
	ruleset 1
	type replicated
	min_size 1
	max_size 10
	step take standard			# root bucket from your map above
	step choose firstn 2 type rack		# pick both racks
	step chooseleaf firstn 2 type host	# 2 hosts (1 OSD each) per rack
	step emit
}

That will give you 4 OSDs, spread across 2 hosts in each rack.  The pool 
size (replication factor) is 3, so RADOS will just use the first three (2 
hosts in the first rack, 1 host in the second rack).
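
To sanity-check this offline before injecting the map, you can simulate 
placements with crushtool (a rough sketch; the file names and the pool 
name "mypool" below are illustrative):

ceph osd getcrushmap -o crushmap.bin         # grab the compiled map
crushtool -d crushmap.bin -o crushmap.txt    # decompile; add the rule above
crushtool -c crushmap.txt -o crushmap.new    # recompile
crushtool -i crushmap.new --test --rule 1 --num-rep 3 --show-mappings
ceph osd setcrushmap -i crushmap.new         # inject the updated map
ceph osd pool set mypool crush_ruleset 1     # point the pool at ruleset 1

With --num-rep 3, each mapping should list 3 OSDs: the first two in one 
rack and the third in the other.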

sage




> I think the following Giant blueprint is trying to address the scenario
> I described above.  Is it targeted for the Giant release?
> http://wiki.ceph.com/Planning/Blueprints/Giant/crush_extension_for_more_flexible_object_placement
> 
> 
> Regards,
> Amit Vijairania  |  Cisco Systems, Inc.
> --*--