Rack awareness with different hardware layouts

Hello all, 

At the moment we have a scenario on which I would like your opinion.

Scenario: 
Currently we have a Ceph environment with 1 rack of hardware; this rack contains a couple of OSD nodes with 4T disks. In a few weeks' time we will deploy 2 more racks of OSD nodes; these nodes have 6T disks.

Short overview: 
rack1: 4T OSD
rack2: 6T OSD
rack3: 6T OSD

At the moment we are playing around with the idea of using the CRUSH map to make Ceph rack aware and to ensure that data is replicated between racks. However, from the documentation I gathered that when you enforce replication across buckets, the usable capacity is limited by the smallest bucket. My understanding: if we enforce the objects (size=3) to be replicated to all three racks, then the moment the rack with the 4T OSDs is full we cannot store any more data.

Is this assumption correct?
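
To make that concrete, a rough back-of-the-envelope calculation based on the weights in the map below (ignoring full ratios and overhead, and assuming our reading is right):

      rack1: 3 hosts x 4T = 12T
      rack2: 3 hosts x 6T = 18T
      rack3: 3 hosts x 6T = 18T

With one replica forced into every rack, the usable capacity would be bounded by the smallest rack: roughly 12T of object data out of 48T raw.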

The idea we are currently playing with: 

- Create 3 rack buckets
- Create 1 room bucket with these 3 racks
- Create a ruleset to create 2 object replicas across the two 6T rack buckets
- Create a ruleset to create 1 object replica over the room

This would result in 3 replicas of the object. 

Our idea is to have a CRUSH map and rule that look like this: 
device 0 osd.0
device 1 osd.1
device 2 osd.2
device 3 osd.3
device 4 osd.4
device 5 osd.5
device 6 osd.6
device 7 osd.7
device 8 osd.8
device 9 osd.9


      host r01-cn01 {
              id -1
              alg straw
              hash 0
              item osd.0 weight 4.00
      }

      host r01-cn02 {
              id -2
              alg straw
              hash 0
              item osd.1 weight 4.00
      }

      host r01-cn03 {
              id -3
              alg straw
              hash 0
              item osd.3 weight 4.00
      }

      host r02-cn04 {
              id -4
              alg straw
              hash 0
              item osd.4 weight 6.00
      }

      host r02-cn05 {
              id -5
              alg straw
              hash 0
              item osd.5 weight 6.00
      }

      host r02-cn06 {
              id -6
              alg straw
              hash 0
              item osd.6 weight 6.00
      }

      host r03-cn07 {
              id -7
              alg straw
              hash 0
              item osd.7 weight 6.00
      }

      host r03-cn08 {
              id -8
              alg straw
              hash 0
              item osd.8 weight 6.00
      }

      host r03-cn09 {
              id -9
              alg straw
              hash 0
              item osd.9 weight 6.00
      }

      rack r01 {
              id -10
              alg straw
              hash 0
              item r01-cn01 weight 4.00
              item r01-cn02 weight 4.00
              item r01-cn03 weight 4.00
      }

      rack r02 {
              id -11
              alg straw
              hash 0
              item r02-cn04 weight 6.00
              item r02-cn05 weight 6.00
              item r02-cn06 weight 6.00
      }      

      rack r03 {
              id -12
              alg straw
              hash 0
              item r03-cn07 weight 6.00
              item r03-cn08 weight 6.00
              item r03-cn09 weight 6.00
      }

      room 123 {
              id -13
              alg straw
              hash 0
              item r01 weight 12.00
              item r02 weight 18.00
              item r03 weight 18.00
      }

      root 6t {
              id -14
              alg straw
              hash 0
              item r02 weight 18.00
              item r03 weight 18.00
      }

      rule one {
              ruleset 1
              type replicated
              min_size 3
              max_size 3
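              # first pass: pick 1 OSD from 1 rack anywhere in the room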
              step take 123
              step chooseleaf firstn 1 type rack
              step emit
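              # second pass: pick 2 OSDs from 2 different racks under the 6T-only root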
              step take 6t
              step chooseleaf firstn 2 type rack
              step emit
      }
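
For completeness, this is roughly how we intend to sanity-check the map offline with crushtool before injecting it (file names and the pool name are just placeholders on our side):

      crushtool -c crushmap.txt -o crushmap.bin                                  # compile the edited text map
      crushtool --test -i crushmap.bin --rule 1 --num-rep 3 --show-statistics    # check that inputs map to 3 OSDs
      crushtool --test -i crushmap.bin --rule 1 --num-rep 3 --show-utilization   # see how data would spread over the OSDs
      ceph osd setcrushmap -i crushmap.bin                                       # inject the new map
      ceph osd pool set <pool> crush_ruleset 1                                   # point a pool at ruleset 1
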
Is this the right approach, and would it cause any limitations in terms of performance or usability? Do you have any suggestions? 

Rogier Dikkes
Systeem Programmeur Hadoop & HPC Cloud
SURFsara | Science Park 140 | 1098 XG Amsterdam

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
