Re: Questions on CRUSH map

Hi Konstantin,

Thank you for looking into my question.

I am trying to understand how to set up CRUSH hierarchies and write
rules for different failure domains. I am particularly confused by the
'step take' and 'step choose|chooseleaf' steps, which I believe are the
key to defining a failure domain in a CRUSH rule.

My hypothetical cluster consists of 3 racks with 2 hosts each. In every
rack, one host has 3 SSD-based OSDs and the other has 3 HDD-based OSDs.
I would like to create two rules: one that uses SSDs only and another
that uses HDDs only, both with a rack-level failure domain.
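
In plain text, the hierarchy I have in mind looks like this:

  row a
      rack a1
          host a1-1  (osd.0,  osd.1,  osd.2   - ssd)
          host a1-2  (osd.3,  osd.4,  osd.5   - hdd)
      rack a2
          host a2-1  (osd.6,  osd.7,  osd.8   - ssd)
          host a2-2  (osd.9,  osd.10, osd.11  - hdd)
      rack a3
          host a3-1  (osd.12, osd.13, osd.14  - ssd)
          host a3-2  (osd.15, osd.16, osd.17  - hdd)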

I have attached a diagram that may help to explain my setup. The
following is my CRUSH map configuration (with all typos fixed) for
review:

device 0 osd.0 class ssd
device 1 osd.1 class ssd
device 2 osd.2 class ssd
device 3 osd.3 class hdd
device 4 osd.4 class hdd
device 5 osd.5 class hdd
device 6 osd.6 class ssd
device 7 osd.7 class ssd
device 8 osd.8 class ssd
device 9 osd.9 class hdd
device 10 osd.10 class hdd
device 11 osd.11 class hdd
device 12 osd.12 class ssd
device 13 osd.13 class ssd
device 14 osd.14 class ssd
device 15 osd.15 class hdd
device 16 osd.16 class hdd
device 17 osd.17 class hdd

  host a1-1 {
      id -1
      alg straw
      hash 0
      item osd.0 weight 1.00
      item osd.1 weight 1.00
      item osd.2 weight 1.00
  }

  host a1-2 {
      id -2
      alg straw
      hash 0
      item osd.3 weight 1.00
      item osd.4 weight 1.00
      item osd.5 weight 1.00
  }

  host a2-1 {
      id -3
      alg straw
      hash 0
      item osd.6 weight 1.00
      item osd.7 weight 1.00
      item osd.8 weight 1.00
  }

  host a2-2 {
      id -4
      alg straw
      hash 0
      item osd.9 weight 1.00
      item osd.10 weight 1.00
      item osd.11 weight 1.00
  }

  host a3-1 {
      id -5
      alg straw
      hash 0
      item osd.12 weight 1.00
      item osd.13 weight 1.00
      item osd.14 weight 1.00
  }

  host a3-2 {
      id -6
      alg straw
      hash 0
      item osd.15 weight 1.00
      item osd.16 weight 1.00
      item osd.17 weight 1.00
  }

  rack a1 {
      id -7
      alg straw
      hash 0
      item a1-1 weight 3.0
      item a1-2 weight 3.0
  }

  rack a2 {
      id -8
      alg straw
      hash 0
      item a2-1 weight 3.0
      item a2-2 weight 3.0
  }

  rack a3 {
      id -9
      alg straw
      hash 0
      item a3-1 weight 3.0
      item a3-2 weight 3.0
  }

  row a {
      id -10
      alg straw
      hash 0
      item a1 weight 6.0
      item a2 weight 6.0
      item a3 weight 6.0
  }

  rule ssd {
      id 1
      type replicated
      min_size 2
      max_size 11
      step take a class ssd
      step chooseleaf firstn 0 type rack
      step emit
  }

  rule hdd {
      id 2
      type replicated
      min_size 2
      max_size 11
      step take a class hdd
      step chooseleaf firstn 0 type rack
      step emit
  }


Are the two rules correct?
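
In case it helps, here is how I was planning to sanity-check the map
myself, assuming I save the decompiled text above as crush.txt (the
file name is just my own choice):

  # compile the text map into a binary map
  $ crushtool -c crush.txt -o crush.bin
  # simulate placements for the ssd (id 1) and hdd (id 2) rules, 3 replicas
  $ crushtool -i crush.bin --test --rule 1 --num-rep 3 --show-mappings
  $ crushtool -i crush.bin --test --rule 2 --num-rep 3 --show-mappings

I am not sure that eyeballing the mappings is enough to confirm the
rack-level separation, so a second pair of eyes would be much
appreciated.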

Regards,
Cody
On Sun, Aug 19, 2018 at 11:55 PM Konstantin Shalygin <k0ste@xxxxxxxx> wrote:
>
> > Hi everyone,
> >
> > I am new to Ceph and am trying to test my understanding of the CRUSH
> > map. Attached is a hypothetical cluster diagram with 3 racks. In each
> > rack, the first host runs 3 SSD-based OSDs and the second runs 3
> > HDD-based OSDs.
> >
> > My goal is to create two rules that separate SSD and HDD performance
> > domains (by using device class) and both rules should use a *rack*
> > level failure domain.
>
>
> Please describe, in plain language, what host configuration you
> actually have and what drive usage pattern you want to end up with.
>
> Obviously you just want to separate your SSD/HDD load on a per-pool
> basis, but I need more information to be sure.
>
>
>
>
> k

Attachment: ceph-crush-layout.jpg
Description: JPEG image
