Re: Crush map & rule

(I wrote this freehand, so test before applying.)
If your goal is to have a replication factor of 3 within one row and to be
able to switch over to the secondary row, then you need 2 rules and you
change the crush rule on the pool side:

rule primary_location {
(...)
   step take primary class ssd
   step chooseleaf firstn 0 type host
   step emit
}

rule secondary_loc {
(...)
  step take secondary ...
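
For reference, the same two rules can also be created with the CLI instead of
compiling the map by hand, and the switch is then just a pool setting (the
pool name "mypool" below is only an example):

   # one replicated rule per row, restricted to the ssd device class
   ceph osd crush rule create-replicated primary_location primary host ssd
   ceph osd crush rule create-replicated secondary_loc secondary host ssd

   # normal operation: place the pool inside the primary row
   ceph osd pool set mypool crush_rule primary_location

   # failover: move the pool to the secondary row
   ceph osd pool set mypool crush_rule secondary_loc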

If the aim is to have 2 replicas spread across the 2 rows (not recommended):

rule row_repli {
(...)
  step take default class ssd
  step chooseleaf firstn 0 type row
  step emit
}
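
Whatever the variant, it can be checked offline before it touches any data,
for example with the usual crushtool round-trip (the rule id 1 below is just
a placeholder, use the id of your new rule):

   # export and decompile the current map
   ceph osd getcrushmap -o crush.bin
   crushtool -d crush.bin -o crush.txt

   # add the rule to crush.txt, recompile and simulate placements
   crushtool -c crush.txt -o crush.new
   crushtool --test -i crush.new --rule 1 --num-rep 2 --show-mappings

   # inject only once the mappings look right
   ceph osd setcrushmap -i crush.new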

If the aim is to distribute the replicas over the 2 rows (for example 2*2 or
2*3 replicas):

rule split_rows {
(...)
   type replicated
   step take primary class ssd
   step chooseleaf firstn 2 type host
   step emit
   step take secondary class ssd
   step chooseleaf firstn 2 type host
   step emit
}
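
The pool size then has to match the layout (4 for 2*2, 6 for 2*3), for
example with a hypothetical pool "mypool":

   # 2 replicas per row, 4 in total
   ceph osd pool set mypool size 4
   # with min_size 2, I/O continues even if a whole row is down
   ceph osd pool set mypool min_size 2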

As far as erasure coding is concerned, I really don't see what is reasonably
possible with this architecture.
________________________________________________________

Best regards,

*David CASIER*

________________________________________________________




On Thu, 9 Nov 2023 at 08:48, Albert Shih <Albert.Shih@xxxxxxxx> wrote:

> On 08/11/2023 at 19:29:19+0100, David C. wrote:
> Hi David.
>
> >
> > What would be the number of replicas (in total and on each row) and their
> > distribution on the tree?
>
> Well “inside” a row that would be 3 in replica mode.
>
> Between rows... well, two ;-)
>
> Besides understanding how to write a rule a little more complex than the
> example in the official documentation, there is another purpose: to have
> a procedure for changing the hardware.
>
> For example, if «row primary» holds only old bare-metal servers and I add
> some new servers to the Ceph cluster, I want to be able to copy everything
> from «row primary» to «row secondary».
>
> Regards
>
> >
> >
> > On Wed, 8 Nov 2023 at 18:45, Albert Shih <Albert.Shih@xxxxxxxx> wrote:
> >
> >     Hi everyone,
> >
> >     I'm a total newbie with Ceph, so sorry if I'm asking a stupid
> >     question.
> >
> >     I'm trying to understand how the crush map & rules work; my goal is
> >     to have two groups of 3 servers, so I'm using a “row” bucket:
> >
> >     ID   CLASS  WEIGHT    TYPE NAME                 STATUS  REWEIGHT  PRI-AFF
> >      -1         59.38367  root default
> >     -15         59.38367      zone City
> >     -17         29.69183          row primary
> >      -3          9.89728              host server1
> >       0    ssd   3.49309                  osd.0         up   1.00000  1.00000
> >       1    ssd   1.74660                  osd.1         up   1.00000  1.00000
> >       2    ssd   1.74660                  osd.2         up   1.00000  1.00000
> >       3    ssd   2.91100                  osd.3         up   1.00000  1.00000
> >      -5          9.89728              host server2
> >       4    ssd   1.74660                  osd.4         up   1.00000  1.00000
> >       5    ssd   1.74660                  osd.5         up   1.00000  1.00000
> >       6    ssd   2.91100                  osd.6         up   1.00000  1.00000
> >       7    ssd   3.49309                  osd.7         up   1.00000  1.00000
> >      -7          9.89728              host server3
> >       8    ssd   3.49309                  osd.8         up   1.00000  1.00000
> >       9    ssd   1.74660                  osd.9         up   1.00000  1.00000
> >      10    ssd   2.91100                  osd.10        up   1.00000  1.00000
> >      11    ssd   1.74660                  osd.11        up   1.00000  1.00000
> >     -19         29.69183          row secondary
> >      -9          9.89728              host server4
> >      12    ssd   1.74660                  osd.12        up   1.00000  1.00000
> >      13    ssd   1.74660                  osd.13        up   1.00000  1.00000
> >      14    ssd   3.49309                  osd.14        up   1.00000  1.00000
> >      15    ssd   2.91100                  osd.15        up   1.00000  1.00000
> >     -11          9.89728              host server5
> >      16    ssd   1.74660                  osd.16        up   1.00000  1.00000
> >      17    ssd   1.74660                  osd.17        up   1.00000  1.00000
> >      18    ssd   3.49309                  osd.18        up   1.00000  1.00000
> >      19    ssd   2.91100                  osd.19        up   1.00000  1.00000
> >     -13          9.89728              host server6
> >      20    ssd   1.74660                  osd.20        up   1.00000  1.00000
> >      21    ssd   1.74660                  osd.21        up   1.00000  1.00000
> >      22    ssd   2.91100                  osd.22        up   1.00000  1.00000
> >     and I want to create some rules; first I would like to have
> >
> >       a rule «replica» (over host) inside the «row» primary
> >       a rule «erasure» (over host) inside the «row» primary
> >
> >     but also two crush rules between primary/secondary, meaning I would
> >     like to have a replica (with only 1 copy of course) of a pool from
> >     «row» primary to «row» secondary.
> >
> >     How can I achieve that?
> >
> >     Regards
> >
> >
> >
> >     --
> >     Albert SHIH 🦫 🐸
> >     Wed, 08 Nov 2023 18:37:54 CET
> >     _______________________________________________
> >     ceph-users mailing list -- ceph-users@xxxxxxx
> >     To unsubscribe send an email to ceph-users-leave@xxxxxxx
> >
> --
> Albert SHIH 🦫 🐸
> Observatoire de Paris
> France
> Heure locale/Local time:
> Thu, 09 Nov 2023 08:39:41 CET
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



