Re: [ceph-users] Re: usable size for replicated pool with custom rule in pacific dashboard

[ Moved from ceph-users ]

Hey Ernesto,

I happened across this thread while clearing a backlog of ceph-users
emails and it caught my eye. I'm a bit worried about these checks,
particularly as I want to start pushing support for stretch mode
farther up our stack (beyond just the CLI commands Rook uses).
The stretch mode CRUSH rule shown in the documentation is a little
different from this user's rule, but it's quite similar, and the one
he's written here is a plausible alternative.
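For reference, the rule the stretch mode docs suggest looks roughly like
this (quoting from memory; site1/site2 stand in for whatever the two
data centers are actually named):

rule stretch_rule {
     id 1
     min_size 1
     max_size 10
     type replicated
     step take site1
     step chooseleaf firstn 2 type host
     step emit
     step take site2
     step chooseleaf firstn 2 type host
     step emit
}

That version pins two replicas to each site with explicit take/emit
passes instead of a single "choose firstn 0 type room" pass, so it has
no choose step over the room/datacenter type at all; I'd be curious how
the same usable-size check treats it.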

Is the failure domain check you cite implemented in the dashboard
logic? Can you point me at it?
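For what it's worth, my naive mental model of a check like that is the
sketch below: walk the rule's steps (in the shape "ceph osd crush rule
dump" emits them), treat the type named in the first choose/chooseleaf
step as the failure domain, count the buckets of that type, and call
that count the usable size. To be clear, this is just my guess at the
shape of the check, not the actual dashboard code, and the names are
made up.

# Hypothetical sketch of a "usable size" check; not the dashboard's code.
def guess_usable_size(rule_steps, crush_nodes):
    failure_domain = None
    for step in rule_steps:
        # take the type of the first choose/chooseleaf step as the
        # failure domain, e.g. 'room' in the user's rule
        if step.get('op', '').startswith(('choose_', 'chooseleaf_')):
            failure_domain = step.get('type')
            break
    if failure_domain is None:
        return None  # no choose step, nothing to constrain on
    # usable size == number of buckets of the failure-domain type
    return sum(1 for n in crush_nodes if n.get('type') == failure_domain)

if __name__ == '__main__':
    steps = [
        {'op': 'take', 'item_name': 'default'},
        {'op': 'choose_firstn', 'num': 0, 'type': 'room'},
        {'op': 'chooseleaf_firstn', 'num': 2, 'type': 'host'},
        {'op': 'emit'},
    ]
    rooms_and_hosts = [{'type': 'room'}] * 2 + [{'type': 'host'}] * 4
    print(guess_usable_size(steps, rooms_and_hosts))  # -> 2

If it's doing something roughly like that, it would explain the
"maximum 2" the user sees: the first choose step is over rooms, there
are only two of them, and the check never accounts for the following
"chooseleaf firstn 2 type host" step, which lets the rule place up to
four copies.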
-Greg

On Thu, Sep 9, 2021 at 4:25 AM Ernesto Puerta <epuertat@xxxxxxxxxx> wrote:
>
> Hi Francois,
>
> I'm not an expert on CRUSH rule internals, but I checked the code and
> it assumes that the failure domain (the first choose/chooseleaf step) is
> "room": since there are only 2 rooms vs. 3 replicas, it doesn't let you
> create a pool with a rule that might not work optimally (keep in mind
> that the Dashboard tries to perform some extra validations compared to
> the Ceph CLI).
>
> Kind Regards,
> Ernesto
>
>
> On Thu, Sep 9, 2021 at 12:29 PM Francois Legrand <fleg@xxxxxxxxxxxxxx>
> wrote:
>
> > Hi all,
> >
> > I have a test Ceph cluster with 4 OSD servers, each containing 3 OSDs.
> >
> > The crushmap uses 2 rooms with 2 servers in each room.
> >
> > We use replica 3 for pools.
> >
> > I have the following custom CRUSH rule to ensure that there is at least
> > one copy of the data in each room.
> >
> > rule replicated3over2rooms {
> >      id 1
> >      type replicated
> >      min_size 3
> >      max_size 4
> >      step take default
> >      step choose firstn 0 type room
> >      step chooseleaf firstn 2 type host
> >      step emit
> > }
> >
> > Everything was working well in nautilus/centos7 (I could create pools
> > using the dashboard and my custom rule).
> >
> > I upgraded to pacific/ubuntu 20.04 in containers with cephadm.
> >
> > Now, I cannot create a new pool with replicated3over2rooms using the
> > dashboard!
> >
> > If I choose Pool type = replicated, Replicated size = 3, Crush ruleset =
> > replicated3over2rooms
> >
> > The dashboard says:
> >
> > Minimum: 3
> > Maximum: 2
> > The size specified is out of range. A value from 3 to 2 is usable.
> >
> > And inspecting the replicated3over2rooms rule in the dashboard shows
> > that its parameters are
> >
> > max_size 4
> > min_size 3
> > rule_id 1
> > usable_size 2
> >
> > Where does that usable_size come from? How can I correct it?
> >
> > If I run the command line
> >
> > ceph osd pool create test 16 replicated replicated3over2rooms 3
> >
> > it works!
> >
> > Thanks.
> >
> > F.
> >
> >
> >
> > _______________________________________________
> > ceph-users mailing list -- ceph-users@xxxxxxx
> > To unsubscribe send an email to ceph-users-leave@xxxxxxx
> >
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx
>

_______________________________________________
Dev mailing list -- dev@xxxxxxx
To unsubscribe send an email to dev-leave@xxxxxxx



