Re: Pool on limited number of OSDs

Hello Wido

Sure, here is a rule:
ceph osd crush rule dump s3_rule
{
    "rule_id": 1,
    "rule_name": "s3_rule",
    "ruleset": 1,
    "type": 1,
    "min_size": 1,
    "max_size": 10,
    "steps": [
        {
            "op": "take",
            "item": -21,
            "item_name": "default~s3"
        },
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "host"
        },
        {
            "op": "emit"
        }
    ]
}
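
For reference, a rule of this shape is typically generated with the
device-class helper; a minimal sketch, assuming the default root and the
host failure domain that both appear in the dump above:

# create a replicated rule restricted to the "s3" device class,
# choosing one OSD per host
ceph osd crush rule create-replicated s3_rule default host s3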

and here is the shadow CRUSH map:
-21    s3  7.09189 root default~s3
-20    s3  7.09189     region lab1~s3
-19    s3  7.09189         room cr1.lab1~s3
-18    s3  7.09189             rack sr1.cr1.lab1~s3
-15    s3  3.53830                 host kw01sv09.sr1.cr1.lab1~s3
 11    s3  3.53830                     osd.11
-17    s3  3.53830                 host kw01sv10.sr1.cr1.lab1~s3
 10    s3  3.53830                     osd.10
-16    s3  0.01529                 host kw01sv11.sr1.cr1.lab1~s3
  0    s3  0.01529                     osd.0
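
The shadow tree above can be reproduced with the shadow view of the CRUSH
tree; a sketch:

# show the per-device-class shadow hierarchies alongside the regular tree
ceph osd crush tree --show-shadow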

Now the status is:
25 pgs degraded, 25 pgs undersized
All of them are from the same pool, which uses 32 PGs - so 7 PGs are
correctly mapped to [0, 10, 11] while the rest sit only on [10, 11].
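
To see exactly which PGs are affected and where they are mapped, something
like the following should work (the pool name is a placeholder):

# list undersized PGs of the pool, including their up/acting OSD sets
ceph pg ls-by-pool <pool-name> undersized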

Jacek

On Wed, 19 Feb 2020 at 07:27, Wido den Hollander <wido@xxxxxxxx> wrote:

>
>
> On 2/18/20 6:56 PM, Jacek Suchenia wrote:
> > Hello
> >
> > I have a cluster (Nautilus 14.2.4) where I'd like to keep one pool on
> > dedicated OSDs. So I set up a rule that covers *3* dedicated OSDs (using
> > device classes) and assigned it to a pool with replication factor *3*. Only
> > 10% of the PGs were assigned and rebalanced, while the rest are stuck in the
> > *undersized* state.
> >
>
> Can you share the rule and some snippets of the CRUSHMap?
>
> Wido
>
> > What mechanism prevents the CRUSH algorithm from assigning the same set of
> > OSDs to all PGs in a pool? How can I control it?
> >
> > Jacek
> >
>


-- 
Jacek Suchenia
jacek.suchenia@xxxxxxxxx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



