Re: The ceph balancer sets upmap items which violate my CRUSH rule

Hi Manuel,

Take a look at this tracker, where I was initially confused by something
similar:

https://tracker.ceph.com/issues/47361

In my case it was a mistake in our CRUSH tree.
Please check whether something similar applies; otherwise I suggest opening a
new bug with all the details.
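
For example, something like this should show how the mons actually see the
tree and the rule (adjust the rule name to yours):

    ceph osd crush tree
    ceph osd crush rule dump cslivebapfirst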

Cheers, Dan



On Mon, Dec 14, 2020, 1:49 PM Manuel Lausch <manuel.lausch@xxxxxxxx> wrote:

> The ceph balancer sets upmap items which violate my CRUSH rule
>
> the rule:
>
> rule cslivebapfirst {
>     id 0
>     type replicated
>     min_size 2
>     max_size 4
>     step take csliveeubap-u01dc
>     step chooseleaf firstn 2 type room
>     step emit
>     step take csliveeubs-u01dc
>     step chooseleaf firstn 2 type room
>     step emit
> }
>
> My intention is that the first two replicas are stored in the
> datacenter "csliveeubap-u01dc" and the next two replicas in the
> datacenter "csliveeubs-u01dc".
>
> The cluster has 49152 PGs, and 665 of them have at least 3 replicas in
> one datacenter, which is not expected!
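>
> A rough sketch of how such PGs could be counted (the JSON key names can
> differ between Ceph releases, so the lookups below may need adjusting):
>
> #!/usr/bin/env python3
> # Sketch: list PGs whose acting set has 3 or more OSDs in one datacenter.
> import json
> import subprocess
> from collections import Counter
>
> def ceph_json(*args):
>     out = subprocess.check_output(("ceph",) + args + ("--format", "json"))
>     return json.loads(out)
>
> # Map each OSD id to its datacenter by walking up the CRUSH tree.
> tree = ceph_json("osd", "tree")
> parent = {}
> for node in tree["nodes"]:
>     for child in node.get("children", []):
>         parent[child] = node
>
> def datacenter_of(osd_id):
>     node = parent.get(osd_id)
>     while node is not None and node.get("type") != "datacenter":
>         node = parent.get(node["id"])
>     return node["name"] if node else None
>
> # 'pg dump pgs_brief' includes the acting set of every PG; the entries
> # sit under different keys (or a bare list) depending on the release.
> dump = ceph_json("pg", "dump", "pgs_brief")
> if isinstance(dump, list):
>     pg_stats = dump
> else:
>     pg_stats = dump.get("pg_stats") or dump.get("pg_map", {}).get("pg_stats", [])
>
> bad = [pg["pgid"] for pg in pg_stats
>        if any(n >= 3 for n in Counter(datacenter_of(o) for o in pg["acting"]).values())]
>
> print(len(bad), "PGs with >= 3 acting replicas in one datacenter")
> print("\n".join(bad))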
>
> One example is PG 3.96e.
> The acting OSDs are, in this order:
> 504 -> DC: csliveeubap-u01dc, room: csliveeubap-u01r03
> 1968 -> DC: csliveeubap-u01dc, room: csliveeubap-u01r01
> 420 -> DC: csliveeubap-u01dc, room: csliveeubap-u01r02
> 1945 -> DC: csliveeubs-u01dc, room: csliveeubs-u01r01
>
> This PG has one upmap item:
> ceph osd dump | grep 3.96e
> pg_upmap_items 3.96e [2013,420]
>
> OSD 2013 is in the DC: csliveeubs-u01dc
>
> I checked this by hand with ceph osd pg-upmap-items:
> if I try to place two replicas in one room, I get an appropriate error
> in the mon log and nothing happens. But setting them to another DC
> unfortunately worked.
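>
> A possible stopgap (untested here) would be to drop the offending entry
> again and pause the balancer until this is understood:
>
> ceph osd rm-pg-upmap-items 3.96e
> ceph balancer off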
>
>
> I would say this is an ugly bug. What do you think?
>
> ceph version 14.2.11 (f7fdb2f52131f54b891a2ec99d8205561242cdaf)
> nautilus (stable)
>
>
> Manuel
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



