Hi Manuel,
We had a similar problem: for a two-step CRUSH selection rule, the
balancer kept proposing upmaps that were invalid:
step take root-disk
step choose indep 3 type pod
step choose indep 3 type rack
step chooseleaf indep 1 type osd
step emit
https://tracker.ceph.com/issues/45439 (see the first comment by Josh Durgin)
This was with 14.2.8; I'm not sure whether recent improvements to the
balancer have fixed cases like this. At the moment we aren't using the
balancer on this cluster.
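For reference, checking and stopping the balancer module is just:

  ceph balancer status   # shows the active mode and whether it is enabled
  ceph balancer off      # stop it from generating new upmap items

As far as I know, turning it off only stops new items from being created;
any pg_upmap_items already in the OSD map stay in place until removed.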
Andras
On 12/14/20 7:48 AM, Manuel Lausch wrote:
The ceph balancer sets upmap items which violate my CRUSH rule.
The rule:
rule cslivebapfirst {
        id 0
        type replicated
        min_size 2
        max_size 4
        step take csliveeubap-u01dc
        step chooseleaf firstn 2 type room
        step emit
        step take csliveeubs-u01dc
        step chooseleaf firstn 2 type room
        step emit
}
My intention is that the first two replicas are stored in the datacenter
"csliveeubap-u01dc" and the next two replicas in the datacenter
"csliveeubs-u01dc".
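For what it's worth, the raw CRUSH placement (before any upmap exceptions)
can be checked with something like the following, assuming pool size 4 and
rule id 0:

  ceph osd getcrushmap -o crushmap.bin
  crushtool -i crushmap.bin --test --rule 0 --num-rep 4 --show-mappings

That only shows what CRUSH itself computes; pg_upmap_items are applied on
top of the CRUSH result when the PG mapping is built, which is where the
violation comes from.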
The cluster has 49152 PGs, and 665 of them have at least 3 replicas in
one datacenter, which is not expected!
One example is PG 3.96e. The acting OSDs are, in this order:
504 -> DC: csliveeubap-u01dc, room: csliveeubap-u01r03
1968 -> DC: csliveeubap-u01dc, room: csliveeubap-u01r01
420 -> DC: csliveeubap-u01dc, room: csliveeubap-u01r02
1945 -> DC: csliveeubs-u01dc, room: csliveeubs-u01r01
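For reference, the mapping and each OSD's location can be looked up with
something like this (the crush_location field of "ceph osd find" should
show the room and datacenter buckets):

  ceph pg map 3.96e                                      # up and acting sets
  for o in 504 1968 420 1945; do ceph osd find $o; done  # crush_location per OSD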
This PG has one upmap item:
ceph osd dump | grep 3.96e
3.96e pg_upmap_items 3.96e [2013,420]
OSD 2013 is in the DC: csliveeubs-u01dc
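If I read the pair correctly, [2013,420] means the replica CRUSH places on
OSD 2013 (csliveeubs-u01dc) gets remapped onto OSD 420 (csliveeubap-u01dc),
which is exactly how the third copy ends up in the first datacenter.
Removing the item again should be just:

  ceph osd rm-pg-upmap-items 3.96e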
I checked this by hand with ceph osd pg-upmap-items.
If I try to set two replicas in one room, I get an appropriate error in
the mon log and nothing happens. But setting a replica to the other DC
unfortunately worked.
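For example, something like re-applying the balancer's own item by hand
goes through without any complaint from the mons:

  ceph osd pg-upmap-items 3.96e 2013 420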
I would suggest this is an ugly bug. What do you think?
ceph version 14.2.11 (f7fdb2f52131f54b891a2ec99d8205561242cdaf)
nautilus (stable)
Manuel
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx