I've been working on some improvements to our large cluster's space
balancing and noticed that the OSD map sometimes contains strange upmap
entries. Here is an example from a clean cluster (all PGs are active+clean):
{
    "pgid": "1.1cb7",
    ...
    "up": [
        891,
        170,
        1338
    ],
    "acting": [
        891,
        170,
        1338
    ],
    ...
},
with an upmap entry:
pg_upmap_items 1.1cb7 [170,891]
This would make the "up" list [170, 170, 1338], which isn't allowed.
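To make it concrete, here is a quick Python sketch of the substitution as I
understand it (purely illustrative, not Ceph's code; the helper name is mine).
Since both 170 and 891 are already in the up set, the pair produces a
duplicate whichever way it is read:

# Toy model of applying pg_upmap_items pairs to an up set:
# each (from_osd, to_osd) pair substitutes one OSD id for another.
def apply_upmap_items(up, pairs):
    mapping = dict(pairs)
    return [mapping.get(osd, osd) for osd in up]

up = [891, 170, 1338]
for pairs in ([(170, 891)], [(891, 170)]):
    remapped = apply_upmap_items(up, pairs)
    print(pairs, "->", remapped, "duplicate OSD:", len(set(remapped)) < len(up))
# [(170, 891)] -> [891, 891, 1338] duplicate OSD: True
# [(891, 170)] -> [170, 170, 1338] duplicate OSD: True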
So the cluster just seems to ignore this upmap. When I remove the
upmap, nothing changes in the PG state, and I can even re-insert it
(without any effect). Any ideas why this upmap doesn't simply get
rejected/removed?
However, if I try to insert an upmap [170, 892], it gets rejected
correctly (since 891 and 892 are on the same host, which violates the crush rule).
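For comparison, here is a toy validity check (again just a sketch with a
made-up OSD-to-host map, not Ceph's actual logic) of what I would expect to
be caught at insert time: [170, 892] breaks the one-replica-per-host crush
rule, while [170, 891] only produces a duplicate OSD:

# Hypothetical OSD -> host mapping, for illustration only.
hosts = {170: "host-a", 891: "host-b", 892: "host-b", 1338: "host-c"}

def check_up_set(up):
    if len(set(up)) < len(up):
        return "duplicate OSD in up set"
    if len({hosts[o] for o in up}) < len(up):
        return "two replicas on the same host (crush rule violation)"
    return "ok"

up = [891, 170, 1338]
print(check_up_set([892 if o == 170 else o for o in up]))  # crush rule violation -> gets rejected
print(check_up_set([891 if o == 170 else o for o in up]))  # duplicate OSD -> apparently not rejected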
Any insights would be helpful,
Andras