I think it was mentioned elsewhere in this thread that there are
limitations to what upmap can do, especially in situations involving
significant CRUSH map changes. It can't violate CRUSH rules (the mons
enforce this), and if the same OSD shows up multiple times in a
backfill, upmap can't deal with it.

Creeping back up is a bit odd; if you have the balancer off, is there
any chance a PG split is somehow also going on? What does 'ceph osd
pool ls detail' say?

Josh

On Tue, Dec 17, 2024 at 10:06 AM Janek Bevendorff
<janek.bevendorff@xxxxxxxxxxxxx> wrote:
>
> Something's not quite right yet. I got the remapped PGs down from 4000
> to around 1300, but there it stops. When I restart the process, I can
> get it down to around 280, but there it stops and creeps back up
> afterwards.
>
> I have a bunch of these messages in the output:
>
> WARNING: pg 100.3d53: conflicting mapping 1068->1051 found when trying
> to map 187->1068
>
> There are maybe around 70-80 of them (definitely not 280 or 1300). Any
> idea how I can fix that? The messages all point to the same pool (our
> largest one; I did not change the failure domain for this pool).
>
> > Ah, yes, we ran into that invalid JSON output in
> > https://github.com/digitalocean/ceph_exporter as well. I have a patch
> > I wrote for ceph_exporter that I can port over to pgremapper (it does
> > something similar to what your patch does).
>
> That'd be nice!
>
>
> Janek
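
For reference, a quick way to check both of those from the CLI (a minimal
sketch of what to look for, not output from this particular cluster):

    # A pool that is mid-split shows pg_num and pgp_num (or pg_num_target /
    # pgp_num_target on newer releases) out of step with each other.
    ceph osd pool ls detail

    # Confirm the balancer really is off; an active upmap balancer will keep
    # rewriting pg_upmap_items entries behind pgremapper's back.
    ceph balancer status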
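
On the conflicting-mapping warnings: one way to see where such a conflict
might come from, assuming it is caused by an existing pg_upmap_items entry
on that PG (the PG id and OSD numbers below are taken from the warning
quoted above, and whether this applies depends on how pgremapper built its
plan):

    # Look for an existing upmap exception on the PG from the warning; an
    # entry such as "pg_upmap_items 100.3d53 [1068,1051]" would explain why
    # a new 187->1068 mapping conflicts.
    ceph osd dump | grep 'pg_upmap_items 100.3d53'

    # If that entry is no longer wanted, it can be removed so a fresh
    # mapping can be applied (note: this makes the PG remap again).
    ceph osd rm-pg-upmap-items 100.3d53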