I have clusters that were upgraded to upmap-capable releases, but they
never ran in upmap mode, since those clusters also had jewel clients as
the lowest supported release. If you tried to enable the balancer in
upmap mode, it would tell you to first bump the minimum client release
to at least luminous, and only then allow upmap mode on the balancer.

On Thu, 12 Dec 2024 at 14:37, Matt Vandermeulen <storage@xxxxxxxxxxxx> wrote:
>
> As you discovered, it looks like there are no upmap items in your
> cluster right now. The `ceph osd dump` command will list them, in JSON
> as you show, or you can `grep ^pg_upmap` without JSON as well (same
> output, different format).
>
> I think the balancer would have been enabled by default in Nautilus;
> I'm surprised this only hit you now. You can make sure it's off with
> `ceph balancer off` so that it won't do anything in the future, and
> check its status with `ceph balancer status`.
>
> Thanks,
> Matt
>
>
> On 2024-12-12 08:37, Frank Schilder wrote:
> > Dear all,
> >
> > During our upgrade from Octopus to Pacific, the MGR suddenly started
> > logging messages like this one to audit.log:
> >
> > 2024-12-10T10:30:01.105524+0100 mon.ceph-03 (mon.2) 3004 : audit [INF]
> > from='mgr.424622547 192.168.32.67:0/63' entity='mgr.ceph-03'
> > cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "1.60",
> > "id": [1054, 1125]}]: dispatch
> >
> > Apparently, the balancer got enabled and tried to do something.
> > However, we never enabled pg-upmap on our cluster, because we still
> > have jewel clients from the museum connected. Therefore, I'm pretty
> > certain that all of these upmap requests either failed or are
> > scheduled and pending.
> >
> > To be sure, I would like to confirm that nothing happened. How can I
> > list upmap items and scheduled or pending upmap operations? If there
> > are any, how do I delete them? I really would like to avoid having
> > these requests start hurting a few years from now. I looked at the
> > documentation. Unfortunately, it's the usual disease[1]: commands for
> > setting all sorts of stuff are documented, but commands to query
> > anything seem to be missing.
> >
> > This workaround
> >
> > [root@gnosis osdmaps]# ceph osd dump -f json-pretty | grep upmap
> >     "pg_upmap": [],
> >     "pg_upmap_items": [],
> >
> > indicates nothing is screwed up yet. However, I would really like to
> > know what happened to the MGR commands and where they are now. How
> > do I confirm they went to digital heaven?
> >
> > [1] There are "ceph osd pg-upmap-items : set upmap items" and "ceph
> > osd rm-pg-upmap-items : clear upmap items" commands. Why would anyone
> > ever need a "ceph osd ls-pg-upmap-items"?? I found out that I can
> > write it myself
> > (https://ceph-users.ceph.narkive.com/h7y24SDg/stale-pg-upmap-items-entries-after-pg-increase
> > and
> > https://gitlab.cern.ch/ceph/ceph-scripts/blob/master/tools/upmap/upmap-remapped.py#L102).
> > However, a good API is always symmetric to make it *easy* for users
> > to check and fix screw-ups.
> >
> > Thanks and best regards,
> > =================
> > Frank Schilder
> > AIT Risø Campus
> > Bygning 109, rum S14

--
May the most significant bit of your life be positive.
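
P.S. for anyone who lands on this thread looking for the missing
"ceph osd ls-pg-upmap-items" from Frank's footnote [1]: a minimal
sketch, assuming jq is available to parse the same JSON fields shown
in the `ceph osd dump -f json-pretty` output quoted above. The cleanup
loop is a sketch only, not something to run blindly on a live cluster.

  # List the PGs that currently have upmap exceptions in the osdmap.
  ceph osd dump -f json | jq -r '.pg_upmap_items[].pgid'

  # If stale entries ever need cleaning up, feed each listed PG to the
  # documented rm command.
  ceph osd dump -f json | jq -r '.pg_upmap_items[].pgid' |
  while read -r pgid; do
      ceph osd rm-pg-upmap-items "$pgid"
  done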
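Similarly, the jewel-client gate mentioned at the top can be checked
from the query side before touching the balancer. Both of these are
read-only commands (available since luminous, as far as I know):

  # Which feature releases do the currently connected clients report?
  ceph features

  # What is the minimum client release the osdmap currently requires?
  ceph osd get-require-min-compat-client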