Hi Roland,

Yes, you can. See the mclock documentation here [1]. One thing I can think of is that these 113 PGs may have a common misbehaving OSD (primary or not) with a ridiculous osd_mclock_max_capacity_iops_ssd value set. Restarting the primary and/or adjusting the osd_mclock_max_capacity_iops_ssd value(s) could help in this situation.

Regards,
Frédéric.

[1] https://docs.ceph.com/en/latest/rados/configuration/mclock-config-ref/

----- On 14 Nov 24, at 12:19, Roland Giesler roland@xxxxxxxxxxxxxx wrote:

> On 2024/11/14 11:44, Joachim Kraftmayer wrote:
>> I know of similar behaviour when mclock is active.
>
> For osd.0 I see:
>
> osd.0 basic osd_mclock_max_capacity_iops_ssd 14305.161403
>
> I'm unfamiliar with mclock. Can one tune that to improve the situation?
>
> Roland
>
>>
>> Joachim
>>
>> joachim.kraftmayer@xxxxxxxxx
>>
>> www.clyso.com
>>
>> Hohenzollernstr. 27, 80801 Munich
>>
>> Utting a. A. | HR: Augsburg | HRB: 25866 | USt. ID-Nr.: DE2754306
>>
>> Roland Giesler <roland@xxxxxxxxxxxxxx> wrote on Thu, 14 Nov 2024, 05:40:
>>
>>> On 2024/11/13 21:05, Anthony D'Atri wrote:
>>>> I would think that there was some initial data movement and that it all went back when you reverted. I would not expect a mess.
>>>
>>>   data:
>>>     volumes: 1/1 healthy
>>>     pools:   7 pools, 1586 pgs
>>>     objects: 5.79M objects, 12 TiB
>>>     usage:   24 TiB used, 26 TiB / 50 TiB avail
>>>     pgs:     4161/11720662 objects misplaced (0.036%)
>>>              1463 active+clean
>>>              113  active+clean+remapped
>>>              9    active+clean+scrubbing+deep+repair
>>>              1    active+clean+scrubbing+deep
>>>
>>> I have 113 active+clean+remapped PGs that just stay like that. If I try to out this OSD to stop it, the cluster also never settles. And if I then try to stop it in the GUI, it tells me 73 PGs are still on the OSD...
>>>
>>> Can I force those PGs away from the OSD?
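For reference, the override Frédéric mentions can be inspected and cleared with the standard `ceph config` commands. A rough sketch (osd.0 is just the example OSD from the thread, not necessarily the misbehaving one, and the restart command assumes a cephadm-managed cluster):

```shell
# Show any mclock capacity overrides currently stored in the config database
ceph config dump | grep osd_mclock_max_capacity_iops_ssd

# Check the value a specific OSD is actually using
ceph config show osd.0 osd_mclock_max_capacity_iops_ssd

# Remove a bogus override so the OSD falls back to its default/measured value
ceph config rm osd.0 osd_mclock_max_capacity_iops_ssd

# Restart the OSD so it re-benchmarks its capacity on startup
ceph orch daemon restart osd.0
```

These need to be run against a live cluster, so treat them as a sketch rather than a recipe; on non-cephadm deployments, restart the OSD with your init system instead.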
>>>
>>>>> On Nov 13, 2024, at 12:48 PM, Roland Giesler <roland@xxxxxxxxxxxxxx> wrote:
>>>>>
>>>>> I created a new OSD class and changed the class of an OSD to the new one without taking the OSD out and stopping it first. The new class also has a crush rule and a pool created for it.
>>>>>
>>>>> When I realised my mistake, I reverted to what I had before. However, I suspect that I now have a mess on that OSD.
>>>>>
>>>>> What happened when I did this?
>>>>>
>>>>> _______________________________________________
>>>>> ceph-users mailing list -- ceph-users@xxxxxxx
>>>>> To unsubscribe send an email to ceph-users-leave@xxxxxxx