On 2024/11/13 21:05, Anthony D'Atri wrote:
I would think that there was some initial data movement and that it all went back when you reverted. I would not expect a mess.
  data:
    volumes: 1/1 healthy
    pools:   7 pools, 1586 pgs
    objects: 5.79M objects, 12 TiB
    usage:   24 TiB used, 26 TiB / 50 TiB avail
    pgs:     4161/11720662 objects misplaced (0.036%)
             1463 active+clean
             113  active+clean+remapped
             9    active+clean+scrubbing+deep+repair
             1    active+clean+scrubbing+deep
I have 113 active+clean+remapped PGs that just stay like that. If I try
to mark this OSD out so I can stop it, the cluster also never settles. And
if I then try to stop it in the GUI, it tells me 73 PGs are still on the OSD...

Can I force those PGs away from the OSD?
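A possible way to investigate, sketched below with standard Ceph CLI commands. This assumes the remapped PGs are being held either by stale pg-upmap entries or by the OSD still carrying CRUSH weight; the `<pgid>` and `<id>` placeholders must be replaced with the real PG and OSD ids from your cluster:

```shell
# List the PGs currently stuck in the remapped state.
ceph pg ls remapped

# Check whether any pg-upmap entries still reference the old mapping;
# stale entries can keep PGs active+clean+remapped indefinitely.
ceph osd dump | grep pg_upmap

# Remove a stale upmap entry for a given PG (substitute the real PG id).
ceph osd rm-pg-upmap-items <pgid>

# Alternatively, drain the OSD by zeroing its CRUSH weight so the
# remaining PGs migrate off it (substitute the real OSD id).
ceph osd crush reweight osd.<id> 0
```

Zeroing the CRUSH weight is stronger than marking the OSD out: it removes the OSD from CRUSH placement entirely, so any PGs still pinned to it should be rescheduled elsewhere.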
On Nov 13, 2024, at 12:48 PM, Roland Giesler <roland@xxxxxxxxxxxxxx> wrote:
I created a new osd class and changed the class of an osd to the new one without taking the osd out and stopping it first. The new class also has a crush rule and a pool created for it.
When I realised my mistake, I reverted to what I had before. However, I suspect that I now have a mess on that osd.
What happened when I did this?
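For reference, the class change and the revert described above would typically be done with the two-step sequence below (a sketch; `osd.<id>` is a placeholder and `newclass`/`hdd` are illustrative class names):

```shell
# An OSD's existing device class must be removed before a new one
# can be assigned.
ceph osd crush rm-device-class osd.<id>
ceph osd crush set-device-class newclass osd.<id>

# Reverting is the same two steps with the original class
# (here "hdd" as an example).
ceph osd crush rm-device-class osd.<id>
ceph osd crush set-device-class hdd osd.<id>
```

Changing the class moves the OSD under a different CRUSH shadow hierarchy, so CRUSH recomputes placements for any rules that select by class; reverting restores the original mappings, which is why the data would normally move back.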
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx