On 11/18/21 13:20, David Tinker wrote:
> I just grepped all the OSD pod logs for error and warn and nothing comes up:
> # k logs -n rook-ceph rook-ceph-osd-10-659549cd48-nfqgk | grep -i warn
> etc
> I am assuming that would bring back something if any of them were unhappy.
Your issue looks similar to one reported in another thread last week
(subject: "pg inactive+remapped").
What Ceph version are you running?
I don't know if enabling debugging on osd.7 would reveal something.
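If you want to try it, a minimal sketch of raising the debug level on osd.7 at runtime (the subsystem names and levels are standard Ceph ones; this assumes you can reach the `ceph` CLI, e.g. via the rook-ceph toolbox pod):

```shell
# Raise osd.7's debug logging at runtime (no restart needed).
ceph tell osd.7 config set debug_osd 10/10
ceph tell osd.7 config set debug_ms 1

# ...watch the osd.7 log for the stuck PG, then revert to defaults:
ceph tell osd.7 config set debug_osd 1/5
ceph tell osd.7 config set debug_ms 0/0
```

Remember to revert afterwards; high debug levels generate a lot of log volume.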
Maybe recovery can be triggered by moving the primary to another OSD with
pg upmap. Check your failure domain to see which OSD would be suitable.
Gr. Stefan
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx