I removed all entries with:
ceph tell mds.$filesystem:0 damage rm $id
so that the cluster was no longer in an error state. However, it didn't take long for new
damage entries to appear, and the cluster was back in the error state.
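
For reference, a minimal sketch of how the entries can be cleared in bulk (assuming jq is
available and $filesystem is set to the filesystem name, as above):

  # list current damage entries and remove each one by id
  for id in $(ceph tell mds.$filesystem:0 damage ls | jq -r '.[].id'); do
      ceph tell mds.$filesystem:0 damage rm $id
  done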
On 10/11/21 10:49, Vadim Bulst wrote:
> ceph tell mds.scfs:0 scrub start / recursive repair force
--
Vadim Bulst
Universität Leipzig / URZ
04109 Leipzig, Augustusplatz 10
phone: +49-341-97-33380
mail: vadim.bulst@xxxxxxxxxxxxxx