Hi Xiubo,
Thanks for your analysis.
Is there anything I can do to put CephFS back into a healthy state? Or
should I wait for the patch that fixes the bug?
Cheers,
Adrien
On 25/11/2022 06:13, Xiubo Li wrote:
Hi Adrien,
Thank you for your logs.
From your logs I found one bug. I have opened a new tracker [1] to
follow it and raised a Ceph PR [2] to fix it.
For more detail, please see my analysis in the tracker [1].
[1] https://tracker.ceph.com/issues/58082
[2] https://github.com/ceph/ceph/pull/49048
Thanks
- Xiubo
On 24/11/2022 16:33, Adrien Georget wrote:
Hi Xiubo,
We did the upgrade in rolling mode as always, with only a few
Kubernetes pods as clients accessing their PVCs on CephFS.
I can reproduce the problem every time I restart the MDS daemon.
You can find the MDS log with debug_mds 25 and debug_ms 1 here:
https://filesender.renater.fr/?s=download&token=4b413a71-480c-4c1a-b80a-7c9984e4decd
(The last timestamp: 2022-11-24T09:18:12.965+0100 7fe02ffe2700 10
mds.0.server force_clients_readonly)
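In case it helps, the debug levels can be raised with something along
these lines (just a sketch using the usual Ceph CLI; replace <id> with
the name of your MDS daemon):

    # raise MDS verbosity for all MDS daemons via the config database
    ceph config set mds debug_mds 25
    ceph config set mds debug_ms 1
    # or inject into the running daemon only
    ceph tell mds.<id> config set debug_mds 25
    ceph tell mds.<id> config set debug_ms 1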
I couldn't find any errors in the OSD logs; is there anything specific
I should be looking for?
Best,
Adrien