On 28/11/2022 23:21, Adrien Georget wrote:
Hi Xiubo,
I did a journal reset today, followed by a session reset, and then the MDS
was able to start without switching to read-only mode.
An MDS scrub was also useful to repair some bad inode backtraces.
Thanks again for your help with this issue!
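For reference, the recovery sequence described above could look roughly like the following, assuming a filesystem named "cephfs" with a single active MDS (rank 0); the filesystem name and rank are illustrative, not taken from this thread, and a journal export should always be taken before any destructive step:

```shell
# Back up the journal before any destructive operation
cephfs-journal-tool --rank=cephfs:0 journal export backup.bin

# Journal reset, then session reset
cephfs-journal-tool --rank=cephfs:0 journal reset
cephfs-table-tool cephfs:all reset session

# Once the MDS is up again, scrub to repair bad inode backtraces
ceph tell mds.cephfs:0 scrub start / recursive,repair
```

These tools discard metadata journal events, so they are a last resort after the MDS cannot replay its journal, as in this incident.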
Cool!
- Xiubo
Cheers,
Adrien
On 26/11/2022 05:08, Xiubo Li wrote:
On 25/11/2022 16:25, Adrien Georget wrote:
Hi Xiubo,
Thanks for your analysis.
Is there anything I can do to put CephFS back into a healthy state? Or
should I wait for a patch to fix that bug?
Please try trimming the journals and unmounting all the clients first,
and then see whether you can bring the MDSs back up.
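A sketch of what that might look like in practice; the mount point, filesystem name, and session ID below are placeholders for illustration, not details from this thread:

```shell
# On each client host, unmount CephFS:
umount /mnt/cephfs

# From the cluster, list any lingering client sessions and evict them:
ceph tell mds.cephfs:0 session ls
ceph tell mds.cephfs:0 session evict <session-id>

# Then restart the MDS daemons and watch the filesystem state:
ceph fs status cephfs
```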
- Xiubo
Cheers,
Adrien
On 25/11/2022 06:13, Xiubo Li wrote:
Hi Adrien,
Thank you for your logs.
From your logs I found one bug; I have opened a new tracker
[1] to follow it and raised a Ceph PR [2] to fix it.
For more detail, please see my analysis in the tracker [1].
[1] https://tracker.ceph.com/issues/58082
[2] https://github.com/ceph/ceph/pull/49048
Thanks
- Xiubo
On 24/11/2022 16:33, Adrien Georget wrote:
Hi Xiubo,
We did the upgrade in rolling mode as always, with only a few
Kubernetes pods as clients accessing their PVCs on CephFS.
I can reproduce the problem every time I restart the MDS daemon.
You can find the MDS log with debug_mds 25 and debug_ms 1 here:
https://filesender.renater.fr/?s=download&token=4b413a71-480c-4c1a-b80a-7c9984e4decd
(The last timestamp : 2022-11-24T09:18:12.965+0100 7fe02ffe2700 10
mds.0.server force_clients_readonly)
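For anyone reproducing this, the debug levels mentioned above could be set roughly as follows; the daemon name "mds.cephfs:0" is an example, and defaults should be restored once the log is captured:

```shell
# Raise MDS and messenger debug levels cluster-wide for the mds daemons:
ceph config set mds debug_mds 25
ceph config set mds debug_ms 1

# Or inject them at runtime on a specific daemon:
ceph tell mds.cephfs:0 injectargs '--debug_mds 25 --debug_ms 1'

# Restore the defaults afterwards:
ceph config set mds debug_mds 1/5
ceph config set mds debug_ms 0
```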
I couldn't find any errors in the OSD logs; is there anything specific
I should be looking for?
Best,
Adrien
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx