Hello fellow Ceph users and developers,

A few days ago I updated one of our small clusters (three nodes) to kernel 4.1.15. Today CephFS got stuck on one of the nodes. "ceph -s" reports:

    mds0: Behind on trimming (155/30)

Restarting all MDS servers didn't help.

All three cluster nodes are running Hammer 0.94.5 on CentOS 6 with kernel 4.1.15. Each node runs 7 OSD daemons, a monitor, and an MDS server (I know it's better to run those daemons separately, but we were tight on budget here and the hardware should be sufficient).

My questions:

1) Is there a known issue with Hammer 0.94.5 or kernel 4.1.15 that could lead to CephFS hangs?
2) What can I do to debug the cause of this hang?
3) Is there a way to recover without hard-resetting the node with the hung CephFS mount?

If I can provide more information, please let me know. I'd really appreciate any help.

With best regards,

nik

--
-------------------------------------
Ing. Nikola CIPRICH
LinuxBox.cz, s.r.o.
28.rijna 168, 709 00 Ostrava

tel.:   +420 591 166 214
fax:    +420 596 621 273
mobil:  +420 777 093 799
www.linuxbox.cz

mobil servis: +420 737 238 656
email servis: servis@xxxxxxxxxxx
-------------------------------------
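For question 2, a few diagnostic starting points may help. This is a sketch, not a definitive procedure: the MDS daemon name ("node1") and the mount point ("/mnt/cephfs") are assumptions to adjust for your setup. The "155/30" in the warning is the number of journal segments versus the mds_log_max_segments limit, i.e. the MDS is not trimming its journal fast enough.

```shell
# Overall cluster and MDS state
ceph -s
ceph mds dump

# On the MDS node: inspect the active MDS via its admin socket
# (mds name "node1" is an assumption; substitute your own)
ceph daemon mds.node1 perf dump
ceph daemon mds.node1 session ls

# On the client with the hung mount: list in-flight MDS and OSD
# requests seen by the kernel client (requires debugfs mounted);
# a request stuck here for a long time points at the hang
cat /sys/kernel/debug/ceph/*/mdsc
cat /sys/kernel/debug/ceph/*/osdc

# Possible alternative to a hard reset: lazy-unmount the hung
# mount so the rest of the system can proceed (path is an assumption)
umount -l /mnt/cephfs
```

If the kernel client shows requests stuck in mdsc, capturing that output before restarting anything gives the MDS developers something concrete to look at.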
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com