I have a VM on an OSD node; it can reach the host and the other nodes via the macvtap interface used by both host and guest. I just ran a simple bonnie++ test and everything seems fine. Yesterday, however, the dovecot process apparently caused problems (I only use CephFS for an archive namespace; the inbox is on RBD on SSD, and the filesystem metadata is also on SSD).

How can I recover from such a lock-up? In a similar situation with an nfs-ganesha mount I have the option of doing a umount -l, and clients recover quickly without any issues. Having to reset the VM is not really an option. What is the best way to resolve this?

Ceph cluster: 14.2.11 (the VM has 14.2.16).

I have nothing special in my ceph.conf, just these two settings in the [mds] section:

    mds bal fragment size max = 120000
    # maybe for nfs-ganesha problems?
    # http://docs.ceph.com/docs/master/cephfs/eviction/
    #mds_session_blacklist_on_timeout = false
    #mds_session_blacklist_on_evict = false
    mds_cache_memory_limit = 17179860387

All running:

    CentOS Linux release 7.9.2009 (Core)
    Linux mail04 3.10.0-1160.6.1.el7.x86_64 #1 SMP Tue Nov 17 13:59:11 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
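For reference, here is a minimal sketch of the recovery sequence I would expect to correspond to the nfs-ganesha umount -l workflow, assuming the VM uses the kernel CephFS client and the lock-up is because the MDS evicted and blacklisted it. The MDS rank, blacklist address/nonce, mount point, monitor names and client name below are all placeholders, not values from my setup:

    # On an admin node: check the client sessions and whether the VM's
    # client address ended up on the OSD blacklist after eviction.
    ceph tell mds.0 client ls
    ceph osd blacklist ls

    # Remove the VM's blacklist entry (address/nonce here is only an example).
    ceph osd blacklist rm 192.168.10.44:0/1234567890

    # Inside the VM: lazily detach the hung mount so dovecot stops blocking
    # on it, then remount the archive path (device/options are placeholders).
    umount -l /mnt/archive
    mount -t ceph mon01,mon02,mon03:/ /mnt/archive \
        -o name=mail,secretfile=/etc/ceph/mail.secret

If the commented-out mds_session_blacklist_on_timeout / mds_session_blacklist_on_evict options from the linked eviction doc were set to false, evicted clients would not be blacklisted at all; the doc notes that FUSE clients then also need client_reconnect_stale = true to rejoin on their own, which might avoid the manual blacklist removal entirely, but I have not tested that here.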