Hi, I'm not running multiple active MDS daemons (1 active & 7 standby).
I know about debug_mds 20; is it the only log you need to diagnose bugs?

On 16/10/2018 18:32, Sergey Malinin wrote:
> Are you running multiple active MDS daemons?
> On the MDS host, issue "ceph daemon mds.X config set debug_mds 20" for maximum logging verbosity.
>
>> On 16.10.2018, at 19:23, Florent B <florent@xxxxxxxxxxx> wrote:
>>
>> Hi,
>>
>> A few months ago I sent a message to this list about a problem with a
>> Ceph + Dovecot setup.
>>
>> The bug disappeared and I never followed up on the thread.
>>
>> Now the bug has come back (up-to-date Luminous cluster + up-to-date
>> Dovecot + up-to-date Debian Stretch).
>>
>> I know how to reproduce it, but it seems closely tied to this user's
>> Dovecot data (a few GB) and to the file locking system (the bug occurs
>> when I set the locking method to "fcntl" or "flock" in Dovecot, but not
>> with "dotlock").
>>
>> It ends with an unresponsive MDS (a 100% CPU hang; failover switches to
>> another MDS, which then also sits at 100% CPU). I can't even use the
>> admin socket while the MDS is hung.
>>
>> I would like to know *exactly* what information you need to investigate
>> this bug (which commands, when, and how to report large log files...).
>>
>> Thank you.
>>
>> Florent
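
For reference, the debug-capture steps discussed above look roughly like
this (a sketch only: "mds.0" and the log path are placeholders for your
daemon name, and debug_ms 1 is the extra verbosity commonly requested
alongside debug_mds, not something asked for in this thread):

    # Raise MDS logging to maximum verbosity via the admin socket
    # (only works while the daemon still responds):
    ceph daemon mds.0 config set debug_mds 20
    ceph daemon mds.0 config set debug_ms 1

    # Alternatively, inject the settings from any host with a working
    # cluster connection:
    ceph tell mds.0 injectargs '--debug_mds 20 --debug_ms 1'

    # Once the MDS spins at 100% CPU and the admin socket no longer
    # answers, a thread backtrace from gdb is the usual fallback to
    # attach to a bug report:
    gdb -p $(pidof ceph-mds) -batch -ex 'thread apply all bt'

    # The verbose log itself goes to the daemon's log file, e.g.:
    #   /var/log/ceph/ceph-mds.0.log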
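
And the Dovecot knob that toggles the reproduction, as described above (a
sketch; whether this lives in dovecot.conf or a conf.d include depends on
the install, and it is assumed here that the index-file lock_method
setting is the one being switched):

    # Index-file locking method. Per this report, "fcntl" and "flock"
    # trigger the MDS hang on CephFS; "dotlock" does not.
    lock_method = fcntl    # alternatives: flock, dotlock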