I had to make a cronjob to trigger compaction on the MONs as well. Ancient version, though.

Jan

> On 11 Aug 2016, at 10:09, Wido den Hollander <wido@xxxxxxxx> wrote:
>
>
>> On 11 August 2016 at 9:56, Eugen Block <eblock@xxxxxx> wrote:
>>
>>
>> Hi list,
>>
>> we have a working cluster based on Hammer with 4 nodes, 19 OSDs and 3 MONs.
>> Now, after a couple of weeks, we noticed that we're running out of disk
>> space in /var on one of the nodes.
>> Similar to [1], there are two large LOG files in
>> /var/lib/ceph/mon/ceph-d/store.db/, and I already figured out that they
>> are cleaned up when the respective MON is restarted. But the MONs are
>> not restarted regularly, so these files can grow for months and fill up
>> the file system.
>>
>
> Warning! These are not your regular log files. They are binary logs of LevelDB which are mandatory for the MONs to work!
>
>> I was thinking about adding another file in /etc/logrotate.d/ to
>> trigger a monitor restart once a week. But I'm not sure whether it's
>> recommended to restart all MONs at the same time, which could happen
>> if someone started logrotate manually.
>> So my question is: how do you guys manage this, and how is it supposed
>> to be handled? I'd really appreciate any insights!
>>
> You shouldn't have to worry about that. The MONs should compact and rotate those logs themselves.
>
> They compact their store on start, so that works for you, but they should also do this while running.
>
> What version of Ceph are you running exactly?
>
> What is the output of ceph -s? MONs usually only compact when the cluster is healthy.
>
> Wido
>
>> Regards,
>> Eugen
>>
>> [1]
>> http://ceph-users.ceph.narkive.com/PBL3kuhq/large-log-like-files-on-monitor
>>
>> --
>> Eugen Block                            voice : +49-40-559 51 75
>> NDE Netzdesign und -entwicklung AG     fax   : +49-40-559 51 77
>> Postfach 61 03 15
>> D-22423 Hamburg                        e-mail: eblock@xxxxxx
>>
>> Chairwoman of the Supervisory Board: Angelika Mozdzen
>> Registered office and court of registration: Hamburg, HRB 90934
>> Executive Board: Jens-U. Mozdzen
>> VAT ID No. DE 814 013 983

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
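
A minimal sketch of the cron-based workaround Jan mentions at the top, i.e. periodically asking each monitor to compact its LevelDB store. The file name, the weekly schedule and the assumption that the mon ID matches the short hostname are illustrative, not taken from the thread:

    #!/bin/sh
    # Hypothetical /etc/cron.weekly/ceph-mon-compact, installed on each MON node.
    # Ask the local monitor to compact its LevelDB store so the files under
    # /var/lib/ceph/mon/ceph-*/store.db/ don't keep growing between restarts.
    # Assumes the mon ID equals the short hostname; adjust if yours differs.
    ceph tell mon.$(hostname -s) compact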
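
Related to Wido's remark that the MONs compact their store on start: if one did want to force a full compaction at every monitor start, in addition to whatever compaction already happens at startup, there is the mon compact on start option in ceph.conf. Wido's point is that a healthy cluster shouldn't need it; the snippet below is only a sketch of that knob:

    [mon]
    # Force a full compaction of the monitor's LevelDB store on every start.
    # Default is false; as noted above, restarting a MON already shrinks the
    # store, this just makes the compaction explicit and complete.
    mon compact on start = true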