> On 11 August 2016 at 10:18, Eugen Block <eblock@xxxxxx> wrote:
>
> Thanks for the really quick response!
>
> > Warning! These are not your regular log files.
>
> Thanks for the warning!
>
> > You shouldn't have to worry about that. The MONs should compact and
> > rotate those logs themselves.
>
> I believe the compaction works fine, but these large LOG files just
> grow until the mon is restarted. Is there no way to limit their size to
> a desired value, or anything similar?
>

That's not good. That shouldn't happen. The monitor has to trim these logs as well.

How big is your mon store?

$ du -sh /var/lib/ceph/mon/*

> > What version of Ceph are you running exactly?
>
> ceph@node1:~/ceph-deploy> ceph --version
> ceph version 0.94.6-75
>

0.94.7 is already out, so it might be worth upgrading, although the release notes don't mention anything about this issue.

> > What is the output of ceph -s?
>
> ceph@node1:~/ceph-deploy> ceph -s
>     cluster 655cb05a-435a-41ba-83d9-8549f7c36167
>      health HEALTH_OK
>      monmap e7: 3 mons at
>             {mon1=192.168.160.15:6789/0,mon2=192.168.160.17:6789/0,mon3=192.168.160.16:6789/0}
>             election epoch 242, quorum 0,1,2 mon1,mon2,mon3
>      osdmap e2377: 19 osds: 19 up, 19 in
>       pgmap v3791457: 4336 pgs, 14 pools, 1551 GB data, 234 kobjects
>             3223 GB used, 4929 GB / 8153 GB avail
>                 4336 active+clean
>   client io 0 B/s rd, 72112 B/s wr, 7 op/s
>

Ok, that's good. Monitors don't trim the logs when the cluster isn't healthy, but yours is.
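
If the LOG files keep growing even though the cluster is healthy, you can also
ask a running monitor to compact its store on the fly instead of restarting it.
A rough sketch, assuming a monitor id of mon1 and the default data path (adjust
both to your setup):

  # size of that monitor's store, LOG files included
  $ du -sh /var/lib/ceph/mon/ceph-mon1/store.db

  # trigger an online compaction of that monitor's store
  $ ceph tell mon.mon1 compact

There is also the mon_compact_on_start option, but that only takes effect when
a monitor starts, which is exactly the restart dependency you are trying to
get rid of.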

Wido

>
> Quoting Wido den Hollander <wido@xxxxxxxx>:
>
> >> On 11 August 2016 at 9:56, Eugen Block <eblock@xxxxxx> wrote:
> >>
> >> Hi list,
> >>
> >> we have a working cluster based on Hammer with 4 nodes, 19 OSDs and 3 MONs.
> >> Now, after a couple of weeks, we noticed that we're running out of disk
> >> space on one of the nodes in /var.
> >> Similar to [1], there are two large LOG files in
> >> /var/lib/ceph/mon/ceph-d/store.db/ and I already figured out that they are
> >> only trimmed when the respective MON is restarted. But the MONs are not
> >> restarted regularly, so the log files can grow for months and fill up
> >> the file system.
> >>
> >
> > Warning! These are not your regular log files. They are binary logs
> > of LevelDB which are mandatory for the MONs to work!
> >
> >> I was thinking about adding another file in /etc/logrotate.d/ to
> >> trigger a monitor restart once a week. But I'm not sure if it's
> >> recommended to restart all MONs at the same time, which could happen
> >> if someone started logrotate manually.
> >> So my question is: how do you manage this, and how is it supposed
> >> to be handled? I'd really appreciate any insights!
> >>
> > You shouldn't have to worry about that. The MONs should compact and
> > rotate those logs themselves.
> >
> > They compact their store on start, so that works for you, but they
> > should also do this while running.
> >
> > What version of Ceph are you running exactly?
> >
> > What is the output of ceph -s? MONs usually only compact when the
> > cluster is healthy.
> >
> > Wido
> >
> >> Regards,
> >> Eugen
> >>
> >> [1]
> >> http://ceph-users.ceph.narkive.com/PBL3kuhq/large-log-like-files-on-monitor
> >>
> >> --
> >> Eugen Block                          voice : +49-40-559 51 75
> >> NDE Netzdesign und -entwicklung AG   fax   : +49-40-559 51 77
> >> Postfach 61 03 15
> >> D-22423 Hamburg                      e-mail: eblock@xxxxxx
> >>
> >> Vorsitzende des Aufsichtsrates: Angelika Mozdzen
> >> Sitz und Registergericht: Hamburg, HRB 90934
> >> Vorstand: Jens-U. Mozdzen
> >> USt-IdNr. DE 814 013 983
> >>
> >> _______________________________________________
> >> ceph-users mailing list
> >> ceph-users@xxxxxxxxxxxxxx
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
>
> --
> Eugen Block                          voice : +49-40-559 51 75
> NDE Netzdesign und -entwicklung AG   fax   : +49-40-559 51 77
> Postfach 61 03 15
> D-22423 Hamburg                      e-mail: eblock@xxxxxx
>
> Vorsitzende des Aufsichtsrates: Angelika Mozdzen
> Sitz und Registergericht: Hamburg, HRB 90934
> Vorstand: Jens-U. Mozdzen
> USt-IdNr. DE 814 013 983
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com